OnCommand Insight 6.3
Hands-on Lab Guide
Dave Collins, Technical Marketing Engineer
August 2012
Contents
1 INTRODUCTION
BEGIN LAB
2 INVENTORY AND ASSURANCE NAVIGATION AND VIEWS
2.1 INVENTORY
Storage Arrays
Grouping
Switches
Paths
Analyze Path Violation
Virtual Machines and Data Store
3 ASSURANCE
3.1 APPLYING POLICIES TO MONITOR CONFIGURATION AND PERFORMANCE
3.2 FIBRE CHANNEL POLICY SETTINGS
3.3 VIOLATIONS BROWSER
3.4 PORT BALANCE VIOLATIONS
3.5 DISK UTILIZATION VIOLATIONS
4 PERFORMANCE
4.1 STORAGE PERFORMANCE
4.2 SAN PERFORMANCE (SWITCH PORT PERFORMANCE)
4.3 CANDIDATES FOR HOST VIRTUALIZATION BASED ON ACTUAL PERFORMANCE
4.4 STORAGE ARRAY PERFORMANCE BASED ON SAN TRAFFIC
4.5 STORAGE TIERING AND ANALYSIS
4.6 SWITCH ISL TRAFFIC VISIBILITY AND OPTIMIZATION
4.7 VIRTUAL MACHINE AND DATA STORE PERFORMANCE: TROUBLESHOOTING END-TO-END PERFORMANCE ISSUES USING "ANALYZE PERFORMANCE"
4.8 VM PERFORMANCE
4.9 APPLICATION AND HOST PERFORMANCE
5 PLANNING TOOLS
5.1 TASK AND ACTION PLANNING AND VALIDATION
5.2 SWITCH MIGRATION TOOL
6 DATA WAREHOUSE
6.1 INTRODUCTION AND OVERVIEW
6.2 PLAN - CAPACITY FORECAST DASHBOARD
6.3 TIER DASHBOARD
6.4 ACCOUNTABILITY AND COST AWARENESS
6.5 UNCHARGED STORAGE
6.6 IOPS VS. CAPACITY REPORTING IN THE DATA WAREHOUSE
6.7 DIGGING INTO THE DETAILS
6.8 VM CAPACITY REPORTING
7 CREATE AD-HOC REPORT
7.1 HOW TO CREATE A CUSTOM SHOWBACK/CHARGEBACK REPORT USING BUSINESS INSIGHT ADVANCED
Steps to create this report
Adding variable costs per VM to your chargeback report
Adding fixed overhead costs to your chargeback report
Formatting and grouping the report by Application and Tenant
Cleaning up the report and running it
7.2 OTHER OPTIONS FOR AD-HOC REPORTS USING QUERY STUDIO
8 SCHEDULING REPORTS FOR DISTRIBUTION
9 ENDING COMMENTS AND FEEDBACK
THIS IS A LAB GUIDE FOR ONCOMMAND INSIGHT 6.3: ASSURE, PERFORM, PLAN, AND DATA WAREHOUSE (DWH). Content Note: This lab is designed to help you understand the use cases and how to navigate the OnCommand Insight server and DWH to demo them. I did not call out each use case specifically, but provided a smooth flow so you learn how to use and demo OnCommand Insight while picking up the use cases along the way. It's also designed so you don't have to show the whole demo; you can pick out particular areas you may want to show your customer. This is not all-inclusive, nor is it designed to show you all features and functions of OnCommand Insight.
However, please note that each portion of this lab builds on the previous steps, so it is assumed that you will learn how to navigate as you go through the lab. In other words, we stop telling you how to get somewhere and simply tell you to go there, because you've stepped through it in previous steps.
This lab is primarily built to run against the current OnCommand Insight demo database (Demo_V63.gz, dated 3/21/2012) and DWH demo database (Updated_dwh_Demo_630_with_performance.gz, dated 06/01/2012). Screenshots shown are accurate to those databases at the time of this publishing. Your results may vary depending on your database, but the functionality is the same regardless of the data. Total lab time is about 1-4 hours, depending on experience. It's a large book, but with lots of pictures. Your feedback is welcome; my address is at the end of this document. Have fun!
1 INTRODUCTION
Using 100% agentless technology, OnCommand Insight automatically discovers all of your resources and provides a complete end-to-end view of your entire service path. With OnCommand Insight you can see exactly which resources are being used and who is using them. You can establish policies based on your best practices, enabling OnCommand Insight to monitor and alert on violations that fall outside those policies.
OnCommand Insight is a "read-only" tool that inventories and organizes your storage infrastructure, enabling you to easily see all the details related to each device and how it relates to the other devices in the SAN and the entire environment. OnCommand Insight does not actively manage any component.
It is also a powerful tool for modeling and validating planned changes to minimize impact and downtime during consolidations and migrations. OnCommand Insight can be used to identify candidates for virtualization and tiering.
OnCommand Insight correlates discovered resources to business applications, enabling you to optimize your resources and better align them with business requirements. You can use this valuable information to help you reclaim orphaned storage and re-tier resources to get the most out of your current investments.
OnCommand Insight provides trending, forecasting, and reporting for capacity management.
OnCommand Insight enables you to apply your business entities for reporting on usage by business unit, application, data center, and tenant. OnCommand Insight provides user accountability and cost awareness, enabling you to generate automated chargeback reporting by business unit and application.
You can download a full customer-facing demo video:
OnCommand 6.2 Full Demo with Table of Contents for easy viewing. (For best viewing, download and unzip the zip file and play the HTML doc from your local drive.)
https://communities.netapp.com/docs/DOC-14031
BEGIN LAB
2 INVENTORY AND ASSURANCE NAVIGATION AND VIEWS
Let's take a look at the discovery and inventory of storage, switches, VM systems, etc.
Log on to OnCommand Insight by selecting the OnCommand Insight icon on the desktop.
Sign on using Admin/admin123.
From the main screen, select the dropdown icon that looks like a calendar at the top left corner of the screen and select Inventory > Hosts. You should see hosts from the demo db.
You can use this menu to select the Inventory, Assurance, Performance, or Planning categories. Or, to make navigation easier, you can activate the Navigation Pane as follows (we recommend you use the Navigation Pane, as it's easier when you start):
o Select Tools > OnCommand Insight Settings from the menu bar.
o Check the "Use Navigation Pane" box, about halfway down the right panel of the settings box, and select OK.
o Notice the full menu bar down the left side of the screen. This will make it easier for you to navigate.
2.1 INVENTORY
Expand the Inventory menu by clicking the Inventory button on the Navigation Pane, as shown below.
STORAGE ARRAYS
o Select Storage Arrays from that menu. This opens the MAIN VIEW of storage.
o Select the NetApp array called Chicago.
o As you see, Insight provides a full inventory, including family, model number, serial number, and other elements. Use the scroll bar to see more columns to the right.
o Note the icons across the bottom of the screen. These are microviews. You can toggle them on and off to see more detail about what you have selected in the main view.
o Cycle through the microviews to get an idea of what is there.
o Select the Internal Volume and Volume microviews (as shown above).
o You can select the Column Management icon to add and subtract columns from the view. Question: In which microview can you add the DeDupe Savings column?
GROUPING
You can group the information in any of the table views, in both main and microviews, by selecting the dropdown in the upper right corner of the table view and selecting a grouping.
o Group the storage devices by model number. This enables you to see all of your arrays grouped by model number.
o Re-group the main view to No Grouping.
Select the Search icon at the top of the Main View. In the search window below the main table, type NTAP and see the array selected. You can step through, looking for the next or previous occurrence of the word NTAP. Close the search window by clicking the X in the red box on the left.
SWITCHES
Using the same navigation, select Switches from the Inventory menu (no pictures here). View the switches and note their names, IP addresses, model numbers, and firmware in the main view. View the ports, zone members, and all other elements using the microviews. You can use the Inventory menu to provide views of hosts, storage, switches, VMs, etc. Let's look closer at paths.
PATHS
OnCommand Insight then correlates the paths. Paths are the logical and physical elements that make up the service path between the application, VM and/or host, and its storage. Paths include Fibre Channel, Cloud, iSCSI, NFS, Block, NAS, etc. Now let's set up the views shown here, with detailed steps below.
Select Paths from the Inventory menu
Group by Host then Storage. As you can see here from the Fibre Channel perspective, when we select a path we're looking at the exact paths between the host and its storage through the redundant fibre connections.
Select Topology micro view icon from the bottom of the Main view
Use Filter to find host kc_ordev7 by mousing over the top of the Host column until you see a funnel, then type kc in the Host column's top cell.
Expand kc_ordev7 and the array ports in the main view.
Select the RED Violation port of kc_ordev7
Select Zoning Masking Port micro views
In this particular case, you can see one green path, which is good, across the Fibre Channel, but the other path is blue. See the legend on the right of the topology. WHAT IS NOT CONFIGURED? (Hint: blue and yellow make green.)
ANALYZE PATH VIOLATION
OK, let's analyze the violation.
There are a couple of ways to analyze the violations. You can select the Violations microview from the bottom icons, or you can simply right-click the red path for kc_ordev7 in the main view and select "Analyze Violation." This opens the root cause analysis screen below.
o Expand the violation and change information in the lower pane. What is the cause of this violation?
The violation tells you that the number of ports in this change went from 2 to 1, which went against the Redundancy Policy and caused the violation. If you look down at the last changes that occurred, you can see that masking was removed, which denied access from that host through its HBA
to the FA port on the array and all the way to the volume. To fix the violation, the administrator needs to reverse those changes, and the violation automatically resolves itself.
Close the Analyze Violation screen
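The redundancy check that fired here can be sketched in a few lines. This is a hypothetical illustration of the policy logic, not OnCommand Insight code; the function name and path representation are assumptions:

```python
def check_redundancy(path_ports, min_ports=2):
    """Flag a host-to-storage path whose independent port count has
    dropped below the policy minimum (the Redundancy Policy)."""
    if len(path_ports) < min_ports:
        return (f"Redundancy violation: {len(path_ports)} port(s), "
                f"policy requires {min_ports}")
    return None

# Before the masking change: two redundant ports, compliant.
assert check_redundancy(["hba0->fa0", "hba1->fa1"]) is None
# After masking was removed: 2 -> 1 ports, the violation fires.
assert check_redundancy(["hba0->fa0"]) is not None
```

Reversing the masking change restores the second path and the check passes again, mirroring how the violation resolves itself in Insight.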
VIRTUAL MACHINES AND DATA STORE
OnCommand Insight gives you a complete inventory of the VM environment. This is discovered through a data source configured to talk with Virtual Center. Details include all internal configurations and elements of the VMs, ESX hosts, and the technologies, as well as all the information needed to correlate the path from a VM to its storage, and details about how that storage is configured down to the disk, including performance (if you have the OnCommand Insight Perform license installed). We'll discuss performance later in this demo.
Select Datastore from the Inventory menu.
Scroll down the main view and select DS-30 in the Main view
Display the Topology and Virtual Machine micro views to show which VMs are using Datastore DS-30
Toggle through the microview icons below to show the details of VM and VMDK capacities, the storage, the backend storage in the arrays, and resources. You can see full end-to-end visibility from the host, through a virtualized storage environment, to the backend storage.
Note: You can also select Hosts and Virtual Machines from the Inventory menu and cycle through the microviews, noting the end-to-end visibility.
Don't forget to use the microview icons.
3 ASSURANCE
31 APPLYING POLICIES TO MONITOR CONFIGURATION AND PERFORMANCE
Now that we've gathered the inventory and pulled all this information into the database, let's start to apply policies so we can monitor and get alerts on violations. We'll talk about setting global policies, changes, and the new Violations Browser. We'll show how to analyze performance, and talk about port balance violations and disk utilization violations. Initially we set global policies within OnCommand Insight so it can monitor the environment and alert us when something falls outside those policies. There are several policies available.
Select Policy from the top menu bar.
Select Global Policies.
What thresholds can you set from here?
Select Violation Severity from the left menu of the Global Policy window.
What severities can you set for each threshold?
Select Violation Notification. What are the possible violation notification options?
3.2 FIBRE CHANNEL POLICY SETTINGS
We set Fibre Channel policies to determine and keep track of path redundancy. We can set options such as no SPF (single point of failure) or redundant. We can set the minimum number of ports on the host and the storage, and the maximum number of switch hops. We can set exceptions for volume types that would not necessarily require redundancy, like BCVs, R1, and R2. We can also set policy exceptions for smaller volumes that wouldn't have redundancy, like EMC gatekeepers.
From the Policy menu on the menu bar, select Fibre Channel Policy.
What type of redundancy can you set from here?
What is the default number of ports?
Volume Type Exceptions: What volume exemptions can you select?
You can also set redundancy policies on physical storage behind a virtualizer (Backend Path).
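The volume-type and size exemptions described above can be modeled as a pre-check applied before the redundancy rule: volumes matching an exempt type or a small-volume size cutoff are skipped. A sketch with assumed field names and policy values, not OnCommand Insight's actual schema:

```python
def requires_redundancy(volume, policy):
    """Return False for volumes the Fibre Channel policy exempts."""
    if volume["type"] in policy["exempt_types"]:        # e.g. BCV, R1, R2
        return False
    if volume["size_mb"] <= policy["small_volume_mb"]:  # e.g. EMC gatekeepers
        return False
    return True

policy = {"exempt_types": {"BCV", "R1", "R2"}, "small_volume_mb": 10}
assert requires_redundancy({"type": "STD", "size_mb": 5000}, policy)
assert not requires_redundancy({"type": "BCV", "size_mb": 5000}, policy)
assert not requires_redundancy({"type": "STD", "size_mb": 6}, policy)  # gatekeeper-sized
```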
3.3 VIOLATIONS BROWSER
Let's take a look at the Violations Browser. The Violations Browser allows you to see the impact of violations on your business elements in one place, and helps you manage the violations against all of those global policies you saw above.
From the Assurance menu on the left, select Violations Browser. (Note: At this point you might want to increase your viewing real estate by closing the Navigation Pane on the left. To close it, go to Tools > OnCommand Insight Settings and uncheck the Navigation Pane box. You can use the same process to turn it back on later.)
Back in the Violations Browser, expand the All Violations explorer to reveal the violation categories. This shows violations like Datastore Latency, Disk Utilization, Volume and Internal Volume IOPS and response times, Port Balance violations, etc. These should look familiar from the global policies we just reviewed a few minutes ago.
As you can see, all the violations pile up here: over 12,000 violations in this case. (Note: Don't let this scare you. Usually most violations are caused by events that create multiple violations per event; you fix one event and a bunch of these go away.) We can see detail on each of these violations by performing the following:
Select the Show Violations Impact icon to view all the violations in context by business entity, application, host, virtual machine, datacenter, etc.
Expand the Impacted Business Entity explorer and drill down to Earth Thermal Tracking
Expand and select Disk Utilization
Sort the Description column descending. Now you can see Element, Description, Severity, and Violation Type.
Select the top violation element called Disk DISK-14 of Storage Virtualizer
In the Impact Details microview, toggle the Host, Virtual Machines, Applications and Business Entities, and Storage icons to view the details of the impact of the violations. Here we see the impact on one application called City Limits, owned by one business entity (Green Corp > Alternate Energy > Geothermal > Earth Thermal Tracking), on one host. However, 10 virtual machines are affected by this violation, from one array.
The chart in the Violation Event microview shows the history and trending of the utilization on this one disk over time. From here we can analyze the performance details, as we'll see later in this demo. REVIEW:
What are the categories that show the impact of violations?
What business entity is impacted by these violations?
What is the utilization of this disk?
Which hosts are being affected by this violation?
Which VMs are being affected by the violation?
We'll do some troubleshooting using these violations and Analyze Performance later.
3.4 PORT BALANCE VIOLATIONS
Let's take a look at port balance violations. These are violations showing imbalance in SAN traffic from hosts, arrays, and switches. These are not performance-related violations.
Using either the Navigation Pane or the dropdown menu, open Assurance.
Select Port Balance Violations
Group by Type then Device
Expand Hosts and sort the Device column ascending
Select device Host nj_exch002. Note that this host has a balance index of 81; that represents the difference in distribution of traffic (the load) between the HBAs on this host. Any index over 50 indicates significantly unbalanced ports on a device.
Select the Switch Port Performance microview. Note that over 88% of the traffic distribution is going across one HBA, with only 11% of the traffic going across the other. A failure on the heavily used HBA could choke that application. This could indicate that port balancing software is not configured correctly on this server, not configured at all, or not installed.
Now collapse the Hosts and expand the Storage devices
Select various storage devices and view the traffic distribution in the switch port performance microview to understand the balance across the storage ports
These port balance violations provide valuable data on how your environment is configured and optimized. They allow you to quickly determine where you need to optimize your configurations based on actual usage. These are balance violations within each device, not necessarily traffic-related performance violations. We'll look at performance a bit later.
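One plausible way to express a balance index like the 81 shown for nj_exch002 is the spread, in percentage points, between the busiest and least-busy port's share of total traffic, where 0 means perfectly balanced. Insight's exact formula is not documented here, so treat this sketch as an illustrative assumption:

```python
def balance_index(port_traffic):
    """Spread between the busiest and least-busy port's share (%)."""
    total = sum(port_traffic)
    if total == 0:
        return 0.0
    shares = [100.0 * t / total for t in port_traffic]
    return max(shares) - min(shares)

# An 88%/11% style split scores far above the 50-point
# "significantly unbalanced" guideline; an even split scores 0.
assert balance_index([880, 110]) > 50
assert balance_index([500, 500]) == 0.0
```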
3.5 DISK UTILIZATION VIOLATIONS
We looked at Disk Utilization Violations in the Violations Browser a few minutes ago because we got alerted to a violation. But if you didn't start from an alert, you can go directly to Disk Utilization Violations here and troubleshoot your issues similar to how we did in the Violations Browser. The difference is that the Violations Browser breaks down violations by how they impact your business entities, applications, data centers, etc., so you can troubleshoot by business priorities, while Disk Utilization Violations lets you easily see your most critical utilization issues and troubleshoot from the disk utilization violation back to the hosts. You can also add columns to show applications and business entities if you want.
Let's take a look at how you can use Disk Utilization Violations to quickly identify and drill down to where your issue is.
Select Disk Utilization Violations from the Assurance Menu
Sort the Utilization column descending to bring your heaviest utilization to the top
Here you see the utilization of each disk that exceeded the Disk Utilization threshold we set earlier in our Global Policy. For each violation you see the disk, the array, the hosts that access this disk, the date and time the violation occurred, and the percentage of utilization, as well as IOPS and throughput.
Select the disk with the highest utilization
Now select the Volume Usage of Disk microview to get details on volume usage and performance.
Sort the Disk IOPS descending and select the top Volume Usage of Disk. Here I see the volume with the highest usage, along with the disk throughput and percentage info by volume and host.
Select the Switch Port Performance microview. I see that my load appears to be balanced (Distribution column) across the storage ports, so this is most likely not a SAN or network configuration issue.
Since this disk did cause a utilization violation, I can identify possible host candidates that could be causing the high utilization on the disk. Or I can see that the disk may have too many volumes carved from it, and I may need to spread that load out across more disks.
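The triage flow above (find every disk over the global threshold, heaviest first) is a filter-and-sort. The 75% threshold and the record fields below are illustrative assumptions, not Insight's defaults:

```python
def utilization_violations(disks, threshold=75.0):
    """Disks exceeding the Disk Utilization threshold, heaviest first."""
    over = [d for d in disks if d["utilization"] > threshold]
    return sorted(over, key=lambda d: d["utilization"], reverse=True)

disks = [
    {"name": "DISK-14", "utilization": 92.0},
    {"name": "DISK-02", "utilization": 40.0},
    {"name": "DISK-07", "utilization": 81.0},
]
# Sorting descending brings the heaviest offenders to the top,
# just as sorting the Utilization column does in the GUI.
assert [d["name"] for d in utilization_violations(disks)] == ["DISK-14", "DISK-07"]
```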
4 PERFORMANCE
OnCommand Insight provides performance information from end to end. This is different from the violations we discussed above in that it provides pure performance information for Volumes, Internal Volumes (FlexVols), Storage Pools (aggregates), and Disks. Performance also covers VMs, switches, ESX, Hyper-V, VMDKs, and Datastores. From here we can troubleshoot congestion, contention, and bottlenecks; identify heavily used storage pools, volumes, disks, and SAN ports; find possible candidates for physical-to-virtual host virtualization; and optimize your storage and tiering.
From the Navigation Pane or the dropdown menu expand the Performance menu
Here we see that OnCommand Insight collects and shows you storage performance, switch performance, datastore performance, VM performance, and even application performance as it relates to storage performance.
4.1 STORAGE PERFORMANCE
From the Performance menu, select Storage Performance.
Sort the Top Volume IOPS column descending (if not already done).
Select the array called Sym-0000500743... in the main view (should be near the top).
Use the scroll bars to see more performance info in all windows.
Use the horizontal slide bar in the main view to see the volume response times and IOPS, as well as the disk utilization and IOPS columns (far right). Notice there is no Internal Volume performance information, because the EMC Symmetrix does not contain any. We'll look at a NetApp array shortly to see Internal Volume (FlexVol) performance.
Now, in the Main View, notice the column called Volume Partial R/W. This indicates there are volumes on that array that are misaligned (we'll see more detail later).
Select the microviews at the bottom to show details of disk performance and volume performance. Which microviews did you open? (Hint: view below.) Notice this provides detailed throughput, IOPS, and response times at the volume and disk level.
Close the Disk Performance microview
Select the column customize icon in the header of the Volume Performance microview.
Use the vertical scroll bar to view all the columns that can be added or removed from this report.
Select the Partial R/W and Storage columns and click OK. This adds columns to the Volume Performance report, showing you each volume on each array that is misaligned. (Note: you can get a complete list of all your misaligned volumes by selecting all the arrays in the main view above.) Additionally, you can group the volumes by storage to make it easier to view all the misaligned volumes across your entire enterprise by array. (See figure below.)
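The enterprise-wide misalignment report described above (every volume with partial reads/writes, grouped by array) boils down to a filter-and-group over the volume table. A hypothetical sketch with assumed field names:

```python
from collections import defaultdict

def misaligned_by_array(volumes):
    """Group volumes reporting partial (misaligned) R/W by their array."""
    report = defaultdict(list)
    for v in volumes:
        if v["partial_rw"] > 0:   # any partial I/O suggests misalignment
            report[v["storage"]].append(v["name"])
    return dict(report)

vols = [
    {"storage": "Sym-0000500743", "name": "vol_12", "partial_rw": 340},
    {"storage": "Sym-0000500743", "name": "vol_15", "partial_rw": 0},
    {"storage": "Chicago", "name": "nfs_vol2", "partial_rw": 88},
]
assert misaligned_by_array(vols) == {"Sym-0000500743": ["vol_12"],
                                     "Chicago": ["nfs_vol2"]}
```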
Now select the Symmetrix-FAST array and toggle the Chart microview on. Here you see OnCommand Insight showing EMC FAST auto-tiering. You can see NetApp Hybrid Aggregates in the same way by selecting the appropriate array. We can also chart this performance over time.
Note: Partial Read/Write indicates volume misalignment. Notice the FAST volume performance.
OnCommand Insight provides complete end-to-end performance views through virtualized storage. Let's take a look.
Select the storage array called Virtualizer from the main view. (Note: This is a V-Series machine, but OCI provides the same visibility through other virtualizers as well.)
Toggle on the Virtual Machine Performance microview
Toggle the Backend Volume Performance and Datastore Performance microview on
Use the slide bars at the bottom of the microviews to see more of the performance columns in each view. (Whoops... there is a red mark in the Latency column. We will analyze this later.)
I know this is a bit busy, but I wanted to demonstrate that you can have deep performance visibility from the VM, through the Datastore, to the frontend virtualizer array, and through the backend volumes. You can also drill performance down to the disks on that backend array, and you can select the Switch Port Performance microview to visualize the performance on the SAN. So you can see very deep performance information from end to end. We will "analyze" these performance views from end to end a bit later.
4.2 SAN PERFORMANCE (SWITCH PORT PERFORMANCE)
Switch performance is the actual performance on the SAN at the switch
Select Switch Port Performance from the Performance menu. OnCommand Insight knows whether the switches are connected to arrays or hosts, so it shows you the performance in the context of the host or array instead of from the switch perspective.
Using the dropdown at the top of the table, group the main view by "Connected Device Type then Name."
Using the dropdown next to it, set the timeframe to "Last Week" and hit the refresh icon to the right.
Sort the Distribution column Descending (arrow pointing down)
Expand Hosts
Expand hosts ny_ora1 and exchange_ny1
If you look at the Value and Distribution columns, you can see how the HBAs are balanced on these hosts. On host ny_ora1 you see three HBAs that are balanced very well. But looking at host exchange_ny1, you see that one of its two HBAs carries over 95% of the traffic load, while the other carries less than 5%. So you can see an imbalance of the load across your HBAs. Perhaps the multipath software is not configured correctly, doesn't work, or is installed but not turned on. However, also note that one HBA is 4Gb and the other is 2Gb; the admins may have purposely configured this host's traffic to compensate for the slower HBA...
Select the Port Performance Distribution and Port Performance microviews to view this analysis over time.
Select exchange_ny1 from the main view above. View the performance and distribution of both HBAs. If you select one or the other, the performance and distribution charts change to show the details of what you've selected. This behavior is the same throughout the OnCommand Insight GUI.
4.3 CANDIDATES FOR HOST VIRTUALIZATION BASED ON ACTUAL PERFORMANCE
This performance view, from the host SAN perspective, shows you which are the busiest servers and which are candidates for virtualization.
Toggle off the 2 performance charts
Collapse the expanded columns using the Collapse All Groups icon on top of the main view.
Expand Hosts again. Notice your busiest servers are at the top of the list.
Use the vertical slide bar to go to the bottom of the host list to see your least busy hosts. As you see here, there are many hosts near the bottom that have hardly any traffic. Note: If you have a virtualization project going on, you can very quickly isolate which physical hosts don't have much traffic to their applications and conduct your due diligence on those applications for possible relocation to the VM environment.
You can also use the same information here to choose which ESX hosts are good candidates to receive those applications, based on how much traffic they are generating on the SAN.
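Shortlisting physical-to-virtual candidates by actual SAN traffic, as described above, reduces to a filter-and-sort. The traffic cutoff and host records below are illustrative assumptions:

```python
def virtualization_candidates(hosts, max_mbps=5.0, count=10):
    """Least-busy physical hosts: low SAN traffic makes them worth
    due diligence for relocation to the VM environment."""
    quiet = [h for h in hosts if h["traffic_mbps"] < max_mbps]
    quiet.sort(key=lambda h: h["traffic_mbps"])
    return [h["name"] for h in quiet[:count]]

hosts = [
    {"name": "ny_ora1", "traffic_mbps": 310.0},   # busy: keep physical
    {"name": "kc_dev3", "traffic_mbps": 0.4},
    {"name": "nj_test9", "traffic_mbps": 1.2},
]
assert virtualization_candidates(hosts) == ["kc_dev3", "nj_test9"]
```

The same sort, run in the other direction, surfaces the busiest ESX hosts to avoid when placing the relocated applications.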
4.4 STORAGE ARRAY PERFORMANCE BASED ON SAN TRAFFIC
We use the same logic and methods to optimize the traffic across the storage ports of the arrays.
Collapse the Hosts section and expand the Storage section
You can see the busiest arrays at the top
Expand storage array XP 1024 to see the traffic flow through the storage ports. In this case, over 80% of the traffic is going across two of the six ports on the storage array. Not very well balanced. You can rebalance this traffic, or, using this information, you can select a lesser-used storage port to provision your next Tier 1 application to. This helps you intelligently provision and optimize your environment using real traffic analysis.
4.5 STORAGE TIERING AND ANALYSIS
Just as you did with the hosts, you can see the storage arrays that are not so busy.
Scroll to the bottom of the Storage Array list
There are several expensive Tier 1 Symmetrix and other arrays at the bottom of this list that have very little traffic accessing them. These arrays may have lots of data on them, but nobody's using it. Armed with this information, you could take a look at the application data on these expensive Tier 1 arrays and move the applications to less expensive Tier 2 or Tier 3 arrays, or archive the data. Then you can decommission or repurpose these expensive arrays. (Lots of ROI potential here.)
4.6 SWITCH ISL TRAFFIC VISIBILITY AND OPTIMIZATION
OnCommand Insight shows you only the ISLs (Inter-Switch Links) under the Switches category.
Collapse the Storage category in the main view and expand the Switch category
Expand Switch 78 and hcis300
As we saw with hosts and arrays, we can see exactly how well balanced the traffic is across the ISLs. Switch hcis300 is well balanced, but on Switch 78 we see that 90% of the traffic is going across one switch link and only 9% across the other. If this is a trunk, it is severely out of balance.
We also see which are the busiest and least busy switches. This allows us to balance out (optimize) our environment, as well as weed out the least busy switches.
4.7 VIRTUAL MACHINE AND DATA STORE PERFORMANCE: TROUBLESHOOTING END-TO-END PERFORMANCE ISSUES USING "ANALYZE PERFORMANCE"
Letrsquos put all this performance information to good use
USE CASE: I may have gotten a call from a user complaining that the application on VM-70 is running slow, or I may have received an alert from a threshold being breached. Let's troubleshoot the problem.
Select Virtual Machine Performance
Select Custom from the "Timeframe" dropdown menu next to the grouping menu. Enter the dates January 1, 2012 through now.
Then hit the green refresh button next to the dropdown.
Sort the VM Disk Top Latency column descending to get the longest latency at the top
Here we see that, in fact, VM-70 does not appear to have any performance issues, but we do see very high CPU, Memory, and Datastore Latency on VM-60 and VM-61.
Look at column 2. The common factor between VM-70 (the user complaint) and VM-60 is DS-30.
Open the Datastore Performance microview to validate the high latency time
Right click on VM-60 and select Analyze Performance
This opens an analysis of everything associated with VM-60 and DS-30.
See the tabs across the top of the window Each of these tabs provides in-depth visibility into performance within each category
Selecting the Disk tab, I see that although I have a few high "top" utilizations, overall utilization and IOPS are relatively low, so I can rule out a hot disk issue.
Select the Volumes tab and the Internal Volumes tab. I see there are some relatively high top response times but still very low IOPS, which tells me other factors are affecting response time and the slowness of the application on VM-70.
Select Backend Volumes. We see the storage is virtualized, and we can see the performance on the backend volumes here. I see some possibly higher IOPS, but still no glaring issues in performance.
To make sure I don't have a SAN problem, I select the Switch Performance tab. It shows an imbalance between the 2Gb HBAs on ESX1 (where VM-60 and VM-70 are) and a potential optimization or outage issue, but no gridlock.
Select the Hosts tab. This tab shows me that host ESX1 is the same host that holds VM-60 and VM-70. VM-60 appears to be causing very high CPU and memory usage, which is causing contention with time sharing during disk access, thus creating high disk latency. But the disk IOPS are still very low.
We can deduce that VM-60 is probably not sized right for the application that is driving it so hard. This is probably what's causing the disk latency issue, so the chances of a disk issue are slim.
4.8 VM PERFORMANCE
VM Performance helps you troubleshoot the same scenarios. Here you can understand what's going on in the whole environment.
Select Virtual Machine Performance
Sort the Top Disk Latency column in descending order so the largest latency rises to the top. In this case VM-61 is chewing up a lot of memory and a lot of CPU time but using low disk IOPS. The VM appears to be causing the latency issues.
Select VM-61. You can open a microview and see the VMDK performance as well.
Add a chart microview.
You can also break it out by volume performance and datastore performance, giving you a more holistic picture of the environment and helping you troubleshoot to resolution.
The takeaway is that you can troubleshoot performance issues from many different angles and go in many different directions to quickly narrow down the problem.
4.9 APPLICATION AND HOST PERFORMANCE
You can add your applications and hosts to any of these performance views to help you understand how performance is affecting your applications. That is important to the business customer. You can drill down and understand where the performance issue is, with visibility from the application all the way to the disks.
Scroll down to ESX1
Use the horizontal scroll bars in the main view and microviews to see the performance info.
OnCommand Insight shows you performance from the host perspective all the way back to the storage, but remember that it does not have agents on the hosts, so it cannot show you the details of performance on the host itself.
Review questions:
What is the value of Analyze Performance?
What are the areas in which we can view performance metrics under Analyze Performance?
5 PLANNING TOOLS
51 TASK AND ACTION PLANNING AND VALIDATION
OnCommand Insight has two planning tools to help you plan, validate, and monitor changes in your environment. One is a change management tool, and the other is a migration tool for switches only.
The change management tool (or "what-if" tool) helps you create tasks, and actions within those tasks, using a wizard. It helps you logically configure the changes you need to make, test and validate those changes before you make them, and monitor the progress of the changes as you make them. This significantly reduces your risk when making changes because you can pretest them before you make any actual changes in your environment.
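The validation idea behind the tool can be pictured with a small sketch. This is illustrative Python only, not OnCommand Insight code, and the action keys and status labels are hypothetical: each planned action is compared against the current, read-only view of the configuration and marked completed, not completed, or incorrect, mirroring the green checkmark, blank box, and red X markers the tool uses.

```python
# Hypothetical sketch of plan validation: compare each planned action's
# expected result with the current (read-only) configuration snapshot.
def validate_task(actions, current_config):
    """actions: list of (config_key, expected_value) pairs.
    current_config: dict of the environment as discovered right now.
    Returns one status string per action."""
    statuses = []
    for key, expected in actions:
        actual = current_config.get(key)
        if actual == expected:
            statuses.append("completed")      # green checkmark
        elif actual is None:
            statuses.append("not completed")  # blank box
        else:
            statuses.append("incorrect")      # red X
    return statuses

# Example plan: one action already done, one not yet started
plan = [("esx1.hba0", "replaced"), ("zone.clearcase1", "updated")]
print(validate_task(plan, {"esx1.hba0": "replaced"}))
# ['completed', 'not completed']
```

Re-running the validation after each real-world step updates the statuses, which is exactly the iterate-and-revalidate workflow described in this section.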
NOTE: Remember, OnCommand Insight is a READ-ONLY tool, so it does not perform any active tasks. Use it in the planning, validation, and execution monitoring of your change management.
Select Planning Menu
Select Plans to access the tool
Select the task ID oadmin 01082007 – Replace HBA Clearcase1.
Notice the Actions list for the task. These actions are generated by you to help you logically and accurately list out the tasks.
To add more actions, simply right-click in the action area and select "Add Action".
In the new action window, scroll down and select the action you want to perform. You can add a description and other parameters, then select OK.
Then you can pre-validate the actions to ensure you know the results of each action BEFORE you actually perform the task. To do this, right-click the task and select Validate Task.
As you see below, OnCommand Insight validates each action against the current configuration in your environment to show what has been completed correctly (GREEN CHECKMARK), what is not completed (BLANK BOX), and what is not completed correctly (RED X).
When you build the action list, OnCommand Insight automatically compares your planned changes to your existing environment and anticipates any future violations that could occur if you made these changes without correcting the planned actions, OR violations that already exist in your environment.
Once you complete your list of action items, you can right-click and validate the actions as many times as you want until they are completed. OnCommand Insight validates every one of these actions. It shows you whether each action is complete, done incorrectly, or not completed at all. It gives you a preview of potential issues before you make the changes, thus lowering your risk.
5.2 SWITCH MIGRATION TOOL
The migration tool provides you with instantaneous visibility into all of the environment and business entities that will be affected by a migration to new or updated switches. Say you want to just update the firmware on a switch. What if it goes down in the middle of the upgrade? What does that affect in your environment? Knowing this ahead of time can reduce your risk by giving you the complete picture of who and what will be affected by the interruption.
The migration tool allows you to tell OnCommand Insight which switches you want to upgrade or replace. Because OnCommand Insight knows all the hosts, storage arrays, volumes, business units, and applications that are affected by this change, it can provide you with the current violations as well as the future violations that will occur when the switches are pulled out. This enables you to validate the total impact of the changes you want to make BEFORE you make them, so you can reduce your risk by fixing issues before they occur.
NOTE: Remember, OnCommand Insight is a READ-ONLY tool, so it does not perform any migration tasks. Use it in the planning and execution monitoring of your migration.
Under the Planning menu, select Migrations. This shows you the migration tasks already created and the existing impact of the proposed changes on your business entities.
To add a new task, right-click in the task area and select Add Task.
Complete the task details above and click Next to select the switch(es) to migrate.
Select the switches to be updated or replaced and click Finish.
Select the new task in the main screen and use the microviews to see the affected paths, impact, and quality assurance views.
Using this information, you can speed up your switch migrations, because it cuts the due-diligence time, and lower your risk, because you know the impacts before you take any action.
6 DATA WAREHOUSE
6.1 INTRODUCTION AND OVERVIEW
Let's introduce you to the data warehouse. We'll talk about the datamarts and navigation, then we'll go into the reports, and we'll finish by showing you how to create ad hoc reports using Query Studio.
The data warehouse is made up of several datamarts. Datamarts are sets of data that relate to each other.
Open a browser and go to http://localhost:8080/reporting.
Log on using admin/admin123.
If you receive this page, uncheck the "Show this page…" option and select My Home.
Data warehouse (DWH) home page: Public Folders.
The data warehouse has several built-in datamarts throughout. Above you see the three primary datamarts, called the Chargeback Datamart, Inventory Datamart, and Storage Efficiency Datamart. Additionally, we have two folders that contain other datamarts for capacity and performance.
Select the Capacity 6.3 folder.
As you can see, there are other capacity-related datamarts, including the Internal Volume, Volume, Storage and Storage Pool, and VM Capacity datamarts. Datamarts provide you with easy-to-use data elements related to those specific categories, making it easier for you to use the existing reports and, more importantly, helping you create your own custom reports using the drag-and-drop technique we'll show later in this lab.
Select the Storage Capacity Datamart
There are 4 folders located within EVERY datamart. Most built-in reports are in the Reports folder. Any custom reports you create MUST be saved in the Customer Report or Customer Dashboard folders in order to preserve them during upgrades.
Select Dashboards (notice the BREADCRUMBS to help you navigate).
Which dashboards are located in the folder?
6.2 PLAN – CAPACITY FORECAST DASHBOARD
The data warehouse has over 200 built-in dashboards and reports. Let's take a look at a few.
The Capacity Forecast dashboard provides a history of how storage has been used, as well as trends and forecasts into the future. It shows them by data center and by tier.
Select the Capacity Dashboard. This may take a bit of time to paint, so be patient.
The Capacity Forecast dashboard provides you with trending and forecasting of your capacity across your entire environment. NOTE: Your data in the picture may vary depending on the demo DB you are using and the date (because it's a trending chart).
While we are at it, let's also stage the Tiering dashboard in a new window by holding the Shift key and selecting the Tiering Dashboard, so we can discuss it as well in a few minutes.
When it first opens, you see in the upper left the Capacity Consumption Forecast report by datacenter and tier. The initial view shows how much storage is left in each datacenter, by tier, before it reaches 80% of capacity (the threshold is user-adjustable). The graph on the right depicts the usage trending and forecasting over time. The "Reset Selection" button resets the graphic to show storage trending across the entire enterprise.
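Conceptually, the forecast resembles a simple linear trend projected forward to the capacity threshold. A minimal sketch of that idea (illustrative Python, not product code; the sample numbers are made up):

```python
# Illustrative sketch: given monthly used-capacity samples for a tier,
# fit a linear trend and project when usage crosses threshold * capacity.
def months_until_threshold(samples_tb, capacity_tb, threshold=0.80):
    """Returns months from the last sample until usage reaches the
    threshold, 0 if already reached, or None if usage is flat/shrinking."""
    n = len(samples_tb)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(samples_tb) / n
    # least-squares slope of the usage trend (TB per month)
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, samples_tb)) \
            / sum((x - mean_x) ** 2 for x in xs)
    target = threshold * capacity_tb
    last = samples_tb[-1]
    if last >= target:
        return 0
    if slope <= 0:
        return None  # no growth: the trend never reaches the threshold
    return (target - last) / slope

# Example: a 100 TB tier growing ~5 TB/month, currently at 60 TB used
print(months_until_threshold([40, 45, 50, 55, 60], 100))  # -> 4.0
```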
Select the Tokyo/Gold-Fast block on the matrix. Notice that the graph at the right changes to reflect the storage consumption trending and forecasting for that tier at that datacenter.
Reset the Storage Capacity Trend chart by clicking Reset Selection next to the matrix chart. The chart on the right will then show the trending and forecasting for the entire enterprise.
Scroll down the dashboard to view the list of reports on the right side. Each of the dashboards has a list of related reports on the lower right-hand side. You can select from any number of different reports to get the detailed information you need.
The dashboard also contains dial graphics showing you storage consumption and capacity in your enterprise and in each datacenter.
Continuing down the left side of the dashboard, these charts show you business-level storage consumption by business entity. Here we can drill down to see usage by Tenant, Line of Business, Business Unit, and Project.
Right-click in this graphic and you can drill down to view storage usage by line of business, drill again to business unit, and then by project.
As you can see, you get really detailed information on consumption by your business entities, from Tenant, LOB, Business Unit, and Project to Application, very quickly.
6.3 TIER DASHBOARD
Let's take a look at the Tier dashboard that we opened a few minutes ago by selecting it from the tabs at the bottom of your Windows screen.
Note: Your data may vary depending on the database used for this demo.
This dashboard gives us a different perspective on how storage is growing and how it is being used. As you see, it looks like the Gold tier has remained relatively stable over the past few months, while Gold-Fast storage, which is more expensive, has grown considerably over the past couple of months. This tells you how your tiering initiatives are progressing. Bronze, which has hardly grown at all, could be an indication that we're spending too much money on storage. You might want to review your storage usage with OnCommand Insight to see how the storage is being consumed, and by whom.
Scroll down and let's look a little closer. OnCommand Insight shows storage usage by business unit, application, and tier. This enables you to understand how storage is being used. You can also view it by data center, tier, and business entity.
As we did in the last report, you can right-click and drill down to look at consumption by tenant, line of business, business unit, project, and application. You can understand how your data is being consumed at multiple levels and from multiple aspects.
Select the "Return" icon at the top right of the Tier Dashboard to return to the folder.
There is a new Storage Tier report located in the Storage and Storage Pool datamart. Let's take a quick look at it.
Use the breadcrumbs to navigate back to the Capacity 6.3 folder.
Then select the Storage and Storage Pool Capacity datamart and its Reports folder.
Next, select the Storage Capacity By Tier report to view the report below. This report shows your capacity by tier and how it trends over time. It also provides detail and summary sections at the bottom showing each array's tiers, how much capacity is used, and the percentages (lots of information on a single report).
6.4 ACCOUNTABILITY AND COST AWARENESS
The standard data warehouse chargeback reports are more about accountability than full chargeback. We'll show you this now. We'll also show you how to create your own powerful "custom" chargeback/showback reports using Business Insight Advanced later in this lab.
Select Public Folders in the breadcrumbs at the top left of the data warehouse window.
Select the Chargeback Datamart.
In the Chargeback datamart, select the Reports folder to access various reports that show capacity and accountability.
Select Capacity Accountability by Business Entity and Service Level Detail. Here you have the option to customize this report to your needs by selecting service levels, resource types, applications, and host and storage names. You also have the option of selecting the business entity by using the dropdown to select any or all of the business entities and projects.
Select All in each category to give you a good representation of the in-depth reporting.
Then click Finish.
The report provides a very detailed view of capacity utilization by business entity and application, along with the host it's running on, the storage array, the volume, and the actual provisioned and used storage. The report is grouped by business unit as well as application; this provides you with a good representation of who's using what storage.
Note the scroll bar for scrolling on page 1; you can also use the Page Up/Page Down links at the bottom to go to page 2, and so on.
Select the Return icon in the upper right to return to the folder of reports.
6.5 UNCHARGED STORAGE
You can also generate reports that help you understand what storage is NOT being accounted for.
Select "Capacity Accountability by Uncharged Capacity per Internal Volume". This provides you with a complete listing, by array and volume, of how much storage is not being charged or accounted for.
You get FULL accountability of which storage is being accounted for and which storage is NOT, across the entire enterprise, regardless of storage vendor.
6.6 IOPS VS CAPACITY REPORTING IN THE DATA WAREHOUSE
Let's look at performance versus capacity and orphaned storage by last access. This adds another dimension to how your storage is being used.
Open the Performance Datamart (hint: use the breadcrumbs to select Public Folders, then select the Performance Datamart).
Select the Internal Volume Daily Performance folder. This provides a really good pictorial view of how your storage is being used.
Select Reports, then select Allocated Used Internal Volume Count by IOPS Ranges.
This provides a capacity-versus-IOPS report, which is very interesting.
Select the Last Year time period.
Select all storage models and tiers, and click Finish.
Selecting all arrays and all tiers gives you a full view of how your storage is being used (or not being used…).
Looking at the results, remember this is storage accessed over the past year. The resulting report shows you all the storage that has (or has not) been accessed over the past year.
As you see from the first bar, there are over 7,300 volumes that have not been accessed in the past year. If we look at it in terms of size, over 3.4 PB has had zero access in the past year. Note: this is actual customer data, but the names have been sanitized.
You can see how impactful this is. There is over 3.4 PB of storage that has had zero use for a year. This information enables you to start making business decisions about the storage and to better understand how it's being used, so you can reclaim and repurpose some of it. (Talk about ROI!)
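The bucketing behind a count-by-IOPS-ranges report can be sketched as follows (illustrative Python, not product code; the range edges and volume tuples are hypothetical). The point is that the zero-IOPS bucket surfaces reclaim candidates along with the capacity they hold.

```python
# Hypothetical sketch: bin each volume by its max IOPS over the period,
# tracking volume count and total allocated GB per bucket.
def count_by_iops_range(volumes, edges=(1, 10, 100, 1000)):
    """volumes: iterable of (name, allocated_gb, max_iops).
    Returns {range_label: (volume_count, total_allocated_gb)}."""
    buckets = {}
    for _name, gb, iops in volumes:
        if iops < edges[0]:
            label = "0 (never accessed)"  # reclaim candidates
        else:
            label = f">= {max(lo for lo in edges if iops >= lo)}"
        count, total = buckets.get(label, (0, 0.0))
        buckets[label] = (count + 1, total + gb)
    return buckets

vols = [("v1", 500, 0), ("v2", 200, 0), ("v3", 100, 250), ("v4", 50, 5)]
print(count_by_iops_range(vols))
# {'0 (never accessed)': (2, 700.0), '>= 100': (1, 100.0), '>= 1': (1, 50.0)}
```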
6.7 DIGGING INTO THE DETAILS
These charts are really nice, but you need the details to work effectively on identification and recovery. OK, let's go look at the underlying details.
Go back to the Volume Daily Performance 6.3 folder and drill down to Reports (hint: it's in the Performance Datamart).
Select the Array Performance report. This gives you a complete breakdown of the performance for all storage, from the arrays all the way down to the volumes.
Select one year and set the IOPS parameter you want to filter on (I usually start at the default).
This report starts with the Orphan Summary.
Page down to view the storage array summary.
As you see, this is pretty high level. It shows the total amount of raw and allocated capacity in each storage device versus the total IOPS and the max IOPS actually used over the past year. This tells a very compelling story, but it's still high level.
Page down a few pages to reach the bottom of this section. You will see a glossary of terms explaining the column headings.
Now continue to page down to the host tables. These show you the hostname, the raw and allocated capacity by host, and the IOPS accessed over the past year. This is more detail than the storage tables above.
Page down past the host tables and you get the orphaned-volumes perspective. Here is a great deal of detail that you can use. These are all the volumes that have not been accessed in a full year. It shows you the array name, volume capacities, and hostname, as well as the applications and tiers involved.
Page down to the "Volume by IOPS" tables (they may be several pages down). These show you the storage array, volume capacity, host, application, tier, and max and total IOPS. So we can say it's a pretty well-rounded report that shows you actual usage (or lack thereof) so you can go reclaim the storage that is not used.
6.8 VM CAPACITY REPORTING
There are several different reports in the VM Capacity datamart.
Navigate to the VM Capacity 6.3 datamart.
As you see, we have several reports built in here already.
Select VM Capacity 6.3 and then navigate into the Reports folder.
Select VM Capacity Summary.
Select All so we see the VM capacity across the entire enterprise (spanning multiple vCenters).
The results show all the VMs, their capacity, the datastore, the actual capacity, the VM names, the provisioned storage, and the commit ratio of each VM across your entire environment. NOTE: I paged down to the bottom so you can see the total storage and commitment across your whole enterprise, plus a glossary of terms.
Select the "return" button in the upper right corner of the report (it looks like a left-turn arrow).
Next, select the Inactive VMs report to show VMs that have not been accessed in a defined period of time (default 60 days).
Set this time threshold and click Finish.
This is an excellent report showing you which VMs are powered off and how long they have been powered off, as well as how much capacity each one of them is holding that nobody else can use. It gives you all the details, including the datacenter, VM OS, ESX host, cluster, and VMDK, and how long it's been powered off. Armed with this information, you can go recover these VMs and reclaim the storage.
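The filter this report applies can be sketched in a few lines (illustrative Python; the field names and figures are hypothetical): keep the VMs powered off longer than the threshold and total the capacity they still hold.

```python
# Hypothetical sketch of the Inactive VMs filter (default threshold 60 days).
def inactive_vms(vms, threshold_days=60):
    """vms: list of (name, powered_off_days, capacity_gb).
    Returns (matching_vms, total_reclaimable_gb)."""
    hits = [v for v in vms if v[1] > threshold_days]
    return hits, sum(v[2] for v in hits)

vms = [("vm-a", 120, 80), ("vm-b", 10, 40), ("vm-c", 75, 60)]
hits, reclaimable_gb = inactive_vms(vms)
print([name for name, _, _ in hits], reclaimable_gb)
# ['vm-a', 'vm-c'] 140
```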
7 CREATE AD-HOC REPORT
Let's show you how easy it is to create custom reports in the data warehouse.
7.1 HOW TO CREATE A CUSTOM SHOWBACK/CHARGEBACK REPORT USING BUSINESS INSIGHT ADVANCED
Below is a great example of the custom chargeback or showback report that you will create. It shows usage by business entity and application, including the variable cost of each VM based on its configuration, fixed overhead, and storage usage.
STEPS TO CREATE THIS REPORT
Watch a video on how to create this report. Note: You need a user name and password for this community. To obtain them, click the Become a Member link.
The OnCommand Insight Reporting Portal is accessed through http://<reporting-server>:8080/reporting.
Enter your user name and password credentials.
From the Welcome page, select My Home.
From the Launch menu (at the top right corner of the OnCommand Insight Reporting portal), select Business Insight Advanced.
From the list of all packages that appears, click the Capacity <version> folder and then click VM Capacity <version>.
Create a new report by selecting New from the dropdown in the upper left corner, or Create New if you are on the Business Insight Advanced landing page.
From the pre-defined report layouts in the New pop-up, choose List and click OK.
In the lower right pane, select the Source tab and expand Advanced Data Mart in the VM Capacity package.
From the Advanced Data Mart, expand Business Entity Hierarchy and Business Entity, and drag Tenant onto the report work area.
Collapse Advanced Data Mart and expand Simple Data Mart.
From Simple Data Mart, drag Application onto the report work area to the right of the Tenant column. (TIP: Make sure you place it on the blinking gray BAR to the right of the previous column, or it will give you an error.)
Now we are going to drag multiple columns to the palette to save time building the report.
We will be reporting on the total number of processors (cores) and the memory configured for each VM, so let's grab the following elements from the VM Dimension under the Advanced Data Mart.
From Advanced Data Mart, expand VM Dimension.
Select the next columns IN THE FOLLOWING ORDER.
From Advanced Data Mart > VM Dimension, hold the Control key and select the following columns (in order):
o VM Name
o Processors
o Memory
Click and drag VM Name onto the report work area to the right of the Application column. NOTE: All the columns should follow in the order you selected them, similar to the screenshot below (your data will differ, but the columns will be the same).
Now let's bring capacity information onto the report.
From Simple Data Mart, hold the Control key and select the following columns (in order):
o Tier
o Tier Cost
o Provisioned Capacity (GB)
Click and drag the Tier column onto the report work area to the right of the Application column. NOTE: All the columns should follow in the order you selected them, similar to the screenshot below (your data will differ, but the columns will be the same).
To create the storage cost (tier cost per GB multiplied by provisioned capacity), hold the Control key and select Tier Cost and Provisioned Capacity (GB).
Then right-click the Provisioned Capacity column, select Calculate, and select the multiplication calculation.
Business Insight Advanced creates a new column for you, completes the calculation, and puts it in the report.
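The derived column is just the product of the two selected columns. A minimal sketch of the same arithmetic, including the USD formatting applied in the next steps (the rate and capacity figures are hypothetical):

```python
# Illustrative sketch: Storage Cost = Tier Cost (per GB) * Provisioned
# Capacity (GB), formatted as USD the way the report column will be.
def storage_cost(tier_cost_per_gb: float, provisioned_gb: float) -> str:
    return f"${tier_cost_per_gb * provisioned_gb:,.2f}"

print(storage_cost(0.25, 4000))  # -> $1,000.00
```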
Next, let's format and re-title the column.
Right-click on the new column header and select Show Properties.
In the lower right corner, scroll down to the bottom of the Properties box and select the ellipsis on the Data Item Name box. Change the name to Storage Cost and click OK.
Note the column heading is now Storage Cost.
Now select one of the numeric values in that column and select the Data Format ellipsis from the Properties box in the lower right corner.
From the Data Format dialog box, select Currency from the Format Type dropdown.
As you see from the Properties dialog box, there are lots of options you can set to format the currency numbers in this column. The default is USD, so let's just click OK to accept the default. You will see the column reformat to USD.
Here is our current report. Let's filter out storage that is NOT being charged.
Select any BLANK cell in the Tier Cost column and click the filter icon in the top toolbar.
Select Exclude Null.
Notice that all the rows that had NO cost associated with their tiers are removed, leaving you with only the storage that has charges associated with it. (TIP: In another report you can actually reverse the logic and show only the storage that is NOT being charged.)
You can also format the Tier Cost column as USD currency if you want.
OK, that was easy, but not complete. Let's add other cost factors into your chargeback report: the cost of VM service levels by configuration, and the fixed overhead costs used by each application.
ADDING VARIABLE COSTS PER VM TO YOUR CHARGEBACK REPORT
Let's say the customer wants to charge per VM based on the number of CPUs and the amount of memory it's configured with. To do that, we first need to create a VM service level composed of the number of CPUs and the memory configured for each VM, and then allocate a cost per service level.
To create a VM service level, we are going to drop in a small conditional expression that builds the service level per VM. This is an easy example of the flexibility of Business Insight Advanced in creating reports. (DON'T panic: you can skip the conditional expression and just put a fixed cost on each VM if you want; see the overhead example later on. But humor me here in this lab.)
Select the Tier column to mark where we want to insert the new columns.
Select the Toolbox tab in the lower right corner and double-click the Query Calculation icon.
In the Create Calculation dialog box, name the column VM Service Level, select Other Expression, and click OK.
In the Data Item Expression dialog box, copy and paste the following VM Service Level conditional expression into the Expression Definition box and select OK. (Note: if you are remoted into the OnCommand Insight server, you may have to create a text document on the OnCommand server desktop to paste this into before pasting it into the Expression box.)
Below is an example of the conditional expression that gives you the if-else condition for VM Service Level:
IF ([Processors] = 2 AND [Memory] < 2049)
THEN ('Bronze')
ELSE (IF ([Processors] = 2 AND [Memory] < 4097)
THEN ('Bronze_Platinum')
ELSE IF ([Processors] = 4 AND [Memory] < 8193)
THEN ('Silver')
ELSE IF ([Processors] = 4 AND [Memory] > 8193)
THEN ('Silver_Platinum')
ELSE IF ([Processors] = 6 AND [Memory] > 8191)
THEN ('Gold')
ELSE IF ([Processors] = 8 AND [Memory] > 16383)
THEN ('Gold_Platinum')
ELSE ('tbd'))
Business Insight Advanced will validate the conditional expression (nice to know if you got it right), create the column called VM Service Level, and populate it based on the query. (If you get an error, your conditional expression probably has a syntax or other error.)
You will see a new column added, called VM Service Level, with the various service levels for each VM based on the number of CPUs and the amount of memory each has. (At this point there may be duplicates in the list, but not to worry; we are not finished formatting or grouping the report.)
Next, let's add a column that calculates the cost per VM based on the service levels we just established.
Select the Toolbox tab in the lower right corner and double-click the Query Calculation icon.
In the Create Calculation dialog box, name the column Cost Per VM, select Other Expression, and click OK.
In the Data Item Expression dialog box, cut and paste the conditional expression for Cost of VM (below) into the Expression Definition box and select OK.
Example of the conditional expression for Cost per VM:
IF ([VM Service Level] = 'Bronze') THEN (10) ELSE (IF ([VM Service Level] = 'Bronze_Platinum') THEN (15) ELSE IF ([VM Service Level] = 'Silver') THEN (20) ELSE IF ([VM Service Level] = 'Silver_Platinum') THEN (25) ELSE IF ([VM Service Level] = 'Gold') THEN (40) ELSE IF ([VM Service Level] = 'Gold_Platinum') THEN (55) ELSE (30))
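The same lookup, restated in Python for illustration (the rates come from the lab's expression; unmatched service levels fall back to 30):

```python
# Illustrative Python equivalent of the Cost Per VM expression.
VM_COST = {"Bronze": 10, "Bronze_Platinum": 15, "Silver": 20,
           "Silver_Platinum": 25, "Gold": 40, "Gold_Platinum": 55}

def cost_per_vm(service_level: str) -> int:
    # the report's ELSE (30) branch becomes the dict lookup default
    return VM_COST.get(service_level, 30)

print(cost_per_vm("Gold"))  # 40
print(cost_per_vm("tbd"))   # 30
```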
You will see a new column added, called Cost Per VM, with a variable cost for each VM based on its service level.
Next, format the data in the Cost Per VM column as USD currency, as you did above.
ADDING FIXED OVERHEAD COSTS TO YOUR CHARGEBACK REPORT
Let's say the customer has determined that the total cost of overhead (including items like heat/AC, floor space, power, rent, operations personnel, helpdesk, and so on) is $24 per VM. Let's create a column called Cost of Overhead and apply this fixed cost. (Note: you can do this for any fixed cost as well, rather than use SQL.)
Select the Toolbox tab in the lower right corner and double-click the Query Calculation icon.
In the Create Calculation dialog box, name the column Cost of Overhead, select Other Expression, and click OK.
In the Data Item Expression dialog box, enter a cost of 24 in the Expression Definition box and select OK.
You will see a new column added, called Cost of Overhead, with 24 for each VM. (Note: at this point there may be duplicates in the list, but not to worry; we are not finished formatting or grouping the report.)
Next, format the data in the Cost of Overhead column as USD currency, as you did above. Then drag the column header and drop it to the right of the Storage Cost column, as shown below.
Subtotaling, naming, and saving the report
Now that we have the cost per VM, the overhead, and the cost of storage usage by tenant, application, and VM, let's sum the total costs and finish formatting the report by tenant and application.
Hold the Control key down and select a numeric cell in each of the Cost Per VM, Storage Cost, and Cost of Overhead columns. Right-click one of the numeric cells, select Calculate, and choose the add function for the three columns.
This creates a new column called "Cost Per VM + Storage Cost + Cost of Overhead" and calculates it for each row.
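The summed column is simple row arithmetic. A sketch with hypothetical figures (the $24 overhead comes from the earlier step; the other numbers are made up):

```python
# Illustrative sketch of the "Total Cost of Services" column for one row:
# Cost Per VM + Storage Cost + Cost of Overhead.
def total_cost_of_services(cost_per_vm, storage_cost, overhead=24):
    return cost_per_vm + storage_cost + overhead

# e.g. a Gold VM (40) with $125.00 of charged storage
print(total_cost_of_services(40, 125.0))  # 189.0
```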
Now format the column as USD currency and retitle it "Total Cost of Services".
Name the report "Total Storage, VM and Overhead Cost by Tenant and Application Chargeback (Showback)" by double-clicking the title area.
Now save it to the Customer Report folder using the same name
FORMATTING AND GROUPING THE REPORT BY APPLICATION AND TENANT
We are not done yet. Now we need to format the report by grouping, subtotaling, and totaling by tenant and application.
Hold the Control key down and select the Cost Per VM, Provisioned Capacity, Storage Cost, Cost of Overhead, and Total Cost of Services columns.
Select the Total icon from the Summary dropdown.
If you page down to the bottom of the report, you will see total columns. We'll clean up the summary rows in a minute.
Let's group the report by Tenant and Application:
Hold the Control key down and select the Tenant and Application columns.
Select the Grouping icon from the top toolbar.
CLEANING UP THE REPORT AND RUNNING IT
To clean up the report, right-click and delete the summary ROWS (not columns).
Then go to the bottom of the report, hold the Control key and select both summary rows, then right-click and delete them. (Leave the TOTAL rows.)
Save the report
Now let's run the report to see how it looks.
Select the Run icon from the toolbar and run the report as HTML. (Note the other formats you can run it in if you want.)
The report will appear like this in its final format. I've paged down in the report below to show you the subtotals; you can page to the bottom and see the totals by company and the total of all resources charged.
These reports are extremely flexible and can do what you need. Notice the drill-down link in the Tenant column (pictured above in the red circle). If you click the LINK, you will drill down from Tenant to Line of Business, then to Business Unit, and so on. If you right-click the link, you can drill up as well.
You can now schedule this report to run and be distributed in various formats, like any other OnCommand Insight Data Warehouse report.
Remember, now that you have created this report, every run will provide the latest usage information. You can automate it by scheduling it to run and emailing it to recipients. Lots of flexibility…
7.2 OTHER OPTIONS FOR AD-HOC REPORTS USING QUERY STUDIO
You can also create simple ad-hoc reports using Query Studio. A very simple example is shown here.
Log on to the Data Warehouse using Admin / admin123 (you must be logged on as Admin to use Query Studio).
From Public Folders, select the Chargeback data mart.
Select the Launch menu in the upper right corner of the view and select Query Studio.
The data mart is set up as a "Simple Data Mart" and an "Advanced Data Mart". The Simple DM contains the elements that most users use for reports; the Advanced DM contains all the facts and dimensions for all the elements. At this point we'll create this report using the Simple DM to show you how easy it is.
Expand the Simple DM and do the following:
Click and drag the Business Unit element to the palette.
Click and drag the Application element to the palette. You see the applications line up with their proper business units.
Click and drag Tier over to the palette to organize the storage usage by tier.
Click and drag "Provisioned Raw by GB" over. (You can select megabytes or terabytes as well as gigabytes; I've selected GB because this is from a volume and application perspective.)
To calculate cost, we need to add the "Tier Cost" element to the report:
Click and drag the "Tier Cost" element over and place it between the Provisioned Raw and Tier columns.
To filter out any storage without a tier cost associated, right-click the heading of the Tier Cost column and select Filter (see below for reference):
o Select "Show only the following" (default)
o Select "Missing values" to expand it
o Select "Leave out missing values"
o Select OK
See the results below.
Now let's calculate the total cost of usage by GB per application:
Hold the Ctrl key and highlight the "Provisioned Capacity" and Tier Cost columns until they show yellow.
Select the green Calculation icon at the top of the edit icons above, or right-click on the columns and select "Calculate".
In the calculation window, select multiplication, title the new column "Cost for Storage", and click Insert. This creates a new column and completes the calculation.
To format the column, right-click on the new column and select Format Data.
Select currency, the number of decimal places (usually 0), and the 1000s separator, and click OK. See how the column is formatted now.
Double-click the title on the report and retitle the report "Chargeback by Application and BU".
Now you don't really need the Tier Cost column, so you can delete it by right-clicking on the column and selecting Delete.
This is a good raw report, but now let's make it more useful.
To group storage cost by Business Unit and Application:
Select the Business Unit column (it turns yellow) and select the Group By icon on the top line.
You see the report reformat itself into cost by application by business unit.
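The Query Studio report boils down to a multiply-then-group-by. A sketch with made-up business units, applications, capacities, and tier costs (none of these values come from the demo database):

```python
# Sketch of the "Cost for Storage" calculated column: provisioned raw
# capacity (GB) multiplied by the tier's cost per GB. All values invented.
from collections import defaultdict

applications = [
    {"bu": "Finance",   "app": "GL",  "provisioned_gb": 500,  "tier_cost_per_gb": 2.50},
    {"bu": "Finance",   "app": "AP",  "provisioned_gb": 1200, "tier_cost_per_gb": 0.75},
    {"bu": "Marketing", "app": "CRM", "provisioned_gb": 800,  "tier_cost_per_gb": 0.75},
]

# The Calculate step: one new column per row
for a in applications:
    a["cost_for_storage"] = a["provisioned_gb"] * a["tier_cost_per_gb"]

# The Group By step: roll cost up to Business Unit
by_bu = defaultdict(float)
for a in applications:
    by_bu[a["bu"]] += a["cost_for_storage"]

for bu, cost in sorted(by_bu.items()):
    print(f"{bu}: ${cost:,.0f}")
```

The roll-up mirrors what the Group By icon does to the report layout.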
Click the "Save As" icon and save the report to the public folders.
Further editing:
You can go back and further edit the report like this:
Let's filter out all the N/A and NA entries in the BU and Application columns. You have to do this one column at a time.
Right-click the BU column and select Filter.
In the filter dialog window, select "Do not show the following (NOT)" from the "Condition" dropdown.
Select N/A and click OK.
Do the same for the Application column.
Then save the report again.
As you see, you now have a better quality report.
To exit Query Studio, click the "Return" icon at the top right corner of the screen.
8 SCHEDULING REPORTS FOR DISTRIBUTION
OK, now that the report is saved, let's schedule it for running and distribution. You can schedule all the built-in reports in OnCommand Insight the same way.
Go to the chargeback report we just created (you should be looking at where you saved it…).
Select the Schedule icon on the right-hand side, where you can set the properties.
As you see on the right, you can schedule start and finish dates.
You can send this report just one time by clicking Disable.
Set the schedule options for weekly, daily, monthly, etc. Schedule this report to run and be sent to yourself at 3 p.m. every Tuesday until Feb 1, 2012. As you can see, you can schedule biweekly, several times a week, or several times a day, or you can set it up by month, by year, and even by trigger. As you see, lots of options.
There are a lot of options for report format. The default format is HTML, but we can override that default by clicking and choosing from PDF, Excel, XML, CSV, etc.
For delivery, we can email the report, save it, or print it to a specific printer. You can send the report via e-mail to users, distribution lists, etc. We can include a link to the report or attach it directly to the email as well. NOTE: Recipients must be able to log in to the OnCommand Insight DWH to access the link.
When you are done, click OK, and the schedule is set.
9 ENDING COMMENTS AND FEEDBACK
I hope this lab was of value to you. Your feedback is important to the quality of this lab document. Please provide feedback to Dave Collins at davec@netapp.com.
NetApp provides no representations or warranties regarding the accuracy, reliability, or serviceability of any information or recommendations provided in this publication, or with respect to any results that may be obtained by the use of the information or observance of any recommendations provided herein. The information in this document is distributed AS IS, and the use of this information or the implementation of any recommendations or techniques herein is a customer's responsibility and depends on the customer's ability to evaluate and integrate them into the customer's operational environment. This document and the information contained herein may be used solely in connection with the NetApp products discussed in this document.
© 2012 NetApp, Inc. All rights reserved. No portions of this document may be reproduced without prior written consent of NetApp, Inc. Specifications are subject to change without notice. NetApp, the NetApp logo, Go further, faster, xxx, and xxx are trademarks or registered trademarks of NetApp, Inc. in the United States and/or other countries. All other brands or products are trademarks or registered trademarks of their respective holders and should be treated as such. TR-XXX-XX
However, please note that each portion of this lab builds on past steps, so it is assumed that you will learn how to navigate as you go through the lab. In other words, we stop telling you how to get somewhere and instead tell you to go there, because you've stepped through it in previous steps.
This lab is primarily built to run against the current OnCommand Insight demo database (Demo_V63.gz, dated 3/21/2012) and the DWH demo database (UPdated_dwh_Demo_630_with_performance.gz, dated 06/01/2012). Screenshots shown are accurate to those databases at the time of this publishing. Your results may vary depending on your database, but the functionality is the same regardless of the data. Total lab time is about 1-4 hours, depending on experience. It's a large book, but with lots of pictures. Your feedback is welcome; my address is at the end of this document. Have fun!
1 INTRODUCTION
Using 100% agentless technologies, OnCommand Insight automatically discovers all of your resources and provides a complete end-to-end view of your entire service path. With OnCommand Insight you can see exactly which resources are being used and who is using them. You can establish policies based on your best practices, enabling OnCommand Insight to monitor and alert on violations that fall outside those policies.
OnCommand Insight is a "read only" tool that inventories and organizes your storage infrastructure, enabling you to easily see all the details related to each device and how it relates to the other devices in the SAN and the entire environment. OnCommand Insight does not actively manage any component.
It is also a powerful tool for modeling and validating planned changes to minimize impact and downtime for consolidations and migrations. OnCommand Insight can be used to identify candidates for virtualization and tiering.
OnCommand Insight correlates discovered resources to business applications, enabling you to optimize your resources and better align them with business requirements. You can use this valuable information to help you reclaim orphaned storage and re-tier resources to get the most out of your current investments.
OnCommand Insight provides trending, forecasting, and reporting for capacity management.
OnCommand Insight enables you to apply your business entities for reporting on usage by business unit, application, data center, and tenant. OnCommand Insight provides user accountability and cost awareness, enabling you to generate automated chargeback reporting by business units and applications.
You can download a full customer-facing demo video from:
OnCommand 6.2 Full demo with Table of Contents for easy viewing (for best viewing, download and unzip the zip file and play the HTML doc from your local drive):
https://communities.netapp.com/docs/DOC-14031
BEGIN LAB
2 INVENTORY AND ASSURANCE NAVIGATION AND VIEWS
Let's take a look at the discovery and inventory of storage, switches, VM systems, etc.
Log on to OnCommand Insight by selecting the OnCommand Insight icon on the desktop.
Sign on using Admin / admin123.
From the main screen, select the dropdown icon that looks like a calendar at the top left corner of the screen and select Inventory > Hosts. You should see hosts from the demo database.
You can use this menu to select the Inventory, Assurance, Performance, or Planning categories. OR, to make it easier to navigate, you can activate the Navigation pane as follows (we recommend you use the Navigation pane, as it's easier when you start):
o Select Tools > OnCommand Insight Settings from the menu bar.
o Check the "Use Navigation Pane" box about halfway down the right panel of the settings box and select OK.
o Notice the full menu bar down the left side of the screen. This will make it easier for you to navigate.
2.1 INVENTORY
Expand the Inventory menu by clicking the Inventory button on the Navigation pane, as shown below.
STORAGE ARRAYS
o Select Storage Arrays from that menu. This opens the main view of storage.
o Select the NetApp array called Chicago.
o As you see, Insight provides a full inventory, including family, model number, serial number, and other elements. Use the scroll bar to see more columns to the right.
o Note the icons across the bottom of the screen. These are microviews. You can toggle them on and off to see more detail about what you have selected in the main view.
o Cycle through the microviews to get an idea of what is there.
o Select the Internal Volume and Volume microviews (as shown above).
o You can select the Column Management icon to add and subtract more columns from the view. Question: In which microview can you add the DeDupe Savings column?
GROUPING
You can group the information in any of the table views, in main and micro views, by selecting the dropdown in the upper right corner of the table view and selecting a grouping.
o Group the storage devices by model number. This enables you to see all of your arrays grouped by model number.
o Re-group the main view to No Grouping.
Select the Search icon at the top of the main view. In the search window below the main table, type NTAP and see the array selected. You can step through, looking for the next or previous occurrence of the word NTAP. Close the search window by clicking the X in the red box on the left.
SWITCHES
Using the same navigation, select Switches from the Inventory menu (no pictures here). View the switches and get the names, IP addresses, model numbers, and firmware in the main view. View the ports, zone members, and all other elements using the microviews. You can use the Inventory menu to provide views of hosts, storage, switches, VMs, etc. Let's look closer at paths.
PATHS
OnCommand Insight correlates the paths. The paths are the logical and physical elements that make up the service path between the application, VM, and/or host and its storage. Paths include Fibre Channel, cloud, iSCSI, NFS, block, NAS, etc. Now let's set up the views shown here; detailed steps are below.
Select Paths from the Inventory menu.
Group by Host, then Storage. As you can see here, from the Fibre Channel perspective, when we select a path we're looking at the exact paths between the host and its storage through the redundant fibre connections.
Select the Topology microview icon from the bottom of the main view.
Use Filter to find host kc_ordev7 by mousing over the top of the Host column until you see a funnel, then type kc in the Host column top cell.
Expand kc_ordev7 and the array ports in the main view.
Select the RED violation port of kc_ordev7.
Select the Zoning, Masking, and Port microviews.
In this particular case, you can see that you have one green path, which is good, across the Fibre Channel, but the other path is blue. See the legend on the right of the topology. WHAT IS NOT CONFIGURED? (Hint: Blue and yellow make green.)
ANALYZE PATH VIOLATION
OK, let's analyze the violation.
There are a couple of ways you can analyze violations. You can select the Violations microview from the bottom icons, or you can simply right-click the red path for kc_ordev7 in the main view and select "Analyze Violation". This opens the root-cause analysis screen below.
o Expand the violation and change information in the lower pane. What is the cause of this violation?
The violation tells you that the number of ports changed from 2 to 1, which went against the redundancy policy and caused the violation. If you look down at the last changes that occurred, you can see that masking was removed, which denied access from that host through its HBA to the FA port on the array and all the way to the volume. To fix the violation, the administrator needs to reverse those changes, and the violation automatically resolves itself.
Close the Analyze Violation screen.
VIRTUAL MACHINES AND DATA STORE
OnCommand Insight gives you a complete inventory of the VM environment. This is discovered through a data source configured to talk with Virtual Center. Details include all internal configurations and elements of the VMs, the ESX hosts, and the technologies, as well as all the information to correlate the path from a VM to its storage, and details about how that storage is configured down to the disk, including performance (if you have the OnCommand Insight Perform license installed). We'll discuss performance later in this demo.
Select Datastore from the Inventory menu.
Scroll down and select DS-30 in the main view.
Display the Topology and Virtual Machine microviews to show which VMs are using datastore DS-30.
Toggle through the microview icons below to show the details of VM and VMDK capacities, the storage, the backend storage in the arrays, and resources. You can see full end-to-end visibility from the host through a virtualized storage environment to the backend storage.
Note: You can also select Hosts and Virtual Machines from the Inventory menu and cycle through the microviews, noting the end-to-end visibility. Don't forget to use the microview icons.
3 ASSURANCE
3.1 APPLYING POLICIES TO MONITOR CONFIGURATION AND PERFORMANCE
Now that we've gathered the inventory and pulled all this information into the database, let's start to apply policies so we can monitor and get alerts on violations. We'll talk about setting global policies and changes, and the new Violations Browser. We'll show how we can analyze performance, and talk about port balance violations and disk utilization violations. Initially we set global policies within OnCommand Insight so it can monitor the environment and alert us when something falls outside those policies. There are several policies available.
Select Policy from the top menu bar.
Select Global Policies.
What thresholds can you set from here?
Select Violation Severity from the left menu of the Global Policy window.
What severities can you set for each threshold?
Select Violation Notification. What are the possible violation notification options?
3.2 FIBRE CHANNEL POLICY SETTINGS
We set Fibre Channel policies to determine and keep track of path redundancy. We can set options such as no SPOF (single point of failure) or redundant. We can set the minimum number of ports on the host and the storage, and the maximum number of switch hops. We can set exceptions for different volume types that would not necessarily require redundancy, like BCVs, R1, and R2. We can also set policy exceptions for smaller volumes that wouldn't have redundancy, like EMC gatekeepers.
From the Policy menu on the menu bar, select Fibre Channel Policy.
What types of redundancy can you set from here?
What is the default number of ports? Volume type exceptions?
What volume exemptions can you select?
You can also set redundancy policies on physical storage behind a virtualizer (Backend Path).
3.3 VIOLATIONS BROWSER
Let's take a look at the Violations Browser. The Violations Browser allows us to see the impact of violations on your business elements in one place and helps us manage the violations against all of those global policies you saw above.
From the Assurance menu on the left, select Violations Browser. (Note: At this point you might want to increase your viewing real estate by closing the navigation pane on the left. To close the navigation pane, go to Tools > OnCommand Insight Settings and uncheck the Navigation Pane box. You can use the same process to turn it back on later.)
Back in the Violations Browser, expand the All Violations explorer to reveal the violation categories. This shows violations like datastore latency, disk utilization, volume and internal volume IOPS and response times, port balance violations, etc. These should look familiar to you from the global policies that we just reviewed a few minutes ago.
As you can see, the violations all pile up here; there are over 12,000 violations. (NOTE: Don't let this scare you. Usually most violations are caused by events that create multiple violations per event. You fix one event and a bunch of these go away.) We can see detail on each of these violations by performing the following:
Select the Show Violations Impact icon to view all the violations in context by business entity, application, host, virtual machine, data center, etc.
Expand the Impacted Business Entity explorer and drill down to Earth Thermal Tracking.
Expand and select Disk Utilization.
Sort the Description column descending. Now you can see Element, Description, Severity, and Violation Type.
Select the top violation element, called Disk DISK-14 of Storage Virtualizer.
In the Impact Details microview, toggle the Host, Virtual Machines, Applications and Business Entities, and Storage icons to view the details of the impact of the violations. Here we see the impact on one application called City Limits, owned by one business entity called Green Corp > Alternate Energy > Geothermal > Earth Thermal Tracking, on one host. However, 10 virtual machines are affected by this violation from one array.
The chart in the Violation Event microview shows the history and trending of the utilization on this one disk over time. From here we can analyze the performance details, as we'll see later in this demo.
REVIEW:
What are the categories that show the impact of violations?
What business entity is impacted by these violations?
What is the utilization of this disk?
Which hosts are being affected by this violation?
Which VMs are being affected by the violation?
We'll do some troubleshooting using these violations and Analyze Performance later.
3.4 PORT BALANCE VIOLATIONS
Let's take a look at port balance violations. These are violations showing imbalance in SAN traffic from hosts, arrays, and switches. These are not performance-related violations.
Using either the navigation pane or the dropdown menu, open Assurance.
Select Port Balance Violations.
Group by Type, then Device.
Expand Hosts and sort the Device column ascending.
Select device Host nj_exch002. Note that this host has a balance index of 81. That measures the difference in distribution of traffic (the load) between the HBAs on this host. Any index over 50 indicates significantly unbalanced ports on a device.
Select the Switch Port Performance microview. Note that over 88% of the traffic distribution is going across one HBA and only 11% of the traffic is going across the other. A failure on the heavily used HBA could choke the application. This could indicate that port balancing software is not configured correctly on this server, not configured at all, or not installed.
Now collapse Hosts and expand the Storage devices.
Select various storage devices and view the traffic distribution in the Switch Port Performance microview to understand the balance across the storage ports.
These port balance violations provide valuable data on how your environment is configured and optimized. They allow you to quickly determine where you need to optimize your configurations based on actual usage. These are balance violations within each device, not necessarily traffic-related performance violations. We'll look at performance a bit later.
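This lab doesn't give OnCommand Insight's exact balance-index formula, but the idea can be sketched as each port's share of total traffic plus a spread between the busiest and least busy port. The traffic numbers below are invented; treat the index calculation as an illustration of the concept, not the product's algorithm:

```python
# Hedged sketch: compute each port's share of total traffic and a simple
# "balance index" as the spread between the busiest and least busy port.
# OnCommand Insight's actual balance-index formula is not documented here.
def distribution(traffic_by_port):
    """Percentage of total traffic carried by each port."""
    total = sum(traffic_by_port.values())
    return {port: 100.0 * t / total for port, t in traffic_by_port.items()}

def balance_index(traffic_by_port):
    """Spread (in percentage points) between the most and least loaded ports."""
    shares = distribution(traffic_by_port).values()
    return max(shares) - min(shares)

# Made-up HBA traffic counters for a two-HBA host like nj_exch002
hba_traffic = {"hba0": 880, "hba1": 120}
print(distribution(hba_traffic))    # hba0 carries 88%, hba1 12%
idx = balance_index(hba_traffic)
print(f"balance index: {idx:.0f}")  # spread of 76 -> over 50, significantly unbalanced
```

With this reading, a perfectly balanced device scores 0 and a single-port hog approaches 100, which matches the "over 50 is significantly unbalanced" rule of thumb above.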
3.5 DISK UTILIZATION VIOLATIONS
We looked at disk utilization violations in the Violations Browser a few minutes ago because we were alerted to a violation. But if you haven't looked at the error, you can go directly to Disk Utilization Violations here and troubleshoot your issues, similar to how we did in the Violations Browser. The difference is that the Violations Browser breaks down the violations by how they impact your business entities, applications, data centers, etc., so you can troubleshoot by business priorities, while Disk Utilization Violations lets you easily see your most critical utilization issues and troubleshoot from the disk utilization violation back to the hosts. You can also add columns to show applications and business entities if you want.
Let's take a look at how you can use Disk Utilization Violations to quickly identify and drill down to where your issue is.
Select Disk Utilization Violations from the Assurance menu.
Sort the Utilization column descending to bring your heaviest utilization to the top.
Here you see the utilization of each disk that exceeded the disk utilization threshold we set earlier in our global policy. For each violation you see the disk, the array, the hosts that access this disk, the date and time the violation occurred, and the percentage of utilization, as well as IOPS and throughput.
Select the disk with the highest utilization.
Now select the Volume Usage of Disk microview to get details on volume usage and performance.
Sort the Disk IOPS column descending and select the top Volume Usage of Disk row. Here I see the volume with the highest usage, along with the disk throughput and percentage info by volume and host.
Select the Switch Port Performance microview. I see that my load appears to be balanced (Distribution column) across the storage ports, so this is most likely not a SAN or network configuration issue.
Since this disk did cause a utilization violation, I can identify possible host candidates that could be causing the high utilization on the disk, OR I can see that the disk may have too many volumes carved from it, and I may need to spread that load out across more disks.
4 PERFORMANCE
OnCommand Insight provides performance information from end to end. This is different from the violations we discussed above in that it provides pure performance information for volumes, internal volumes (FlexVols), storage pools (aggregates), and disks. Performance also covers VMs, switches, ESX, Hyper-V, VMDKs, and datastores. From here we can troubleshoot congestion, contention, and bottlenecks; identify heavily used storage pools, volumes, disks, and SAN ports; find possible candidates for physical-to-virtual host virtualization; and optimize your storage and tiering.
From the navigation pane or the dropdown menu, expand the Performance menu.
Here we see that OnCommand Insight collects and shows you storage performance, switch performance, datastore performance, VM performance, and even application performance as it relates to storage performance.
4.1 STORAGE PERFORMANCE
From the Performance menu, select Storage Performance.
Sort the Top Volume IOPS column descending (if not already done).
Select the array called Sym-0000500743… in the main view (it should be near the top).
Use the scroll bars to see more performance info in all windows.
Use the horizontal slide bar in the main view to see the volume response time and IOPS columns, as well as the disk utilization and IOPS columns (far right). Notice there is no internal volume performance information, because the EMC Symmetrix does not contain any. We'll look at a NetApp array shortly to see internal volume (FlexVol) performance.
Now, in the main view, notice the column called Volume Partial R/W. This indicates there are volumes on that array that are misaligned (we'll see more detail later).
Select the microviews at the bottom that show you details of disk performance and volume performance. Which microviews did you open? (Hint: view below.) Notice this provides detailed throughput, IOPS, and response times at the volume and disk level.
Close the Disk Performance microview.
Select the column customize icon in the header of the Volume Performance microview.
Use the vertical scroll bar to view all the columns that can be added to or removed from this report.
Select the Partial R/W and Storage columns and click OK. This adds columns to the Volume Performance report showing you each volume on each array that is misaligned. (Note: you can get a complete list of all your misaligned volumes by selecting all the arrays in the main view above.) Additionally, you can group the volumes by storage to make it easier to view all the misaligned volumes across your entire enterprise by array. (See figure below.)
Now select the Symmetrix-FAST array and toggle the Chart microview on. Here you see OnCommand Insight showing EMC FAST auto-tiering. You can also see NetApp hybrid aggregates by selecting a NetApp array. We can also chart this performance over time.
OnCommand Insight provides complete end-to-end performance views through virtualized storage. Let's take a look.
Select the storage array called Virtualizer from the main view. (Note: This is a V-Series machine, but OCI provides the same visibility through other virtualizers as well.)
Toggle on the Virtual Machine Performance microview.
Toggle on the Backend Volume Performance and Datastore Performance microviews.
Use the slide bars at the bottom of the microviews to see more of the performance columns in each view. (Whoops… there is a red mark in the Latency column. We will analyze this later.)
I know this is a bit busy, but I wanted to demonstrate that you have deep performance visibility from the VM through the datastore to the frontend virtualizer array and through the backend volumes. You can also drill performance down to the disks on that backend array, and you can select the Switch Port Performance microview to visualize the performance on the SAN, so you can see very deep performance information from end to end. We will "analyze" these performance views from end to end a bit later.
4.2 SAN PERFORMANCE (SWITCH PORT PERFORMANCE)
Switch performance is the actual performance on the SAN at the switch.
Select Switch Port Performance from the Performance menu. OnCommand Insight knows whether the switches are connected to arrays or hosts, so it shows you the performance in the context of the host or array instead of from the switch perspective.
Using the dropdown at the top of the table group the main view by ldquoConnected Device
Type then Namerdquo
Using the dropdown next to it set the timeframe to ldquoLast Weekrdquo and hit the refresh
icon to the right
Sort the Distribution column Descending (arrow pointing down)
Expand Hosts
Expand hosts ny_ora1 and exchange_ny1
If you look at the Value and Distribution columns, you can see how HBAs are balanced on these hosts. On host ny_ora1 you see three HBAs that are balanced very well. But looking at host exchange_ny1, you see that one of your two HBAs has over 95% of the traffic load on it while the other has less than 5%. So you can see an imbalance of the load across your HBAs. Perhaps the multipath software is not configured correctly, doesn't work, or is installed but not turned on. However, also note that one HBA is 4Gb and the other is 2Gb. The admins may have purposely configured this host's traffic to compensate for the slower HBA…
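The Distribution column is simply each HBA's share of the host's total traffic. A minimal sketch of that calculation (the traffic values below are hypothetical, not from the lab database):

```python
def hba_distribution(traffic_mbps):
    """Return each HBA's share of total host traffic, as percentages."""
    total = sum(traffic_mbps.values())
    return {hba: round(100.0 * v / total, 1) for hba, v in traffic_mbps.items()}

# Hypothetical values mirroring host exchange_ny1: the 4Gb HBA carries
# almost all the traffic while the 2Gb HBA is nearly idle.
print(hba_distribution({"hba0_4gb": 380.0, "hba1_2gb": 20.0}))
# → {'hba0_4gb': 95.0, 'hba1_2gb': 5.0}
```

A near 95/5 split like this is what flags the multipathing question in the first place.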
Select the Port Performance Distribution and Port Performance microviews to view
this analysis over time
Select exchange_ny1 from the main menu above. View the performance and distribution of both HBAs. If you select one or the other, the performance and distribution charts change to show you the details of what you've selected. This function is the same throughout the OnCommand Insight GUI.
43 CANDIDATES FOR HOST VIRTUALIZATION BASED ON ACTUAL PERFORMANCE
This performance view from the host SAN perspective shows you which are the busiest servers and which are candidates for virtualization.
Toggle off the 2 performance charts
Collapse the expanded columns using the Collapse All Groups ICON on top of the
main view
Expand Hosts again. Notice your busiest servers are at the top of the list.
Use the vertical slide bar to go to the bottom of the host list to see your least busy hosts. As you see here, there are many hosts down near the bottom that have hardly any traffic. Note: If you have a virtualization project going on, you can very quickly isolate which physical hosts don't have much traffic to their applications and conduct your due diligence on those applications for possible relocation to a VM environment.
You can also use the same information here to choose which ESX hosts are good candidates to move those applications based on how much traffic they are generating on the SAN
44 STORAGE ARRAY PERFORMANCE BASED ON SAN TRAFFIC
We use the same logic and methods to optimize the traffic across the storage ports of the arrays.
Collapse the Hosts section and expand the Storage section
You can see the busiest arrays at the top
Expand storage array XP 1024 to see the traffic flow through the storage ports. In this case over 80% of the traffic is going across two of the six ports on the storage array. Not very well balanced. You can rebalance this traffic, OR, using this information, you can select a lesser-used storage port to provision your NEXT Tier 1 application to. This helps you intelligently provision and optimize your environment using real traffic analysis.
45 STORAGE TIERING AND ANALYSIS
Just as you did with the hosts, you can see the storage arrays that are NOT so busy.
Scroll to the bottom of the Storage Array list
There are several expensive Tier 1 Symmetrix and other arrays at the bottom of this list that have very little traffic accessing them. These arrays may have lots of data on them, but nobody's using it. Armed with this information, you could take a look at the application data on these expensive Tier 1 arrays and move the applications to less expensive Tier 2 or Tier 3 arrays, OR archive the data. Then you can decommission or repurpose these expensive arrays (LOTS of ROI potential here).
46 SWITCH ISL TRAFFIC VISIBILITY AND OPTIMIZATION
OnCommand Insight shows you only the ISLs (Inter-Switch Links) under the switches category.
Collapse the Storage category in the main view and expand the Switch category
Expand Switch 78 and hcis300
As we saw with hosts and arrays, we can see exactly how well balanced the traffic is across the ISLs. Switch hcis300 is well balanced, but on Switch 78 we see that 90% of the traffic is going across one switch link and only 9% across the other. If this is a trunk, it is severely out of balance.
We also see which are the busiest and least busy switches. This allows us to balance out (optimize) our environment as well as weed out the least busy switches.
47 VIRTUAL MACHINE AND DATA STORE PERFORMANCE TROUBLESHOOTING END TO END PERFORMANCE ISSUES USING "ANALYZE PERFORMANCE"
Let's put all this performance information to good use.
USE CASE: I may have gotten a call from a user complaining that the application on VM-70 is running slow, or I may have received an alert from a threshold being breached. Let's troubleshoot the problem.
Select Virtual Machine Performance
Select Custom from the "Timeframe" dropdown menu next to the grouping menu. Enter the dates January 1, 2012 through now.
Then hit the green Recycle button next to the dropdown
Sort the VM Disk Top Latency column descending to get the longest latency at the top
Here we see that in fact VM-70 does not appear to have any performance issues, but we do see very high CPU, Memory, and Datastore latency on VM-60 and VM-61.
Look at column 2. The common factor between VM-70 (the user complaint) and VM-60 is DS-30.
Open the Datastore Performance microview to validate the high latency time
Right click on VM-60 and select Analyze Performance
This opens an analysis of everything associated with VM-60 and DS-30
See the tabs across the top of the window Each of these tabs provides in-depth visibility into performance within each category
Selecting the Disk tab, I see that although I have a few high "top" utilization values, overall utilization and IOPS are relatively low, so we can rule out a hot-disk issue.
Select the Volumes tab and the Internal Volumes tab. I see there are some relatively high Top Response times but still very low IOPS, which tells me other factors are affecting response time and the slowness of the application on VM-70.
Select Backend Volumes. We see the storage is virtualized, and we can see the performance on the backend volumes here. I see some possibly higher IOPS, but still no glaring issues in performance.
To make sure I don't have a SAN problem, I select the Switch Performance tab. It shows an imbalance between the 2Gb HBAs on ESX1 (where VM-60 and VM-70 are) and a potential optimization or outage issue, but no gridlock.
Select the Hosts tab. This tab shows me that host ESX1 is the same host that holds VM-60 and VM-70. VM-60 appears to be causing very high CPU and Memory usage, which is causing contention with time sharing during disk access, thus creating high Disk Latency. But the Disk IOPS are still very low.
Deduce that VM-60 is probably not sized right for the application that is driving it hard. This is probably what's causing the disk latency issue, so the chances of a disk issue are slim.
48 VM PERFORMANCE
VM Performance helps you troubleshoot the same scenarios. Here you can understand what's going on in the whole environment.
Select Virtual Machine Performance
Sort the Top Disk Latency column in descending order so the largest latency rises to the top. In this case VM-61 is chewing up a lot of memory and a lot of CPU time but using low Disk IOPS. The VM appears to be causing the latency issues.
Select VM-61. You can open a microview and see the VMDK performance as well.
Add a chart microview
You can also break it out by volume performance and datastore performance, giving you a more holistic picture of the environment and helping you troubleshoot to resolution.
The takeaway is you can troubleshoot performance issues from many different angles and go in many different directions to quickly narrow down the problem
49 APPLICATION AND HOST PERFORMANCE
You can add your applications and hosts to any of these performance views to help you understand how your performance is affecting your applications. That is important to the business customer. You can drill down and understand where the performance issue is, with visibility from the application all the way to the disks.
Scroll down to ESX1
Use the horizontal slide bars in the main and microviews to see performance info
OnCommand Insight shows you performance from the host perspective all the way back to the storage, but remember it does not have agents on the host, so it cannot show you the details of the performance on the host itself.
Review questions:
What is the value of Analyze Performance?
In which areas can we view performance metrics under Analyze Performance?
5 PLANNING TOOLS
51 TASK AND ACTION PLANNING AND VALIDATION
OnCommand Insight has two planning tools to help you plan, validate, and monitor changes in your environment. One is a change management tool and the other is a migration tool for switches only.
The change management tool (or What-If) helps you create tasks, and actions within those tasks, using a wizard. It helps you logically configure the changes that you need to make, test and validate those changes before you make them, and monitor the progress of changes as you make them. This significantly reduces your risk when making changes, because you can pretest them before you make any actual changes in your environment.
NOTE: Remember, OnCommand Insight is a READ ONLY tool, so it does not perform any active tasks. Use it in the planning, validation, and execution monitoring of your change management.
Select Planning Menu
Select Plans to access the tool
Select the task ID oadmin 01082007 – Replace HBA Clearcase1
Notice the Actions list for the task. These are generated by you to help you logically and accurately list out the tasks.
To add more actions, simply right click in the action area and select "Add Action"
In the new action window, scroll down and select the action you want to perform. You can add a description and other parameters, then select OK
Then you can pre-validate the actions to ensure you know the results of each action BEFORE you actually perform the task. To do this, right click the task and select Validate Task.
As you see below, OnCommand Insight validates each action against the current configuration in your environment to show what has been completed correctly (GREEN CHECKMARK), what is not completed (BLANK BOX), and what is not completed correctly (RED X).
When you build the action list, OnCommand Insight automatically compares your planned changes to your existing environment and anticipates any future violations that could occur if you made these changes without correcting the planned actions, OR violations that already exist in your environment.
Once you complete creating your list of action items, you can right click and validate the actions as many times as you want until they are completed. OnCommand Insight validates every one of these actions. It will show you whether the actions are complete, done wrong, or not completed at all. It gives you a preview of potential issues before you make the changes, thus lowering your risk.
52 SWITCH MIGRATION TOOL
The migration tool provides you with instantaneous visibility into all of the environment and business entities that will be affected by a migration to new or updated switches. Say you want to just update the firmware on a switch. What if… it goes down in the middle of the upgrade? What does it affect in your environment? Knowing this ahead of time can reduce your risk by giving you the complete picture of who and what will be affected by the interruption.
The Migration tool allows you to tell OnCommand Insight which switches you want to upgrade or replace. Because OnCommand Insight knows all the hosts, storage arrays, volumes, business units, and applications that are affected by this change, it can provide you with the current violations as well as the future violations that will occur when the switches are pulled out. This enables you to validate the total impact of the changes you want to make BEFORE you make them, so you can reduce your risk by fixing issues before they occur.
NOTE: Remember, OnCommand Insight is a READ ONLY tool, so it does not perform any migration tasks. Use it in the planning and execution monitoring of your migration.
Under the Planning menu, select Migrations. This shows you the migration tasks already created and the existing impact of proposed changes on your business entities.
To add a new task right click on the task area and select Add Task
Complete the task details above and click Next to select the switch(es) to migrate
Select the switches to be updated or replaced and click Finish
Select the new task in the main screen and use the microviews to provide you with affected paths, impact, and quality assurance views
Using this information, you can speed up the time it takes to migrate switches, because it cuts the due diligence time and lowers your risk: you know the impacts before you take any actions.
6 DATA WAREHOUSE
61 INTRODUCTION AND OVERVIEW
Let's introduce you to the data warehouse. We'll talk about the Datamarts and navigation, then we'll go into the reports, and we'll finish by showing you how to create ad-hoc reports using Query Studio.
The data warehouse is made up of several Datamarts. Datamarts are sets of data that relate to each other.
Open a browser and go to http://localhost:8080/reporting
Log on using admin/admin123
If you receive this page, uncheck "show this page…" and select My Home
Data warehouse (DWH) Home Page – Public Folders
There are several built-in Datamarts throughout the data warehouse (DWH). Above you see the three primary Datamarts: the Chargeback Datamart, the Inventory Datamart, and the Storage Efficiency Datamart. Additionally, we have two folders which contain other Datamarts for Capacity and Performance.
Select the Capacity 63 folder
As you can see, there are other capacity-related Datamarts here, including the Internal Volume, Volume, Storage and Storage Pool, and VM Capacity Datamarts. Datamarts provide you with easy-to-use data elements related to those specific categories, making it easier for you to use the existing reports, but more importantly helping you create your own custom reports using the drag-and-drop technology we'll show later in this lab.
Select the Storage Capacity Datamart
There are 4 folders located within EVERY Datamart. Most built-in reports are in the Reports folder. Any custom reports you create MUST be saved in the Customer Report or Customer Dashboard folders in order to preserve them during upgrades.
Select Dashboards (notice the BREADCRUMBS to help you navigate)
Which dashboards are located in the folder?
62 PLAN - CAPACITY FORECAST DASHBOARD
The data warehouse has over 200 built-in dashboards and reports. Let's take a look at a few.
The capacity forecast dashboard provides a history of how storage has been used, as well as trends and forecasts out into the future. It shows this by data center and by tier.
Select the Capacity Dashboard. This may take a bit of time to paint, so be patient.
The capacity forecast dashboard provides you with trending and forecasting of your capacity across your entire environment. NOTE: your data in the picture may vary depending on the demo db you are using and the date (because it's a trending chart).
While we are at it, let's also stage the tiering dashboard in a new window by holding the Shift key and selecting the Tiering Dashboard, so we can discuss it as well in a few minutes.
When it first opens, you see in the upper left the Capacity Consumption Forecast report by datacenter and tier. The initial view shows how much storage is left in each datacenter, by tier, before it reaches 80% (adjustable by the user) of capacity. The graph on the right depicts the usage trending and forecasting over time. The "Reset Selection" button resets the graphic to show storage trending across the entire enterprise.
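Under the hood, a forecast like this is just a trend fit projected forward to the threshold. A minimal sketch under that assumption (simple linear fit; the monthly usage numbers are hypothetical, not from the demo database):

```python
def months_until_threshold(used_tb, capacity_tb, threshold=0.80):
    """Fit a straight line to monthly usage samples and project how many
    months until usage crosses threshold * capacity.
    Returns None if usage is flat or shrinking."""
    n = len(used_tb)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(used_tb) / n
    # Least-squares slope of usage vs. month index
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, used_tb)) / \
            sum((x - mean_x) ** 2 for x in xs)
    if slope <= 0:
        return None
    limit = threshold * capacity_tb
    return (limit - used_tb[-1]) / slope  # months from the last sample

# Hypothetical tier growing 5 TB/month toward the 80% ceiling of a 200 TB pool
print(months_until_threshold([100, 105, 110, 115, 120], 200))
# → 8.0
```

The dashboard's per-datacenter, per-tier matrix is this same projection computed once per cell.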
Select the Tokyo/Gold-Fast block on the matrix. Notice the graph at the right changes to reflect the storage consumption trending and forecasting for that tier at that datacenter.
Reset the Storage Capacity Trend chart by clicking Reset Selection next to the matrix chart. The chart on the right will show the trending and forecasting for the entire enterprise.
Scroll down the dashboard to view the list of reports on the right side. Each of the dashboards has a list of related reports on the lower right-hand side. You can select from any number of different reports to provide the detailed information that you need.
The dashboard also contains some dial graphics showing you storage consumption and capacity in your enterprise and each datacenter
Continuing down the left side of the dashboard, these charts show you business-level storage consumption by business entities. Here we can drill down to see usage by Tenant, Line of Business, Business Unit, and Project.
Right click in this graphic and you can drill down to view storage usage by line of business, drill again to business unit, and again by project.
As you can see, you get really detailed information on consumption by your business entities, from Tenant, LOB, Business Unit, and Project to Application, in a very quick form.
63 TIER DASHBOARD
Let's take a look at the Tier Dashboard that we opened up a few minutes ago by selecting it from the tabs at the bottom of your Windows screen.
Note: Your data may vary depending on the database used for this demo.
This dashboard gives us a different perspective on how storage is growing and how it is being used. As you see, it looks like the Gold tier has remained relatively stable over the past few months, while Gold-Fast storage, which is more expensive, has grown considerably over the past couple of months. This tells you how your tiering initiatives are progressing. Bronze, which has hardly grown at all, could be an indication that we're spending too much money on storage. You might want to review your storage usage using OnCommand Insight to see how the storage is being consumed and by whom.
Scroll down. Let's look a little closer. OnCommand Insight shows storage usage by business unit, application, and tier. This enables you to understand how storage is being used. You can also view it by data center, tier, and business entity.
As we did in the last report, you can right click and drill down to look at consumption by tenant, line of business, business unit, project, and application. You can understand how your data is being consumed at multiple levels and from multiple aspects.
Select the "Return" ICON at the top right of the Tier Dashboard to return to the folder
There is a new Storage Tier report located in the Storage and Storage Pool Datamart. Let's take a quick look at it.
Use the breadcrumbs to navigate back to the Capacity 63 folder
Then select the Storage and Storage Pool Capacity Datamart and the Reports folder
Next select the Storage Capacity By Tier report to view the report below. This report shows your capacity by tier and how it trends over time. It also provides detail and summary tables at the bottom showing each array, its tiers, how much capacity is used, and the percentage (lots of information in a single report).
64 ACCOUNTABILITY AND COST AWARENESS
The standard data warehouse chargeback reports are more about accountability than about chargeback itself. We'll show you this now. We'll also show you how to create your own powerful "custom" chargeback/showback reports using Business Insight Advanced later in this lab.
Select Public Folders in the breadcrumbs at the top left of the data warehouse window
Select the Chargeback Datamart
In the Chargeback Datamart, select the Reports folder to access various reports that show capacity and accountability
Select Capacity Accountability by Business Entity and Service Level Detail. Here you have the option to customize this report to your needs by selecting service levels, resource types, applications, and host and storage names. You also have the option of selecting the business entity by using the dropdown to select any or all of the business entities and projects.
Select all in each category to give you a good representation of the in-depth reporting
Then click Finish
The report provides a very detailed view of capacity utilization by business entity and application, including the host it's running on, the storage array, the volume, and the actual provisioned and used storage. The report is grouped by business unit as well as application; this gives you a good representation of who's using what storage.
Note the scroll bar for scrolling on page 1, and you can also use the Page Up/Page Down links at the bottom to go to page 2, etc.
Select the Return Icon in the upper right to return to the folder of reports
65 UNCHARGED STORAGE
You can also generate reports that help you understand what storage is NOT being accounted for.
Select "Capacity Accountability by Uncharged Capacity per Internal Volume". This provides you with a complete listing, by array and volume, of how much storage is not being charged or accounted for.
You get FULL accountability of which storage is being accounted for and which storage is NOT, across the entire enterprise, regardless of storage vendor.
66 IOPS VS CAPACITY REPORTING IN THE DATA WAREHOUSE
Let's look at performance versus capacity, and orphaned storage by last access. This adds another dimension to how your storage is being used.
Open the Performance Datamart (hint: use the breadcrumbs to select Public Folders and then select the Performance Datamart)
Select the Internal Volume Daily Performance folder. This provides a really good pictorial view of how your storage is being used.
Select Reports and select Allocated used internal volume Count by IOPS Ranges. This provides a capacity-versus-IOPS report, which is very interesting.
Select the Last Year time period
Select All storage models and tiers and click Finish
Selecting all arrays and all tiers gives you a full view of how your storage is being used (or not being used…)
Looking at the results: remember, this is storage accessed over the past year. The resulting report shows you all the storage that has (or has not) been accessed over the past year.
As you see from the first bar, there are over 7300 volumes that have not been accessed in the past year. If we look at it in terms of size, over 3.4 PB has had zero accesses in the past year. (Note: this is actual customer data, but the names have been sanitized.)
You can see how impactful this is. There is over 3.4 PB of storage that has had zero use for a year. This information enables you to start making business decisions on that storage and to better understand how it's being used, so you can reclaim and repurpose some of it. (Talk about ROI!)
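The report's logic boils down to filtering volumes whose peak IOPS over the period never rose above a floor, then summing their capacity. A minimal sketch (the volume rows and field names below are hypothetical, shaped loosely like the report's detail table):

```python
def reclaim_candidates(volumes, min_iops=1.0):
    """Return (count, total_gb) of volumes whose peak IOPS over the
    reporting period stayed below min_iops — reclaim/archive candidates."""
    idle = [v for v in volumes if v["max_iops"] < min_iops]
    return len(idle), sum(v["capacity_gb"] for v in idle)

# Hypothetical rows: two volumes untouched for the year, one active
vols = [
    {"name": "vol01", "capacity_gb": 500, "max_iops": 0.0},
    {"name": "vol02", "capacity_gb": 250, "max_iops": 140.0},
    {"name": "vol03", "capacity_gb": 800, "max_iops": 0.0},
]
print(reclaim_candidates(vols))  # counts vol01 and vol03
```

Scaled up to thousands of volumes, this is exactly the count-and-petabytes headline the chart shows.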
67 DIGGING INTO THE DETAILS
These charts are really nice, but you need the details to effectively work on identification and recovery. OK, let's go look at the underlying details.
Go back to the Volume Daily Performance 63 folder and drill down to Reports (hint: it's in the Performance Datamart)
Select the Array Performance report. This gives you a complete breakdown of the performance for all storage, from the arrays all the way down to the volumes.
Select one year and set the IOPS parameter you want to filter on (I usually start at the default)
This report starts with the Orphan Summary
Page down to view the storage array summary
As you see, this is pretty high level. It shows the total amount of raw and allocated capacity in each storage device versus the total IOPS and the max IOPS actually used over the past year. This tells a very compelling story, but it's still high level.
Page down a few pages to reach the bottom of this section. You see a glossary of terms explaining the column headings.
Now continue to page down to the Host tables. These show you the hostname, the raw and allocated capacity by host, and the IOPS accessed over the past year. This is more detail than the Storage tables above.
Page down past the host tables and you get the orphaned-volumes perspective. Here is a great deal of detail that you can use. These are all the volumes that have not been accessed in a full year. It shows you the array name, volume capacities, and hostname, as well as the applications and tiers, for everything that has not been accessed in the last year.
Page down to the "Volume by IOPS" tables (may be several pages down). These show you the storage array, volume, capacity, host, application, tier, and max and total IOPS. So we can say it's a pretty well-rounded report that shows you actual usage (or lack thereof), so you can go reclaim the storage that is not used.
68 VM CAPACITY REPORTING
There are several different reports in the VM capacity Datamart
Navigate to the VM Capacity 63 Datamart
As you see we have several reports built-in here already
Select VM Capacity 63 and then navigate into the Reports folder
Select VM Capacity Summary
Select All so we see the VM capacity across the entire enterprise (spanning multiple vCenters)
The results show all the VMs, their capacity, the datastore, the actual capacity, the VM names, the provisioned storage, and the commit ratio of each VM across your entire environment. NOTE: I paged down to the bottom so you can see the total storage and commitment across your whole enterprise, plus a glossary of terms.
Select the "return" button in the upper right corner of the report (looks like a left-turn arrow)
Next select the Inactive VMs report to show VMs that have not been accessed in a defined period of time (default 60 days)
Set this time threshold and click Finish
This is an excellent report showing you which VMs are powered off and how long they have been powered off, as well as how much capacity each one of them is holding that nobody else can use. It gives you all the details, including the datacenter, VM OS, ESX host, cluster, and VMDK, and how long it's been powered off. Armed with this information, you can go recover these VMs and reclaim the storage.
7 CREATE AD-HOC REPORT
Let's show you how easy it is to create custom reports in the data warehouse.
71 HOW TO CREATE A CUSTOM SHOWBACKCHARGEBACK REPORT USING
BUSINESS INSIGHT ADVANCED
Below is a great example of the custom chargeback or showback report that you will create. It shows usage by business entity and application, including the variable cost of each VM based on configuration, fixed overhead, and storage usage.
STEPS TO CREATE THIS REPORT
Watch a video on how to create this report. Note: You need a user name and password for this community. To obtain them, click the Become a Member link.
The OnCommand Insight Reporting Portal is accessed through http://<reporting-server>:8080/reporting
Enter your user name and password credentials
From the Welcome page select My home
From the Launch menu (at the top right corner of the OnCommand Insight Reporting
portal) select Business Insight Advanced
From the list of all packages that appears, click the Capacity <version> folder and then click VM Capacity <version>
Create a new report by selecting New from the dropdown in the upper left corner, or Create New if you are on the Business Insight Advanced landing page
From the pre-defined report layouts in the New pop-up, choose List and click OK
In the lower right pane select the Source tab and expand Advanced Data Mart
from the VM Capacity package
From the Advanced Data Mart expand Business Entity Hierarchy and Business
Entity and drag Tenant and place it on the report work area
Collapse Advanced Data Mart and expand Simple Data Mart
From the Simple Data Mart, drag Application and place it on the report work area to the right of the Tenant column (TIP: make sure you place it on the blinking gray BAR to the right of the previous column, or it will give you an error)
Now we are going to drag multiple columns to the palette to save time building the report.
We will be reporting on the total number of processors (cores) and the memory that is configured for each VM. So let's grab the following elements from the VM Dimension under the Advanced Data Mart
From Advanced Data Mart expand VM Dimension
Select the next columns IN THE FOLLOWING ORDER
From Advanced Data Mart > VM Dimension, hold the Control key and select the following columns (in order)
o VM Name
o Processors
o Memory
Click and drag VM Name and place it on the report work area to the right of the Application column. NOTE: All the columns should follow in the order you selected them, similar to the screenshot below (your data will differ, but the columns will be the same).
Now let's bring Capacity information onto the report.
From Simple Data Mart hold the control key and select the following columns (in
order)
o Tier
o Tier Cost
o Provisioned Capacity (GB)
Click and drag the Tier column and place it on the report work area to the right of the Application column. NOTE: All the columns should follow in the order you selected them, similar to the screenshot below (your data will differ, but the columns will be the same).
To calculate the storage cost (cost per GB times provisioned GB), hold the Control key and select Tier Cost and Provisioned Capacity (GB)
Then right click the Provisioned Capacity column, select Calculate, and select the multiplication calculation
Business Insight Advanced has created a new column for you, completed the calculations, and put it in the report.
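The calculated column is just tier cost per GB multiplied by provisioned GB on each row. A tiny sketch of the arithmetic (the tier costs below are hypothetical, not from the demo database):

```python
def storage_cost(tier_cost_per_gb, provisioned_gb):
    """Mirror the report's calculated column: cost/GB * provisioned GB."""
    return round(tier_cost_per_gb * provisioned_gb, 2)

# Hypothetical rows: the same 120 GB volume on a Gold vs. a Bronze tier
print(storage_cost(3.00, 120.0))  # 360.0
print(storage_cost(0.50, 120.0))  # 60.0
```

Doing the multiplication per row is what lets the later grouping by application and tenant sum the costs correctly.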
Next, let's format and re-title the column.
Right click on the new column head and select Show Properties
In the lower right corner, scroll down to the bottom of the properties box and select the ellipsis on the Data Item Name box. Change the name to Storage Cost and click OK
Note the column heading is now Storage Cost
Now select one of the numeric values in that column and select Data Format
ellipsis from the properties box in the lower right corner
From the Data Format dialog box select currency from the Format type dropdown
As you see from the Properties dialog box, there are lots of options you can set to format the currency numbers in this column. The default is USD, so let's just click OK to accept the default. You will see the column reformat to USD.
Here is our current report. Let's filter out storage that is NOT being charged.
Select any BLANK cell in the Tier Cost Column and click on the filter ICON in the
top toolbar
Select Exclude Null
Here is our current report. Notice all the rows that had NO cost associated with those tiers are deleted, leaving you with only the storage that has charges associated with it. (TIP: in another report you can actually reverse the logic and show only storage that is NOT being charged as well…)
You can also format the Tier Cost column with USD currency as well if you want
OK, that was easy, but not complete. Let's add other cost factors into your chargeback report: the cost of VM service levels by configuration, and the fixed overhead costs used by each application.
ADDING VARIABLE COSTS PER VM TO YOUR CHARGEBACK REPORT
Let's say the customer wants to charge per VM based on the number of CPUs and the amount of memory it's configured with. To do that, we first need to create a VM Service Level based on the number of CPUs and the memory configured for each VM, then allocate a cost per Service Level.
To create a VM Service Level, we are going to drop in a small conditional expression that builds the Service Levels per VM. This is an easy example of the flexibility of Business Insight Advanced in creating reports. (Don't panic: you can skip the conditional expression and just put a fixed cost on each VM if you want; see the Overhead example later on. But humor me here in this lab.)
Select the Tier column to mark where we want to insert the new columns.
Select the Toolbox tab at the lower right corner and double-click the Query Calculation icon.
In the Create Calculation dialog box, name the column VM Service Level, select Other Expression, and click OK.
In the Data Item Expression dialog box, copy and paste the following VM Service Level conditional expression into the Expression Definition box and select OK. (Note: if you are remoted into the OnCommand Insight server, you may have to create a text document on the OnCommand server desktop to paste this into before pasting it into the Expression box.)
Below is an example of the conditional expression that gives you the if-else logic for VM Service Level:
IF ([Processors] = 2 AND [Memory] < 2049)
THEN ('Bronze')
ELSE (IF ([Processors] = 2 AND [Memory] < 4097)
THEN ('Bronze_Platinum')
ELSE IF ([Processors] = 4 AND [Memory] < 8193)
THEN ('Silver')
ELSE IF ([Processors] = 4 AND [Memory] > 8193)
THEN ('Silver_Platinum')
ELSE IF ([Processors] = 6 AND [Memory] > 8191)
THEN ('Gold')
ELSE IF ([Processors] = 8 AND [Memory] > 16383)
THEN ('Gold_Platinum')
ELSE ('tbd'))
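To make the classification logic easier to check, the same thresholds can be sketched in Python. This is an illustration of the expression as written (function and parameter names are my own, not part of the product):

```python
# Sketch of the VM Service Level logic from the conditional expression
# above, mirroring its processor counts and memory (MB) thresholds.
def vm_service_level(processors, memory_mb):
    if processors == 2 and memory_mb < 2049:
        return "Bronze"
    elif processors == 2 and memory_mb < 4097:
        return "Bronze_Platinum"
    elif processors == 4 and memory_mb < 8193:
        return "Silver"
    elif processors == 4 and memory_mb > 8193:
        return "Silver_Platinum"
    elif processors == 6 and memory_mb > 8191:
        return "Gold"
    elif processors == 8 and memory_mb > 16383:
        return "Gold_Platinum"
    else:
        # Anything not matched (e.g. exactly 8193 MB on 4 CPUs)
        # falls through to 'tbd', just as in the expression.
        return "tbd"

print(vm_service_level(2, 1024))   # Bronze
print(vm_service_level(4, 16384))  # Silver_Platinum
```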
Business Insight Advanced will validate the conditional expression (nice to know if you got it right), create the column called VM Service Level, and populate it based on the query. (If you get an error, your conditional expression probably has a syntax or other error.)
You will see a new column added called VM Service Level, with the various Service Levels for each VM based on the number of CPUs and the memory each has. (At this point there may be duplicates in the list, but not to worry; we are not finished formatting or grouping the report.)
Next, let's add a column that calculates the cost per VM based on the Service Levels we just established.
Select the Toolbox tab at the lower right corner and double-click the Query Calculation icon.
In the Create Calculation dialog box, name the column Cost Per VM, select Other Expression, and click OK.
In the Data Item Expression dialog box, paste the conditional expression for Cost Per VM (below) into the Expression Definition box and select OK.
Example of the conditional expression for Cost Per VM:
IF ([VM Service Level] = 'Bronze') THEN (10)
ELSE (IF ([VM Service Level] = 'Bronze_Platinum') THEN (15)
ELSE IF ([VM Service Level] = 'Silver') THEN (20)
ELSE IF ([VM Service Level] = 'Silver_Platinum') THEN (25)
ELSE IF ([VM Service Level] = 'Gold') THEN (40)
ELSE IF ([VM Service Level] = 'Gold_Platinum') THEN (55)
ELSE (30))
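This expression is just a lookup from Service Level to a monthly rate. A plain dictionary sketch (names mine, rates taken from the expression above) shows the same mapping, with 30 as the default for unmatched levels:

```python
# Sketch of the Cost Per VM lookup from the expression above.
COST_PER_SERVICE_LEVEL = {
    "Bronze": 10, "Bronze_Platinum": 15,
    "Silver": 20, "Silver_Platinum": 25,
    "Gold": 40, "Gold_Platinum": 55,
}

def cost_per_vm(service_level):
    # Unmatched levels (e.g. 'tbd') fall back to 30, like the ELSE clause.
    return COST_PER_SERVICE_LEVEL.get(service_level, 30)

print(cost_per_vm("Gold"))  # 40
print(cost_per_vm("tbd"))   # 30
```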
You will see a new column added called Cost Per VM, with the variable cost for each VM based on its Service Level.
Next, format the data in the Cost Per VM column as USD currency, as you did above.
ADDING FIXED OVERHEAD COSTS TO YOUR CHARGEBACK REPORT
Let's say the customer has determined that the total cost for overhead (including items like heat/AC, floor space, power, rent, operations personnel, helpdesk, etc.) is $24 per VM. Let's create a column called Cost of Overhead and apply this fixed cost. (Note: you can do this for any fixed cost, rather than using SQL, as well.)
Select the Toolbox tab at the lower right corner and double-click the Query Calculation icon, as above.
In the Create Calculation dialog box, name the column Cost of Overhead, select Other Expression, and click OK.
In the Data Item Expression dialog box, enter a cost of 24 in the Expression Definition box and select OK.
You will see a new column added called Cost of Overhead, with 24 for each VM. (Note: at this point there may be duplicates in the list, but not to worry; we are not finished formatting or grouping the report.)
Next, format the data in the Cost of Overhead column as USD currency, as you did above. Then drag the column header and drop it to the right of the Storage Cost column, as shown below.
Subtotaling, naming, and saving the report
Now that we have a cost per VM, overhead, and the storage usage cost by Tenant, Application, and VM, let's sum the total costs and finish formatting the report by Tenant and Application.
Hold the Ctrl key down and select a numeric cell in each of the Cost Per VM, Storage Cost, and Cost of Overhead columns. Right-click one of the numeric cells, select Calculate, and choose the add function for the three columns.
This creates a new column called "Cost Per VM + Storage Cost + Cost of Overhead" and calculates each row.
Now format the column for USD currency and retitle the column "Total Cost of Services."
Name the report "Total Storage, VM, and Overhead Cost by Tenant and Application Chargeback (Showback)" by double-clicking the title area.
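The row-level total is a simple per-row sum of the three cost columns. A quick sketch (sample values hypothetical):

```python
# Sketch of the Total Cost of Services column: the three per-row
# cost columns are simply added. Sample inputs are hypothetical.
def total_cost_of_services(cost_per_vm, storage_cost, cost_of_overhead):
    return cost_per_vm + storage_cost + cost_of_overhead

# e.g. a Gold VM (40) on 200 GB of $0.50/GB storage (100.0) plus $24 overhead
print(total_cost_of_services(40, 100.0, 24))  # 164.0
```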
Now save it to the Customer Report folder using the same name
FORMATTING AND GROUPING THE REPORT BY APPLICATION AND TENANT
We are not done yet. Now we need to format the report by grouping, subtotaling, and totaling by Tenant and Application.
Hold the Ctrl key down and select the Cost Per VM, Provisioned Capacity, Storage Cost, Cost of Overhead, and Total Cost of Services columns.
Select the Total icon from the Summary dropdown.
If you page down to the bottom of the report, you will see total columns. We'll clean up the summary rows in a minute.
Let's group the report by Tenant and Application.
Hold the Ctrl key down and select the Tenant and Application columns.
Select the Grouping icon from the top toolbar.
CLEANING UP THE REPORT AND RUNNING IT
To clean up the report, right-click and delete the summary ROWS (not columns).
Then go to the bottom of the report, hold the Ctrl key, select both summary rows, right-click, and delete them. (Leave the TOTAL rows.)
Save the report.
Now let's run the report to see how it looks.
Select the Run icon from the toolbar and run the report as HTML. (Note the other formats you can run it in if you want…)
The report will display in its final format. I've paged down in the report below to show you subtotals; you can page to the bottom and see the totals by company and the total of all resources charged.
These reports are extremely flexible. Notice the drill-down link in the Tenant column (pictured above in the red circle). If you click on the link, you will drill down from Tenant to Line of Business, then to Business Unit, etc. If you right-click on the link, you can drill up as well.
You can now schedule this report to run and be distributed in various formats, like any other OnCommand Insight Data Warehouse report.
Remember, now that you have created this report, every time you run it, it will provide the latest usage information. You can automate this report by scheduling it to run and email it to recipients, etc. Lots of flexibility…
7.2 OTHER OPTIONS FOR AD-HOC REPORTS USING QUERY STUDIO
You can also create simple ad-hoc reports by using Query Studio. A very simple example is shown here.
Log on to the Data Warehouse using Admin/admin123. (You must be logged on as Admin to use Query Studio.)
From Public Folders, select the Chargeback Datamart.
Select the Launch menu in the upper right corner of the view and select Query Studio.
The Datamart is split into a "simple Datamart" and an "advanced Datamart." The simple DM contains the elements that most users use for reports; the advanced DM contains all the facts and dimensions for all the elements. At this point we'll create this report using the simple DM to show you how easy it is.
Expand the simple DM and do the following:
Click and drag Business Unit to the palette.
Click and drag the Application element to the palette. You see the applications line up with their proper Business Units.
Click and drag Tier over to the palette to organize the storage usage by tier.
Click and drag "Provisioned raw by GB." (You can select megabytes or terabytes as well as gigabytes; I've selected GB because this is from a volume and application perspective.)
To calculate cost, we need to add the "Tier cost" to the report.
Click and drag the "Tier cost" element over and place it between the Provisioned Raw and Tier columns.
To filter out any storage without a tier cost associated, right-click the heading of the Tier Cost column and select Filter (see below for reference):
o Select "Show only the following" (default)
o Select "Missing values" to expand it
o Select "Leave out missing values"
o Select OK
See Results below
Now let's calculate the total cost of usage by GB per application.
Hold the Ctrl key and highlight the "Provisioned Capacity" and Tier Cost columns until they show yellow.
Select the green Calculation icon at the top of the edit icons above, or right-click on the columns and select "Calculate."
In the calculation window, select multiplication, title the new column "Cost for Storage," and click Insert. It creates a new column and completes the calculation.
To format the column, right-click on the new column and select Format Data.
Select currency, the number of decimal places (usually 0), and the 1000s separator, and click OK. See how the column is formatted now.
Double-click the "Title" on the report and re-title the report "Chargeback by Application and BU."
Now you don't really need the Tier Cost column, so you can delete it by right-clicking on the column and selecting Delete.
This is a good raw report, but now let's make it more useful.
To group storage cost by Business Unit and Application:
Select the Business Unit column (it turns yellow) and select the Group By icon on the top line.
You see the report reformats itself into cost by application by business unit.
Click the "Save As" icon and save the report to the public folders.
Further editing
You can go back and further edit the report like this:
Let's filter out all the N/A entries in the BU and Application columns. You have to do this one column at a time.
Right-click the BU column and select Filter.
In the filter dialog window, select "Do not show the following (NOT)" from the "Condition" dropdown.
Select N/A and click OK.
Do the same for the Application column.
Then save the report again.
As you see, you now have a better quality report.
To exit Query Studio, click the "Return" icon at the top right corner of the screen.
8 SCHEDULING REPORTS FOR DISTRIBUTION
OK, now that the report is saved, let's schedule it for running and distribution. You can schedule all the built-in reports in OnCommand the same way.
Go to the chargeback report we just created (you should be looking at where you saved it).
Select the Schedule icon on the right-hand side, where you can set the properties.
As you see on the right, you can schedule start and finish dates.
You can send this report just one time by clicking Disable.
Set the schedule options for weekly, daily, monthly, etc. Schedule this report to run and be sent to yourself at 3 p.m. every Tuesday until Feb 1, 2012. As you can see, you can schedule biweekly, several times a week, or several times a day, or you can also set it up by month, year, and even by trigger. As you see, lots of options.
There are a lot of options for report format. The default format is HTML, but we can override that default by clicking and choosing from PDF, Excel, XML, CSV, etc.
For delivery, we can email it, save it, or print the report to a specific printer. You can send the report via e-mail to users, distribution lists, etc. We can include a link to the report or attach it directly to the email as well. NOTE: recipients must be able to log into the OnCommand DWH to access the link.
When you are done, click OK and the schedule is set.
9 ENDING COMMENTS AND FEEDBACK
I hope this lab was of value to you. Your feedback is important to the quality of this lab document. Please provide feedback to Dave Collins at davec@netapp.com.
NetApp provides no representations or warranties regarding the accuracy, reliability, or serviceability of any information or recommendations provided in this publication, or with respect to any results that may be obtained by the use of the information or observance of any recommendations provided herein. The information in this document is distributed AS IS, and the use of this information or the implementation of any recommendations or techniques herein is a customer's responsibility and depends on the customer's ability to evaluate and integrate them into the customer's operational environment. This document and the information contained herein may be used solely in connection with the NetApp products discussed in this document.
© 2012 NetApp, Inc. All rights reserved. No portions of this document may be reproduced without prior written consent of NetApp, Inc. Specifications are subject to change without notice. NetApp, the NetApp logo, Go further, faster, xxx, and xxx are trademarks or registered trademarks of NetApp, Inc. in the United States and/or other countries. All other brands or products are trademarks or registered trademarks of their respective holders and should be treated as such. TR-XXX-XX
However, please note that each portion of this lab builds on past steps, so it is assumed that you will learn how to navigate as you go through the lab. In other words, we stop telling you how to get somewhere and instead tell you to go there, because you've stepped through it in previous steps.
This lab is primarily built to run against the current OnCommand Insight demo database (Demo_V63.gz, dated 3/21/2012) and DWH demo db (UPdated_dwh_Demo_630_with_performance.gz, dated 06/01/2012). Screenshots shown are accurate to those databases at the time of this publishing. Your results may vary depending on your database, but the functionality is the same regardless of the data. Total lab time is about 1-4 hours depending on experience. It's a large book, but lots of pictures. Your feedback is welcome; my address is at the end of this document. Have fun!
1 INTRODUCTION
Using 100% agentless technology, OnCommand Insight automatically discovers all of your resources and provides a complete end-to-end view of your entire service path. With OnCommand Insight you are able to see exactly which resources are being used and who is using them. You can establish policies based on your best practices, enabling OnCommand Insight to monitor and alert on violations that fall outside those policies.
OnCommand Insight is a "READ ONLY" tool that inventories and organizes your storage infrastructure, enabling you to easily see all the details related to each device and how it relates to the other devices in the SAN and the entire environment. OnCommand Insight does not actively manage any component.
It is also a powerful tool for modeling and validating planned changes to minimize impact and downtime for consolidations and migrations. OnCommand Insight can be used to identify candidates for virtualization and tiering.
OnCommand Insight correlates discovered resources to business applications, enabling you to optimize your resources and better align them with business requirements. You can use this valuable information to help you reclaim orphaned storage and re-tier resources to get the most out of your current investments.
OnCommand Insight provides trending, forecasting, and reporting for capacity management.
OnCommand Insight enables you to apply your business entities for reporting on usage by business unit, application, data center, and tenant. OnCommand Insight provides user accountability and cost awareness, enabling you to generate automated chargeback reporting by business unit and application.
You can download a full customer-facing demo video from:
OnCommand 6.2 Full Demo with Table of Contents for easy viewing. (For best viewing, download and unzip the zip file and play the HTML doc from your local drive.)
https://communities.netapp.com/docs/DOC-14031
BEGIN LAB
2 INVENTORY AND ASSURANCE NAVIGATION AND VIEWS
Let's take a look at the discovery and inventory of storage, switches, VM systems, etc.
Log on to OnCommand Insight by selecting the OnCommand Insight icon on the desktop.
Sign on using Admin/admin123.
From the main screen, select the dropdown icon that looks like a calendar at the top left corner of the screen and select Inventory > Hosts. You should see hosts from the demo db.
You can use this menu to select the Inventory, Assurance, Performance, or Planning categories. OR, to make it easier to navigate, you can activate the Navigation Pane as follows (I recommend you use the Navigation Pane, as it's easier when you start):
o Select Tools > OnCommand Insight Settings from the menu bar.
o Check the "Use Navigation Pane" box about halfway down the right panel of the settings box and select OK.
o Notice the full menu bar down the left side of the screen. This will make it easier for you to navigate.
2.1 INVENTORY
Expand the Inventory menu by clicking the Inventory button on the Navigation Pane, as shown below.
STORAGE ARRAYS
o Select Storage Arrays from that menu. This opens the MAIN VIEW of storage.
o Select the NetApp array called Chicago.
o As you see, Insight provides a full inventory, including family, model number, serial number, and other elements. Use the scroll bar to see more columns to the right.
o Note the icons across the bottom of the screen. These are Microviews. You can toggle them on and off to see more detail about what you have selected in the main view.
o Cycle through the micro views to get an idea of what is there.
o Select the Internal Volume and Volume microviews (as shown above).
o You can select the Column Management icon to add and subtract more columns from the view. Question: In which microview can you add the DeDupe savings column?
GROUPING
You can group the information in any of the table views, main and micro, by selecting the dropdown in the upper right corner of the table view and selecting a grouping.
o Group the storage devices by model number. This enables you to see all of your arrays grouped by model number.
o Re-group the main view to No Grouping.
Select the SEARCH icon at the top of the Main View. In the search window below the main table, type NTAP and see the array selected. You can step through, looking for the next or last occurrence of the word NTAP. Close the search window by clicking the X in the red box on the left.
SWITCHES
Using the same navigation, select Switches from the Inventory menu (no pictures here). View the switches and note the names, IP addresses, model numbers, and firmware in the main view. View the ports, zone members, and all other elements using the micro views. You can use the Inventory menu to provide views of hosts, storage, switches, VMs, etc. Let's look closer at paths.
PATHS
OnCommand Insight then correlates the paths. Paths are the logical and physical elements that make up the service path between the application, VM and/or host, and its storage. Paths include Fibre Channel, cloud, iSCSI, NFS, block, NAS, etc. Now let's set up the views shown here, with detailed steps below.
Select Paths from the Inventory menu.
Group by Host, then Storage. As you can see here, from the Fibre Channel perspective, when we select a path we are looking at the exact paths between the host and its storage through the redundant fibre connections.
Select the Topology micro view icon from the bottom of the Main view.
Use Filter to find host kc_ordev7 by mousing over the top of the Host column until you see a funnel, then type kc in the Host column's top cell.
Expand kc_ordev7 and the array ports in the main view.
Select the RED violation port of kc_ordev7.
Select the Zoning, Masking, and Port micro views.
In this particular case, you can see that you have one green path across the Fibre Channel, which is good, but the other path is blue. See the legend on the right of the topology. WHAT IS NOT CONFIGURED? (Hint: blue and yellow make green.)
ANALYZE PATH VIOLATION
OK, let's analyze the violation.
There are a couple of ways you can analyze violations. You can select the Violation micro view from the bottom icons, or you can simply right-click the red path for kc_ordev7 in the main view and select "Analyze Violation." This opens the root cause analysis screen below.
o Expand the violation and change information in the lower pane. What is the cause of this violation?
The violation tells you that the number of ports changed from 2 to 1, which went against the Redundancy Policy and caused the violation. If you look down at the last changes that occurred, you can see that masking was removed, which denied access from that host, through its HBA, to the FA port on the array, and all the way to the volume. To fix the violation, the administrator needs to reverse those changes, and the violation will automatically resolve itself.
Close the Analyze Violation screen
VIRTUAL MACHINES AND DATA STORE
OnCommand Insight gives you a complete inventory of the VM environment. This is discovered through a data source configured to talk with Virtual Center. Details include all internal configurations and elements of the VMs, ESX hosts, and the technologies, as well as all the information needed to correlate the path from a VM to its storage, plus details about how that storage is configured down to the disk, including performance (if you have the OnCommand Insight Perform license installed). We'll discuss performance later in this demo. Select Datastore from the Inventory menu.
Scroll down and select DS-30 in the Main view.
Display the Topology and Virtual Machine micro views to show which VMs are using datastore DS-30.
Toggle through the micro view icons below to show the details of VM and VMDK capacities, the storage, the backend storage in the arrays, and resources. You can see full end-to-end visibility from the host, through a virtualized storage environment, to the backend storage.
Note: You can also select Hosts and Virtual Machines from the Inventory menu and cycle through the microviews, noting the end-to-end visibility.
Don't forget to use the Microview icons.
3 ASSURANCE
3.1 APPLYING POLICIES TO MONITOR CONFIGURATION AND PERFORMANCE
Now that we've gathered the inventory and pulled all this information into the database, let's start to apply policies so we can monitor and get alerts on violations. We'll talk about setting global policies, changes, and the new Violations Browser. We'll show how we can analyze performance, and talk about some port balance violations and disk utilization violations. Initially we set global policies within OnCommand Insight so it can monitor the environment and alert us when something falls outside those policies. There are several policies available.
Select Policy from the top menu bar.
Select Global Policies.
What thresholds can you set from here?
Select Violation Severity from the left menu of the Global Policy window.
What severities can you set for each threshold?
Select Violation Notification. What are the possible violation notification options?
3.2 FIBRE CHANNEL POLICY SETTINGS
We set Fibre Channel policies to determine and keep track of path redundancy. We can set options such as no SPOF (single point of failure) or redundant. We can set the minimum number of ports on the host and the storage, and the maximum number of switch hops. We can set exceptions for different volume types that would not necessarily require redundancy, like BCVs, R1s, and R2s. We can also set policy exceptions for smaller volumes that wouldn't have redundancy, like EMC gatekeepers.
From the Policy menu on the menu bar, select Fibre Channel Policy.
What type of redundancy can you set from here?
What is the default number of ports? Volume Type Exceptions:
What volume exemptions can you select?
You can set redundancy policies on physical storage behind a virtualizer (Backend Path).
3.3 VIOLATIONS BROWSER
Let's take a look at the Violations Browser. The Violations Browser lets us see the impact of violations on your business elements in one place and helps us manage the violations against all of those global policies you saw above.
From the Assurance menu on the left, select Violations Browser. (Note: At this point you might want to increase your viewing real estate by closing the navigation pane on the left. To close the navigation pane, go to Tools > OnCommand Insight Settings and uncheck the Navigation Pane box. You can use the same process to turn it back on later.)
Back in the Violations Browser, expand the All Violations explorer to reveal the violation categories. This shows violations like datastore latency, disk utilization, volume and internal volume IOPS and response times, port balance violations, etc. These should look familiar from the global policies we reviewed a few minutes ago.
As you can see, you can look at all the violations piled up here; there are over 12,000 violations. (NOTE: Don't let this scare you. Usually most violations are caused by events that create multiple violations per event; you fix one event and a bunch of these go away.) We can see detail on each of these violations by performing the following:
Select the Show Violations Impact icon to view all the violations in context by business entity, application, host, virtual machine, datacenter, etc.
Expand the Impacted Business Entity explorer and drill down to Earth Thermal Tracking.
Expand and select Disk Utilization.
Sort the Description column descending. Now you can see Element, Description, Severity, and Violation Type.
Select the top violation element, called Disk DISK-14 of Storage Virtualizer.
In the Impact Details microview, toggle the Host, Virtual Machines, Applications and Business Entities, and Storage icons to view the details of the impact of the violations. Here we see the impact on one application called City Limits, owned by one business entity called Green Corp > Alternate Energy > Geothermal > Earth Thermal Tracking, on one host. However, 10 virtual machines are affected by this violation from one array.
The chart in the Violation Event microview shows the history and trending of the utilization on this one disk over time. From here we can analyze the performance details, as we'll see later in this demo. REVIEW:
What are the categories that show the impact of violations?
What business entity is impacted by these violations?
What is the utilization of this disk?
Which hosts are being affected by this violation?
Which VMs are being affected by the violation?
We'll do some troubleshooting using these violations and Analyze Performance later.
3.4 PORT BALANCE VIOLATIONS
Let's take a look at port balance violations. These are violations showing an imbalance in SAN traffic from hosts, arrays, and switches. They are not performance-related violations.
Using either the Navigation Pane or the dropdown menu, open Assurance.
Select Port Balance Violations.
Group by Type, then Device.
Expand Hosts and sort the Device column ascending.
Select the host nj_exch002. Note that this host has a balance index of 81; that reflects the difference in the distribution of traffic (the load) between the HBAs on this host. Any index over 50 indicates significantly unbalanced ports on a device.
Select the Switch Port Performance microview. Note that over 88% of the traffic is going across one HBA and only 11% of the traffic is going across the other. A failure on the heavily used HBA could choke that application. This could indicate that port balancing software is misconfigured on this server, not configured at all, or not installed.
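To make the distribution figures concrete, here is an illustrative sketch of a traffic-share check across a host's HBAs. Note: OnCommand Insight's actual balance-index formula is not documented in this lab, so the functions and the 80% flag threshold below are my own assumptions for illustration only:

```python
# Illustrative only: computes each port's share of total traffic and flags
# a device whose busiest port carries most of the load, as in the 88%/11%
# example above. This is NOT OnCommand Insight's balance-index formula.
def traffic_shares(port_traffic):
    total = sum(port_traffic)
    return [round(100.0 * t / total, 1) for t in port_traffic]

def looks_unbalanced(port_traffic, threshold_pct=80.0):
    return max(traffic_shares(port_traffic)) >= threshold_pct

shares = traffic_shares([880, 110])   # two HBAs, heavily skewed traffic
print(shares)                         # [88.9, 11.1]
print(looks_unbalanced([880, 110]))   # True
```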
Now collapse the Hosts and expand the Storage devices.
Select various storage devices and view the traffic distribution in the Switch Port Performance microview to understand the balance across the storage ports.
These port balance violations provide valuable data on how your environment is configured and optimized. They allow you to quickly determine where you need to optimize your configurations based on actual usage. These are balance violations within each device, not necessarily traffic-related performance violations. We'll look at performance a bit later.
3.5 DISK UTILIZATION VIOLATIONS
We looked at disk utilization violations in the Violations Browser a few minutes ago because we were alerted to a violation. But even if you didn't look at the error, you can go directly to Disk Utilization Violations here and troubleshoot your issues similarly to how we did in the Violations Browser. The difference is that the Violations Browser breaks down the violations by how they impact your business entities, applications, data centers, etc., so you can troubleshoot by business priorities, while Disk Utilization Violations lets you easily see your most critical utilization issues and troubleshoot from the disk utilization violation back to the hosts. You can also add columns to show applications and business entities if you want.
Let's take a look at how you can use Disk Utilization Violations to quickly identify and drill down to where your issue is.
Select Disk Utilization Violations from the Assurance menu.
Sort the Utilization column descending to bring your heaviest utilization to the top.
Here you see the utilization of each disk that exceeded the Disk Utilization threshold we
set earlier in our Global Policy In relation to each violation you see the disk array
hosts that access this disk the date and time the violation occurred the percentage of
utilization as well as IOPS and Throughput
Select the disk with the highest utilization.
Now select the Volume Usage of Disk microview to get details on volume usage and performance.
Sort the Disk IOPS column descending and select the top Volume Usage of Disk entry. Here I see the volume with the highest usage, along with the disk throughput and percentage info by volume and host.
(Figure callouts: Exceeded Threshold; Host with highest IOPS; Switch Traffic OK)
Select the Switch Port Performance microview. I see that my load appears to be balanced (Distribution column) across the storage ports, so this is most likely not a SAN or network configuration issue.
Since this disk did cause a utilization violation, I can identify the host candidates that may be causing the high utilization on the disk, OR I can see that the disk may have too many volumes carved from it and I may need to spread that load out across more disks.
4 PERFORMANCE
OnCommand Insight provides performance information end to end. This is different from the violations we discussed above in that it provides pure performance information for Volumes, Internal Volumes (FlexVols), Storage Pools (Aggregates), and Disks. Performance also covers VMs, Switches, ESX, Hyper-V, VMDKs, and Datastores. From here we can troubleshoot congestion, contention, and bottlenecks; identify heavily used storage pools, volumes, disks, and SAN ports; find possible candidates for physical-to-virtual host virtualization; and optimize your storage and tiering.
From the Navigation pane or the dropdown menu, expand the Performance menu.
Here we see that OnCommand Insight collects and shows you storage performance, switch performance, datastore performance, VM performance, and even application performance as it relates to storage performance.
4.1 STORAGE PERFORMANCE
From the Performance menu, select Storage Performance.
Sort the Top Volume IOPS column descending (if not already done).
Select the array called Sym-0000500743… in the main view (it should be near the top).
Use the scroll bars to see more performance info in all windows.
Use the horizontal slide bar in the main view to see the volume response times and IOPS, as well as the disk utilization and IOPS columns (far right). Notice there is no Internal Volume performance information, because the EMC Symmetrix does not contain any. We'll look at a NetApp array shortly to see Internal Volume (FlexVol) performance.
Now in the main view, notice the column called Volume Partial R/W. This indicates there are volumes on that array that are misaligned (we'll see more detail later).
Select the microviews at the bottom to show details of disk performance and volume performance. Which microviews did you open? (Hint: view below.) Notice this provides detailed throughput, IOPS, and response times at the volume and disk level.
Close the Disk Performance microview.
Select the column customize icon in the header of the Volume Performance microview.
Use the vertical scroll bar to view all the columns that can be added to or removed from this report.
Select the Partial R/W and Storage columns and click OK. This adds columns to the Volume Performance report showing you each volume on each array that is misaligned. (Note: you can get a complete list of all your misaligned volumes by selecting all the arrays in the main view above.) Additionally, you can group the volumes by storage to make it easier to view all the misaligned volumes across your entire enterprise by array. (See figure below.)
Now select the Symmetrix-FAST array and toggle the Chart microview on. Here you see OnCommand Insight showing EMC FAST auto-tiering. You can also see NetApp Hybrid Aggregates by selecting a NetApp array. We can also chart this performance over time.
(Figure callouts: Partial Read/Write indicates volume misalignment; Notice FAST-T volume performance)
OnCommand Insight provides complete end-to-end performance views through virtualized storage. Let's take a look.
Select the storage array called Virtualizer from the main view. (Note: this is a V-Series machine, but OCI provides the same visibility through other virtualizers as well.)
Toggle on the Virtual Machine Performance microview.
Toggle on the Backend Volume Performance and Datastore Performance microviews.
Use the slide bars at the bottom of the microviews to see more of the performance columns in each view. (Whoops… there is a red mark in the Latency column. We will analyze this later.)
I know this is a bit busy, but I wanted to demonstrate that you can have deep performance visibility from the VM, through the Datastore, to the frontend virtualizer array, and through to the backend volumes. You can also drill performance down to the disks on that backend array, and you can select the Switch Port Performance microview to visualize the performance on the SAN. So you can see very deep performance information from end to end. We will "analyze" this performance end to end a bit later.
4.2 SAN PERFORMANCE (SWITCH PORT PERFORMANCE)
Switch performance is the actual performance on the SAN at the switch.
Select Switch Port Performance from the Performance menu. OnCommand Insight knows whether the switches are connected to arrays or hosts, so it shows you the performance in the context of the host or array instead of from the switch perspective.
Using the dropdown at the top of the table, group the main view by "Connected Device Type then Name".
Using the dropdown next to it, set the timeframe to "Last Week" and hit the refresh icon to the right.
Sort the Distribution column descending (arrow pointing down).
Expand Hosts.
Expand hosts ny_ora1 and exchange_ny1.
If you look at the Value and Distribution columns, you can see how the HBAs are balanced on these hosts. On host ny_ora1 you see three HBAs that are balanced very well. But looking at host exchange_ny1, you see that one of your two HBAs carries over 95% of the traffic load while the other carries less than 5%. So you can see an imbalance of the load across your HBAs. Perhaps the multipath software is not configured correctly, doesn't work, or is installed but not turned on. However, also note that one HBA is 4Gb while the other is 2Gb; the admins may have purposely configured this host's traffic to compensate for the slower HBA…
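One way to sanity-check that last possibility: if multipathing were deliberately weighted by link speed, each HBA's expected share of traffic would be proportional to its bandwidth. A hypothetical sketch (HBA names and speeds invented to match the scenario):

```python
# Hypothetical sketch: expected traffic share per HBA if paths were weighted
# by link speed. Not an OnCommand Insight calculation.

def expected_shares(speeds_gbps):
    """Share of traffic each HBA would carry under speed-weighted pathing."""
    total = sum(speeds_gbps.values())
    return {hba: 100.0 * s / total for hba, s in speeds_gbps.items()}

shares = expected_shares({"hba0": 4, "hba1": 2})   # 4Gb and 2Gb HBAs
print(shares)
# A deliberate speed-weighted setup would put roughly 67% on the 4Gb HBA and
# 33% on the 2Gb HBA -- the observed 95% / 5% split is far from that, so
# misconfiguration remains the more likely explanation.
```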
Select the Port Performance Distribution and Port Performance microviews to view this analysis over time.
Select exchange_ny1 from the main view above. View the performance and distribution of both HBAs. If you select one or the other, the performance and distribution charts change to show you the details of what you've selected. This behavior is the same throughout the OnCommand Insight GUI.
4.3 CANDIDATES FOR HOST VIRTUALIZATION BASED ON ACTUAL PERFORMANCE
This performance view from the host SAN perspective shows you which are the busiest servers and which are candidates for virtualization.
Toggle off the two performance charts.
Collapse the expanded columns using the Collapse All Groups icon on top of the main view.
Expand Hosts again. Notice your busiest servers are at the top of the list.
Use the vertical slide bar to go to the bottom of the host list to see your least busy hosts. As you see here, there are many hosts near the bottom that have hardly any traffic. Note: if you have a virtualization project going on, you can very quickly isolate which physical hosts don't have much traffic to their applications and conduct your due diligence on those applications for possible relocation to the VM environment.
You can also use the same information to choose which ESX hosts are good candidates to receive those applications, based on how much traffic they are generating on the SAN.
4.4 STORAGE ARRAY PERFORMANCE BASED ON SAN TRAFFIC
We use the same logic and methods to optimize the traffic across the storage ports of the arrays.
Collapse the Hosts section and expand the Storage section.
You can see the busiest arrays at the top.
Expand storage array XP 1024 to see the traffic flow through the storage ports. In this case, over 80% of the traffic is going across two of the six ports on the storage array. Not very well balanced. You can rebalance this traffic, OR, using this information, you can select a lesser-used storage port to provision your NEXT Tier 1 application to. This helps you intelligently provision and optimize your environment using real traffic analysis.
4.5 STORAGE TIERING AND ANALYSIS
Similar to what you did with the hosts, you can see the storage arrays that are NOT so busy.
Scroll to the bottom of the Storage array list.
There are several expensive Tier 1 Symmetrix and other arrays at the bottom of this list that have very little traffic accessing them. These arrays may have lots of data on them, but nobody's using it. Armed with this information, you could take a look at the application data on these expensive Tier 1 arrays and move the applications to less expensive Tier 2 or Tier 3 arrays, OR archive the data. Then you can decommission or repurpose these expensive arrays. (LOTS of ROI potential here.)
4.6 SWITCH ISL TRAFFIC VISIBILITY AND OPTIMIZATION
OnCommand Insight shows you only the ISLs (Inter-Switch Links) under the Switches category.
Collapse the Storage category in the main view and expand the Switch category.
Expand Switch 78 and hcis300.
As we saw with hosts and arrays, we can see exactly how well balanced the traffic is across the ISLs. Switch hcis300 is well balanced, but on Switch 78 we see that 90% of the traffic is going across one switch link and only 9% across the other. If this is a trunk, it is severely out of balance.
We also see which are the busiest and least busy switches. This allows us to balance out (optimize) our environment as well as weed out the least busy switches.
4.7 VIRTUAL MACHINE AND DATA STORE PERFORMANCE: TROUBLESHOOTING END-TO-END PERFORMANCE ISSUES USING "ANALYZE PERFORMANCE"
Let's put all this performance information to good use.
USE CASE: I may have gotten a call from a user complaining that the application on VM-70 is running slow, or I may have received an alert from a threshold being breached. Let's troubleshoot the problem.
Select Virtual Machine Performance.
Select Custom from the "Timeframe" dropdown menu next to the grouping menu. Enter the dates January 1, 2012 through now.
Then hit the green refresh button next to the dropdown.
Sort the VM Disk Top Latency column descending to get the longest latency at the top.
Here we see that VM-70 does not in fact appear to have any performance issues, but we do see very high CPU, Memory, and Datastore Latency on VM-60 and VM-61.
Look at column 2. The common factor between VM-70 (the user complaint) and VM-60 is DS-30.
Open the Datastore Performance microview to validate the high latency time.
Right-click on VM-60 and select Analyze Performance.
This opens an analysis of everything associated with VM-60 and DS-30.
See the tabs across the top of the window. Each of these tabs provides in-depth visibility into performance within each category.
Selecting the Disk tab, I see that although there are a few high "top" utilization values, overall utilization and IOPS are relatively low, so I can rule out a hot disk issue.
Select the Volumes tab and the Internal Volumes tab. I see there are some relatively high top response times but still very low IOPS, which tells me other factors are affecting response time and the slowness of the application on VM-70.
Select Backend Volumes. We see the storage is virtualized, and we can see the performance of the backend volumes here. I see some possibly higher IOPS, but still no glaring issues in performance.
To make sure I don't have a SAN problem, I select the Switch Performance tab. It shows an imbalance between the 2Gb HBAs on ESX1 (where VM-60 and VM-70 live) and a potential optimization or outage issue, but no gridlock.
Select the Hosts tab. This tab shows me that host ESX1 is the same host that holds VM-60 and VM-70. VM-60 appears to be causing very high CPU and Memory usage, which is causing contention with time sharing during disk access, thus creating high Disk Latency. But the Disk IOPS are still very low.
We can deduce that VM-60 is probably not sized right for the application that is driving it so hard. This is probably what's causing the disk latency issue, so the chances of an actual disk issue are slim.
4.8 VM PERFORMANCE
VM Performance helps you troubleshoot the same scenarios. Here you can understand what's going on in the whole environment.
Select Virtual Machine Performance.
Sort the Top Disk Latency column in descending order so the largest latency rises to the top. In this case, VM-61 is chewing up a lot of memory and a lot of CPU time but using low Disk IOPS. The VM appears to be causing the latency issues.
Select VM-61. You can open a microview and see the VMDK performance as well.
Add the Chart microview.
You can also break this out by volume performance and datastore performance, giving you a more holistic picture of the environment and helping you troubleshoot to resolution.
The takeaway is that you can troubleshoot performance issues from many different angles and go in many different directions to quickly narrow down the problem.
4.9 APPLICATION AND HOST PERFORMANCE
You can add your applications and hosts to any of these performance views to help you understand how performance is affecting your applications. That is important to the business customer. You can drill down and understand where the performance issue is, with visibility from the application all the way to the disks.
Scroll down to ESX1.
Use the horizontal slide bars in the main view and microviews to see the performance info.
OnCommand Insight shows you performance from the host perspective all the way back to the storage, but remember that it does not have agents on the hosts, so it cannot show you the details of the performance on the host itself.
Review questions:
What is the value of Analyze Performance?
In which areas can we view performance metrics under Analyze Performance?
5 PLANNING TOOLS
5.1 TASK AND ACTION PLANNING AND VALIDATION
OnCommand Insight has two planning tools to help you plan, validate, and monitor changes in your environment. One is a change management tool, and the other is a migration tool for switches only.
The change management tool (or "what-if" tool) helps you create tasks, and actions within those tasks, using a wizard. It helps you logically configure the changes you need to make, test and validate those changes before you make them, and monitor the progress of the changes as you make them. This significantly reduces your risk when making changes, because you can pretest them before you make any actual changes in your environment.
NOTE: Remember, OnCommand Insight is a READ-ONLY tool, so it does not perform any active tasks. Use it in the planning, validating, and execution monitoring of your change management.
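The validation idea — compare each planned action to a read-only snapshot of the current configuration and mark it complete, pending, or incorrect — can be sketched as follows. This is an illustrative model, not the OCI implementation; the host name, field names, and HBA models are hypothetical:

```python
# Illustrative model of pre-validating planned actions against a read-only
# snapshot of the current environment. All names and values are invented.

current_config = {"clearcase1": {"hba": "emulex-2gb"}}   # hypothetical snapshot

planned_actions = [
    # Replace the HBA on clearcase1 with a faster model.
    {"host": "clearcase1", "field": "hba",
     "expected_before": "emulex-2gb", "target": "qlogic-4gb"},
]

def validate(actions, config):
    """Classify each action: complete (green check), pending (blank box),
    or incorrect (red X) relative to the current configuration."""
    results = []
    for a in actions:
        actual = config.get(a["host"], {}).get(a["field"])
        if actual == a["target"]:
            results.append("complete")    # change already made correctly
        elif actual == a["expected_before"]:
            results.append("pending")     # environment unchanged so far
        else:
            results.append("incorrect")   # unexpected state: change went wrong
    return results

print(validate(planned_actions, current_config))  # ['pending']
```

Re-running the same check after the physical change is the "validate as many times as you want" loop the tool supports.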
Select the Planning menu.
Select Plans to access the tool.
Select the task ID oadmin 01082007 – Replace HBA Clearcase1.
Notice the Actions list for the task. These are generated by you to help you logically and accurately list out the tasks.
To add more actions, simply right-click in the action area and select "Add Action".
In the New Action window, scroll down and select the action you want to perform. You can add a description and other parameters, then select OK.
Then you can pre-validate the actions to ensure you know the results of each action BEFORE you actually perform the task. To do this, right-click the task and select Validate Task.
As you see below, OnCommand Insight validates each action against the current configuration in your environment, showing what has been completed correctly (GREEN CHECKMARK), what is not completed (BLANK BOX), and what was not completed correctly (RED X).
When you build the action list, OnCommand Insight automatically compares your planned changes to your existing environment and anticipates any future violations that could occur if you made these changes without correcting the planned actions, OR violations that already exist in your environment.
Once you complete your list of action items, you can right-click and validate the actions as many times as you want until they are completed. OnCommand Insight validates every one of these actions. It shows you whether the actions are complete, done wrong, or not completed at all. This gives you a preview of potential issues before you make the changes, thus lowering your risk.
5.2 SWITCH MIGRATION TOOL
The migration tool provides you with instantaneous visibility into all of the environment and business entities that will be affected by a migration to new or updated switches. Say you want to just update the firmware on a switch. What if it goes down in the middle of the upgrade? What does it affect in your environment? Knowing this ahead of time can reduce your risk by giving you the complete picture of who and what will be affected by the interruption.
The migration tool lets you tell OnCommand Insight which switches you want to upgrade or replace. Because OnCommand Insight knows all the hosts, storage arrays, volumes, business units, and applications that are affected by this change, it can show you the current violations as well as the future violations that will occur when the switches are pulled out. This enables you to validate the total impact of the changes you want to make BEFORE you make them, so you can reduce your risk by fixing issues before they occur.
NOTE: Remember, OnCommand Insight is a READ-ONLY tool, so it does not perform any migration tasks. Use it in the planning and execution monitoring of your migration.
Under the Planning menu, select Migrations. This shows you the migration tasks already created and the existing impact of the proposed changes on your business entities.
To add a new task, right-click in the task area and select Add Task.
Complete the task details above and click Next to select the switch(es) to migrate.
Select the switches to be updated or replaced and click Finish.
Select the new task in the main screen and use the microviews to see the affected paths, impact, and quality assurance views.
Using this information, you can speed up switch migrations because it cuts the due diligence time, and you lower your risk because you know the impacts before you take any actions.
6 DATA WAREHOUSE
6.1 INTRODUCTION AND OVERVIEW
Let's introduce you to the data warehouse. We'll talk about the datamarts and navigation, then we'll go into the reports, and we'll finish by showing you how to create ad-hoc reports using Query Studio.
The data warehouse is made up of several datamarts. Datamarts are sets of data that relate to each other.
Open a browser and go to http://localhost:8080/reporting.
Log on using admin/admin123.
If you receive this page, uncheck "show this page…" and select My Home.
(Figure: Data warehouse (DWH) home page, showing Public Folders)
The data warehouse has several built-in datamarts throughout. Above you see the three primary datamarts: the Chargeback Datamart, the Inventory Datamart, and the Storage Efficiency Datamart. Additionally, we have two folders which contain other datamarts for Capacity and Performance.
Select the Capacity 6.3 folder.
As you can see, there are other capacity-related datamarts here, including the Internal Volume, Volume, Storage and Storage Pool, and VM Capacity datamarts. Datamarts provide you with easy-to-use data elements related to those specific categories, making it easier for you to use the existing reports and, more importantly, to create your own custom reports using the drag-and-drop technology we'll show later in this lab.
Select the Storage Capacity Datamart.
(Figure callouts: Datamarts and folders; Public Folders)
There are four folders located within EVERY datamart. Most built-in reports are in the Reports folder. Any custom reports you create MUST be saved in the Customer Reports or Customer Dashboards folders in order to preserve them during upgrades.
Select Dashboards (notice the BREADCRUMBS to help you navigate).
Which dashboards are located in the folder?
6.2 PLAN - CAPACITY FORECAST DASHBOARD
The data warehouse has over 200 built-in dashboards and reports. Let's take a look at a few.
The Capacity Forecast dashboard provides a history of how storage has been used, as well as trends and forecasts out into the future. It shows this by data center and by tier.
Select the Capacity Dashboard. This may take a bit of time to render, so be patient.
The Capacity Forecast dashboard provides you with trending and forecasting of your capacity across your entire environment. NOTE: your data may vary from the picture depending on the demo database you are using and the date (because it's a trending chart).
While we are at it, let's also stage the Tiering Dashboard in a new window by holding the Shift key and selecting the Tiering Dashboard, so we can discuss it as well in a few minutes.
(Figure callouts: Folders; Dashboards; Reports; Navigation breadcrumbs)
When it first opens, you see in the upper left the Capacity Consumption Forecast report by datacenter and tier. The initial view shows how much storage is left in each datacenter, by tier, before it reaches 80% (adjustable by the user) of capacity. The graph on the right depicts the usage trending and forecasting over time. The "Reset Selection" button resets the graphic to show storage trending across the entire enterprise.
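The forecasting idea behind this view can be illustrated with a minimal sketch: fit a linear trend to recent monthly usage and estimate how long until it crosses the threshold. All capacities and the monthly history below are invented, and real forecasting would use a proper regression over more data:

```python
# A minimal sketch of capacity forecasting: linear growth from recent history,
# projected forward to an 80% threshold. All figures are hypothetical.

capacity_tb = 100.0                       # raw capacity of one tier (invented)
used_tb = [40.0, 44.0, 48.0, 52.0]        # last four months of usage (invented)

def months_until_threshold(history, capacity, threshold_pct=80.0):
    """Months until usage reaches threshold_pct of capacity, assuming the
    average monthly growth seen in the history continues."""
    growth = (history[-1] - history[0]) / (len(history) - 1)   # TB per month
    headroom = capacity * threshold_pct / 100.0 - history[-1]
    return float("inf") if growth <= 0 else headroom / growth

print(months_until_threshold(used_tb, capacity_tb))  # 7.0 months to 80%
```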
Select the Tokyo/Gold-Fast block on the matrix. Notice the graph at the right changes to reflect the storage consumption trending and forecasting for that tier at that datacenter.
Reset the Storage Capacity Trend chart by clicking Reset Selection next to the matrix chart. The chart on the right will again show the trending and forecasting for the entire enterprise.
Scroll down the dashboard to view the list of reports on the right side. Each of the dashboards has a list of related reports on the lower right-hand side. You can select from any number of different reports to get the detailed information that you need.
The dashboard also contains some dial graphics showing you storage consumption and capacity in your enterprise and in each datacenter.
Continuing down the left side of the dashboard, these charts show you business-level storage consumption by business entity. Here we can drill down to see usage by Tenant, Line of Business, Business Unit, and Project.
Right-click in this graphic and you can drill down to view storage usage by Line of Business, drill again to Business Unit, and again to Project.
As you can see, you get really detailed information on consumption by your business entities — from Tenant to LOB, Business Unit, Project, and Application — very quickly.
6.3 TIER DASHBOARD
Let's take a look at the Tier Dashboard that we opened a few minutes ago by selecting it from the tabs at the bottom of your Windows screen.
Note: your data may vary depending on the database used for this demo.
This dashboard gives us a different perspective on how storage is growing and how it is being used. As you see, the Gold tier has remained relatively stable over the past few months, while Gold-Fast storage, which is more expensive, has grown considerably over the past couple of months. This tells you how your tiering initiatives are progressing. Bronze, which has hardly grown at all, could be an indication that we're spending too much money on storage. You might want to review your storage usage using OnCommand Insight to see how the storage is being consumed, and by whom.
Scroll down. Let's look a little closer. OnCommand Insight shows storage usage by business unit, application, and tier. This enables you to understand how storage is being used. You can also view it by datacenter, tier, and business entity.
As we did in the last report, you can right-click and drill down to look at consumption by Tenant, Line of Business, Business Unit, Project, and Application. You can understand how your data is being consumed at multiple levels and from multiple aspects.
Select the "Return" icon at the top right of the Tier Dashboard to return to the folder.
There is a new Storage Tier report located in the Storage and Storage Pool datamart. Let's take a quick look at it.
Use the breadcrumbs to navigate back to the Capacity 6.3 folder.
Then select the Storage and Storage Pool Capacity datamart and the Reports folder.
Next, select the Storage Capacity By Tier report to view the report below. This report shows your capacity by tier and how it trends over time. It also provides great detail, with a summary at the bottom showing each array, its tiers, how much capacity is used, and the percentage (lots of information in a single report).
6.4 ACCOUNTABILITY AND COST AWARENESS
The standard data warehouse chargeback reports are more about accountability than about chargeback. We'll show you this now. We'll also show you how to create your own powerful "custom" chargeback/showback reports using Business Insight Advanced later in this lab.
Select Public Folders in the breadcrumbs at the top left of the data warehouse window.
Select the Chargeback Datamart.
In the Chargeback Datamart, select the Reports folder to access various reports that show capacity and accountability.
Select Capacity Accountability by Business Entity and Service Level Detail. Here you have the option to customize this report to your needs by selecting service levels, resource types, applications, and host and storage names. You also have the option of selecting the business entity, using the dropdown to select any or all of the business entities and projects.
Select all in each category to give you a good representation of the in-depth reporting. Then click Finish.
The report provides a very detailed view of capacity utilization by business entity and application, including the host it's running on, the storage array, the volume, and the actual provisioned and used storage. The report is grouped by business unit as well as application, which gives you a good representation of who's using what storage.
Note the scroll bar for scrolling on page 1; you can also use the Page Up/Page Down links at the bottom to go to page 2, etc.
Select the Return icon in the upper right to return to the folder of reports.
6.5 UNCHARGED STORAGE
You can also generate reports that help you understand what storage is NOT being accounted for.
Select "Capacity Accountability by Uncharged Capacity per Internal Volume". This provides you with a complete listing, by array and volume, of how much storage is not being charged or accounted for.
You get FULL accountability of which storage is being accounted for and which storage is NOT, across the entire enterprise, regardless of storage vendor.
6.6 IOPS VS CAPACITY REPORTING IN THE DATA WAREHOUSE
Let's look at performance versus capacity, and orphaned storage by last access. This adds another dimension to how your storage is being used.
Open the Performance Datamart (hint: use the breadcrumbs to select Public Folders, then select the Performance Datamart).
Select the Internal Volume Daily Performance folder. This provides a really good pictorial view of how your storage is being used.
Select Reports and then select Allocated Used Internal Volume Count by IOPS Ranges. This provides a capacity-versus-IOPS report, which is very interesting.
Select the Last Year time period.
Select all storage models and tiers and click Finish.
Selecting all arrays and all tiers gives you a full view of how your storage is being used (or not being used…).
Looking at the results, remember this is storage accessed over the past year. The resulting report shows you all the storage that has (or has not) been accessed over the past year.
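The "zero IOPS" bucket that makes this report so compelling is a straightforward aggregation, sketched below with invented volume data (this is illustrative, not the warehouse's query):

```python
# Illustrative sketch of the zero-access bucket of an IOPS-range report:
# count the volumes with no I/O in the period and total their capacity.
# All volume names, sizes, and IOPS figures are invented.

volumes = [
    {"name": "vol_a", "size_tb": 0.5, "iops_total": 0},
    {"name": "vol_b", "size_tb": 1.2, "iops_total": 0},
    {"name": "vol_c", "size_tb": 2.0, "iops_total": 15_000},
]

def zero_access(volumes):
    """Volumes with no I/O in the reporting period: reclaim candidates."""
    idle = [v for v in volumes if v["iops_total"] == 0]
    return len(idle), sum(v["size_tb"] for v in idle)

count, idle_tb = zero_access(volumes)
print(count, idle_tb)   # 2 idle volumes, ~1.7 TB of reclaim candidates
```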
As you see from the first bar, there are over 7300 volumes that have not been accessed in the past year. If we look at it in terms of size, over 3.4 PB has had zero access in the past year. Note: this is actual customer data, but the names have been sanitized.
You can see how impactful this is. There is over 3.4 PB of storage that has had zero use for a year. This information enables you to start making business decisions on the storage and to better understand how it's being used, so you can reclaim and repurpose some of that storage. (Talk about ROI!)
6.7 DIGGING INTO THE DETAILS
These charts are really nice, but you need the details to effectively work on identification and recovery. OK, let's go look at the underlying details.
Go back to the Volume Daily Performance 6.3 folder and drill down to Reports (hint: it's in the Performance Datamart).
Select the Array Performance report. This gives you a complete breakdown of the performance for all storage, from the arrays all the way down to the volumes.
Select one year and set the IOPS parameter you want to filter on (I usually start at the default).
This report starts with the Orphan Summary.
Select Page Down to view the storage array summary.
(Figure callouts: Over 7300 volumes; over 3.4 PB; storage access for a year)
As you see, this is pretty high level. It shows the total amount of raw and allocated capacity in each storage device versus the total IOPS and the max IOPS actually used over the past year. This tells a very compelling story, but it's still high level.
Page down a few pages to reach the bottom of this section. You see a glossary of terms explaining the column headings.
Now continue to page down to the Host tables. These show you the hostname, the raw and allocated capacity by host, and the IOPS accessed over the past year. This is more detail than the Storage tables above.
Page down past the host tables and you reach the orphaned volumes perspective. Here is a great deal of detail that you can use. These are all the volumes that have not been accessed in a full year. It shows you the array name, volume capacities, and hostname, as well as the applications and tiers that have not been accessed in the last year.
Page down to the "Volume by IOPS" tables (which may be several pages down). These show you the storage array, volume, capacity, host, application, tier, and the max and total IOPS. So we can say it's a pretty well-rounded report that shows you actual usage (or lack thereof) so you can go reclaim the storage that is not used.
6.8 VM CAPACITY REPORTING
There are several different reports in the VM Capacity datamart.
Navigate to the VM Capacity 6.3 datamart.
As you see, we have several reports built in here already.
Select VM Capacity 6.3 and then navigate into the Reports folder.
Select VM Capacity Summary.
Select all, so we see the VM capacity across the entire enterprise (spanning multiple vCenters).
The results show all the VMs, their capacity, the datastore, the actual capacity, the VM names, the provisioned storage, and the commit ratio of each VM across your entire environment. NOTE: I paged down to the bottom so you can see the total storage and commitment across your whole enterprise, plus a glossary of terms.
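The commit ratio shown here is simply provisioned capacity over actual datastore capacity. A hypothetical sketch with invented numbers:

```python
# Illustrative: the commit (overcommit) ratio per datastore is provisioned
# capacity divided by actual datastore capacity. Numbers are invented.

datastore_capacity_gb = 1000.0
vm_provisioned_gb = [400.0, 350.0, 500.0]   # thin-provisioned VMDKs

commit_ratio = sum(vm_provisioned_gb) / datastore_capacity_gb
print(f"{commit_ratio:.2f}:1")   # 1.25:1 -> the datastore is overcommitted
```

A ratio above 1:1 means thin provisioning has promised more space than the datastore physically has, which is exactly the exposure this report surfaces.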
Select the "return" button in the upper right corner of the report (it looks like a left-turn arrow).
Next, select the Inactive VMs report to show VMs that have not been accessed in a defined period of time (default 60 days).
Set this time threshold and click Finish.
This is an excellent report showing you which VMs are powered off, how long they have been powered off, and how much capacity each one is holding that nobody else can use. It gives you all the details, including the datacenter, VM, OS, ESX host, cluster, VMDK, and how long it's been powered off. Armed with this information, you can go recover these VMs or reclaim their storage.
7 CREATE AD-HOC REPORT
Let's show you how easy it is to create custom reports in the data warehouse.
7.1 HOW TO CREATE A CUSTOM SHOWBACK/CHARGEBACK REPORT USING BUSINESS INSIGHT ADVANCED
Below is a great example of the custom Chargeback or Showback report that you will create. It shows usage by Business Entity and Application, including variable cost of VMs based on configuration, fixed overhead, and storage usage.
STEPS TO CREATE THIS REPORT
Watch a video on how to create this report. Note: You need a user name and password for this community. To obtain them, click the Become a Member link.
The OnCommand Insight Reporting Portal is accessed at http://<reporting-server>:8080/reporting
Enter your user name and password credentials.
From the Welcome page, select My home.
From the Launch menu (at the top right corner of the OnCommand Insight Reporting portal), select Business Insight Advanced.
From the list of packages that appears, click the Capacity <version> folder and then click VM Capacity <version>.
Create a new report by selecting New from the dropdown in the upper left corner, or Create New if you are on the Business Insight Advanced landing page.
From the pre-defined report layouts in the New pop-up, choose List and click OK.
In the lower right pane, select the Source tab and expand Advanced Data Mart from the VM Capacity package.
From the Advanced Data Mart, expand Business Entity Hierarchy and Business Entity, then drag Tenant and place it on the report work area.
Collapse Advanced Data Mart and expand Simple Data Mart.
From Simple Data Mart, drag Application and place it on the report work area to the right of the Tenant column. (TIP: Make sure you place it on the blinking gray bar to the right of the previous column or it will give you an error.)
Now we are going to drag multiple columns to the palette at once to save time building the report.
We will be reporting on the total number of processors (cores) and the memory that is configured for each VM. So let's grab the following elements from the VM Dimension under the Advanced Data Mart.
From Advanced Data Mart, expand VM Dimension.
Select the next columns IN THE FOLLOWING ORDER:
From Advanced Data Mart > VM Dimension, hold the Control key and select the following columns (in order):
o VM Name
o Processors
o Memory
Click and drag VM Name and place it on the report work area to the right of the Application column. NOTE: All the columns should follow in the order you selected them, similar to the screenshot below (your data will differ, but the columns will be the same).
Now let's bring capacity information onto the report.
From Simple Data Mart, hold the Control key and select the following columns (in order):
o Tier
o Tier Cost
o Provisioned Capacity (GB)
Click and drag the Tier column and place it on the report work area to the right of the Application column. NOTE: All the columns should follow in the order you selected them, similar to the screenshot below (your data will differ, but the columns will be the same).
To create a summary of cost per GB, hold the Control key and select Tier Cost and Provisioned Capacity (GB).
Then right-click the Provisioned Capacity column, select Calculate, and select the multiplication calculation.
Business Insight Advanced has created a new column for you, completed the calculations, and put it in the report.
Next, let's format and re-title the column name.
Right-click on the new column header and select Show Properties.
In the lower right corner, scroll down to the bottom of the properties box and select the ellipsis on the Data Item Name box. Change the name to Storage Cost and click OK.
Note the column heading is now Storage Cost.
Now select one of the numeric values in that column and select the Data Format ellipsis from the properties box in the lower right corner.
From the Data Format dialog box, select Currency from the Format type dropdown.
As you see from the Properties dialog box, there are lots of options you can set to format the currency numbers in this column. The default is USD, so let's just click OK to accept the default. You will see the column reformat to USD.
Here is our current report. Let's filter out storage that is NOT being charged.
Select any BLANK cell in the Tier Cost column and click the filter icon in the top toolbar.
Select Exclude Null.
Here is our current report. Notice all the rows that had NO cost associated with those tiers are removed, leaving you with only the storage that has charges associated with it. (TIP: In another report you can reverse the logic and show only the storage that is NOT being charged as well...)
You can also format the Tier Cost column as USD currency if you want.
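Taken together, the multiplication and the Exclude Null filter compute tier cost per GB times provisioned GB and drop the uncharged rows. Here is a minimal plain-Python sketch of that arithmetic (the sample rows and values are hypothetical, not from the demo database):

```python
# Hypothetical report rows: (tier, tier_cost_per_gb, provisioned_gb)
rows = [
    ("Gold", 5.00, 200.0),
    ("Silver", 2.50, 500.0),
    ("Uncharged", None, 100.0),  # no tier cost assigned; the Exclude Null filter drops it
]

# Storage Cost = Tier Cost x Provisioned Capacity (GB), skipping rows with no tier cost
charged = [(tier, cost * gb) for tier, cost, gb in rows if cost is not None]
print(charged)  # [('Gold', 1000.0), ('Silver', 1250.0)]
```

The report tool does the same thing row by row; the sketch just makes the column math explicit.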
OK, that was easy, but not complete. Let's add other cost factors into your chargeback report: the cost of VM service levels by configuration, and fixed overhead costs used by each application.
ADDING VARIABLE COSTS PER VM TO YOUR CHARGEBACK REPORT
Let's say the customer wants to charge per VM based on the number of CPUs and the memory it's configured with. To do that, we first need to create a VM Service Level comprised of the number of CPUs and the memory configured for each VM, then allocate a cost per service level.
To create a VM Service Level, we are going to drop in a small conditional expression to build the service levels per VM. This is an easy example of the flexibility of Business Insight Advanced in creating reports. (DON'T panic: you can skip the conditional expression and just put a fixed cost on each VM if you want. See the Overhead example later on... but humor me here in this lab.)
Select the Tier column to mark where we want to insert the new columns.
Select the Toolbox tab at the lower right corner and double-click the Query Calculation icon.
In the Create Calculation dialog box, name the column VM Service Level, select Other Expression, and click OK.
In the Data Item Expression dialog box, copy and paste the following VM Service Level conditional expression into the Expression Definition box and select OK. (Note: If you are remoted into the OnCommand Insight server, you may have to create a text document on the OnCommand server desktop to paste this into before pasting it into the Expression box.)
Below is an example of the conditional expression that gives you the if-else condition for VM Service Level:
IF ([Processors] = 2 AND [Memory] < 2049)
THEN ('Bronze')
ELSE (IF ([Processors] = 2 AND [Memory] < 4097)
THEN ('Bronze_Platinum')
ELSE IF ([Processors] = 4 AND [Memory] < 8193)
THEN ('Silver')
ELSE IF ([Processors] = 4 AND [Memory] > 8193)
THEN ('Silver_Platinum')
ELSE IF ([Processors] = 6 AND [Memory] > 8191)
THEN ('Gold')
ELSE IF ([Processors] = 8 AND [Memory] > 16383)
THEN ('Gold_Platinum')
ELSE ('tbd'))
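If it helps to read the cascade another way, here is the same classification logic as a small Python function. This is an illustrative sketch only, not part of the lab steps; it assumes [Memory] is reported in MB, which thresholds like 2049 and 8193 suggest:

```python
def vm_service_level(processors: int, memory_mb: int) -> str:
    """Mirror of the VM Service Level conditional expression above."""
    if processors == 2 and memory_mb < 2049:
        return "Bronze"
    elif processors == 2 and memory_mb < 4097:
        return "Bronze_Platinum"
    elif processors == 4 and memory_mb < 8193:
        return "Silver"
    elif processors == 4 and memory_mb > 8193:
        return "Silver_Platinum"
    elif processors == 6 and memory_mb > 8191:
        return "Gold"
    elif processors == 8 and memory_mb > 16383:
        return "Gold_Platinum"
    return "tbd"  # anything that matches no rule

print(vm_service_level(2, 1024))   # Bronze
print(vm_service_level(4, 16384))  # Silver_Platinum
```

Note that a 4-processor VM with exactly 8193 MB matches neither the Silver nor the Silver_Platinum rule and falls through to 'tbd'; the expression keeps the original lab's boundary values as given.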
Business Insight Advanced will validate the conditional expression (nice to know if you got it right), create the column called VM Service Level, and populate it based on the query. (If you get an error, your conditional expression probably has a syntax or other error.)
You will see a new column added, called VM Service Level, with the various service levels for each VM based on the number of CPUs and the memory each has. (At this point there may be duplicates in the list, but not to worry; we are not finished formatting or grouping the report.)
Next, let's add a column that calculates the cost per VM based on the service levels we just established.
Select the Toolbox tab at the lower right corner and double-click the Query Calculation icon.
In the Create Calculation dialog box, name the column Cost Per VM, select Other Expression, and click OK.
In the Data Item Expression dialog box, paste the conditional expression for Cost of VM (below) into the Expression Definition box and select OK.
Example of the conditional expression for Cost per VM:
IF ([VM Service Level] = 'Bronze') THEN (10) ELSE (IF ([VM Service Level] = 'Bronze_Platinum') THEN (15) ELSE IF ([VM Service Level] = 'Silver') THEN (20) ELSE IF ([VM Service Level] = 'Silver_Platinum') THEN (25) ELSE IF ([VM Service Level] = 'Gold') THEN (40) ELSE IF ([VM Service Level] = 'Gold_Platinum') THEN (55) ELSE (30))
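This cascaded IF/ELSE is effectively a lookup table with a default value of 30. As an illustrative Python sketch (the values are copied from the expression above):

```python
# Cost per VM keyed by VM Service Level; 30 is the catch-all ELSE value
COST_PER_VM = {
    "Bronze": 10, "Bronze_Platinum": 15,
    "Silver": 20, "Silver_Platinum": 25,
    "Gold": 40, "Gold_Platinum": 55,
}

def cost_per_vm(service_level: str) -> int:
    # Any level not in the table (including 'tbd') gets the default charge
    return COST_PER_VM.get(service_level, 30)

print(cost_per_vm("Gold_Platinum"))  # 55
print(cost_per_vm("tbd"))            # 30
```

Reading it as a table makes it easy to check that every service level from the previous expression has a charge.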
You will see a new column added, called Cost Per VM, with variable costs for each VM based on the service level.
Next, format the data in the Cost Per VM column as USD currency as you did above.
ADDING FIXED OVERHEAD COSTS TO YOUR CHARGEBACK REPORT
Let's say the customer has determined that the total cost for overhead (including items like heat/AC, floor space, power, rent, operations personnel, helpdesk, etc.) is $24 per VM. Let's create a column called Cost of Overhead and apply this fixed cost. (Note: You can do this for any fixed costs, rather than using SQL, as well.)
Select the Toolbox tab at the lower right corner and double-click the Query Calculation icon as above.
In the Create Calculation dialog box, name the column Cost of Overhead, select Other Expression, and click OK.
In the Data Item Expression dialog box, enter a cost of 24 in the Expression Definition box and select OK.
You will see a new column added, called Cost of Overhead, with 24 for each VM. (Note: At this point there may be duplicates in the list, but not to worry; we are not finished formatting or grouping the report.)
Next, format the data in the Cost of Overhead column as USD currency as you did above. Then drag the column header and drop it to the right of the Storage Cost column as shown below.
SUBTOTALING, NAMING, AND SAVING THE REPORT
Now that we have a cost per VM, overhead, and the cost of storage usage by Tenant, Application, and VM, let's sum the total costs and finish formatting the report by Tenant and Application.
Hold the Control key down and select a numeric cell in each of the Cost Per VM, Storage Cost, and Cost of Overhead columns. Right-click one of the numeric cells, select Calculate, and choose the add function for the three columns.
This will create a new column called "Cost Per VM + Storage Cost + Cost of Overhead" and calculate each row.
Now format the column as USD currency and retitle the column "Total Cost of Services."
Name the report "Total Storage, VM, and Overhead Cost by Tenant and Application Chargeback (Showback)" by double-clicking the title area.
Now save it to the Customer Report folder using the same name.
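To sanity-check the arithmetic the report is now doing, the Total Cost of Services column is just the row-wise sum of the three cost columns. A sketch with hypothetical values for one report row:

```python
# Hypothetical values for a single report row (not from the demo database)
cost_per_vm = 20.0       # variable cost from the VM Service Level lookup (Silver)
storage_cost = 1250.0    # Tier Cost x Provisioned Capacity (GB)
cost_of_overhead = 24.0  # fixed $24 overhead applied to every VM

# Total Cost of Services = Cost Per VM + Storage Cost + Cost of Overhead
total_cost_of_services = cost_per_vm + storage_cost + cost_of_overhead
print(f"${total_cost_of_services:,.2f}")  # $1,294.00
```

The grouping and subtotaling steps that follow then roll this per-row total up by Tenant and Application.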
FORMATTING AND GROUPING THE REPORT BY APPLICATION AND TENANT
We are not done yet. Now we need to format the report by grouping, subtotaling, and totaling by Tenant and Application.
Hold the Control key down and select the Cost Per VM, Provisioned Capacity, Storage Cost, Cost of Overhead, and Total Cost of Services columns.
Select the Total icon from the Summary dropdown.
If you page down to the bottom of the report, you will see total columns. We'll clean up the summary rows in a minute.
Let's group the report by Tenant and Application:
Hold the Control key down and select the Tenant and Application columns.
Select the Grouping icon from the top toolbar.
CLEANING UP THE REPORT AND RUNNING IT
To clean up the report, right-click and delete the summary ROWS (not columns).
Then go to the bottom of the report, hold the Control key and select both summary rows, then right-click and delete them. (Leave the TOTAL rows.)
Save the report.
Now let's run the report to see how it looks.
Select the Run icon from the toolbar and run the report as HTML. (Note the other formats you can run it in if you want...)
The report will appear as shown below in its final format. I've paged down in the report to show you the subtotals; you can page to the bottom and see the totals by company and the total of all resources charged.
These reports are extremely flexible. Notice the drill-down link in the Tenant column (pictured above in the red circle). If you click the link, you will drill down from Tenant to Line of Business, then to Business Unit, etc. If you right-click the link, you can drill up as well.
You can now schedule this report to run and distribute in various formats, like any other OnCommand Insight Data Warehouse report.
Remember, now that you have created this report, every time you run it, it will provide the latest usage information. You can automate this report by scheduling it to run and email it to recipients. Lots of flexibility...
7.2 OTHER OPTIONS FOR AD-HOC REPORTS USING QUERY STUDIO
You can also create simple ad-hoc reports by using Query Studio. A very simple example is shown here.
Log onto the data warehouse using Admin/admin123. (You must be logged on as Admin to use Query Studio.)
From Public Folders, select the Chargeback Datamart.
Select the Launch menu in the upper right corner of the view and select Query Studio.
The Datamart is split into a "simple Datamart" and an "advanced Datamart." The Simple DM contains the elements that most users use for reports. The Advanced DM contains all the facts and dimensions for all the elements. At this point we'll create this report using the Simple DM to show you how easy it is.
Expand the Simple DM and do the following:
Click and drag Business Unit to the palette.
Click and drag the Application element to the palette. You see the applications line up with their proper business units.
Click and drag Tier over to the palette to organize the storage usage by tier.
Click and drag "Provisioned Raw by GB." (You can select megabytes or terabytes as well as gigabytes. I've selected GB because this is from a volume and application perspective.)
To calculate cost, we need to add the "Tier Cost" to the report.
Click and drag the "Tier Cost" element over and place it between the Provisioned Raw and Tier columns.
To filter out any storage without a tier cost associated, right-click the heading of the Tier Cost column and select Filter (see below for reference):
o Select "Show only the following" (default)
o Select "Missing values" to expand it
o Select "Leave out missing values"
o Select OK
See the results below.
Now let's calculate the total cost of usage by GB per application.
Hold the Control key and highlight the "Provisioned Capacity" and Tier Cost columns until they show yellow.
Select the green Calculation icon at the top of the edit icons, or right-click the columns and select "Calculate."
In the calculation window, select multiplication, title the new column "Cost for Storage," and click Insert. It creates a new column and completes the calculation.
To format the column, right-click on the new column and select Format Data.
Select Currency, the number of decimal places (usually 0), and the 1000s separator, and click OK. See how the column is formatted now.
Double-click the title on the report and re-title the report "Chargeback by Application and BU."
Now you don't really need the Tier Cost column, so you can delete it by right-clicking on the column and selecting Delete.
This is a good raw report, but now let's make it more useful.
To group storage cost by Business Unit and Application:
Select the Business Unit column (it turns yellow) and select the Group By icon on the top line.
You see the report reformats itself into cost by application by business unit.
Click the "Save As" icon and save the report to the Public Folders.
FURTHER EDITING
You can go back and further edit the report like this:
Let's filter out all the N/A values in the BU and Application columns. You have to do this one column at a time.
Right-click the BU column and select Filter.
In the filter dialog window, select "Do not show the following (NOT)" from the "Condition" dropdown.
Select N/A and click OK.
Do the same for the Application column.
Then save the report again.
As you see, you now have a better quality report.
To exit Query Studio, click the "Return" icon at the top right corner of the screen.
8 SCHEDULING REPORTS FOR DISTRIBUTION
OK, now that the report is saved, let's schedule it for running and distribution. You can schedule all the built-in reports in OnCommand Insight the same way.
Navigate to the chargeback report we just created (you should be looking at where you saved it...).
Select the Schedule icon on the right-hand side, where you can set the properties.
As you see on the right, you can schedule the start and finish dates.
You can send this report just one time by clicking Disable.
Set the schedule options: weekly, daily, monthly, etc. Schedule this report to run and send it to yourself at 3 p.m. every Tuesday until Feb 1, 2012. As you can see, you can schedule biweekly, several times a week, or several times a day, or you can set it up by month, year, and even by trigger. As you see, lots of options.
There are a lot of options for report format. The default format is HTML, but you can override that default by clicking and choosing from PDF, Excel, XML, CSV, etc.
For delivery, you can email the report, save it, or print it to a specific printer. You can send the report via e-mail to users, distribution lists, etc. You can include a link to the report or attach it directly to the email as well. NOTE: Recipients must be able to log into the OnCommand DWH to access the link.
When you are done, click OK and the schedule is set.
9 ENDING COMMENTS AND FEEDBACK
I hope this lab was of value to you. Your feedback is important to the quality of this lab document. Please provide feedback to Dave Collins at davec@netapp.com.
NetApp provides no representations or warranties regarding the accuracy, reliability, or serviceability of any information or recommendations provided in this publication, or with respect to any results that may be obtained by the use of the information or observance of any recommendations provided herein. The information in this document is distributed AS IS, and the use of this information or the implementation of any recommendations or techniques herein is a customer's responsibility and depends on the customer's ability to evaluate and integrate them into the customer's operational environment. This document and the information contained herein may be used solely in connection with the NetApp products discussed in this document.
© 2012 NetApp, Inc. All rights reserved. No portions of this document may be reproduced without prior written consent of NetApp, Inc. Specifications are subject to change without notice. NetApp, the NetApp logo, Go further, faster, xxx, and xxx are trademarks or registered trademarks of NetApp, Inc. in the United States and/or other countries. All other brands or products are trademarks or registered trademarks of their respective holders and should be treated as such. TR-XXX-XX
However, please note that each portion of this lab builds on previous steps, so it is assumed that you will learn how to navigate as you go through the lab. In other words, we stop telling you how to get somewhere, but tell you to go there, because you've stepped through it in previous steps.
This is primarily built to run against the current OnCommand Insight demo database (Demo_V63.gz, dated 3/21/2012) and the DWH demo db (UPdated_dwh_Demo_630_with_performance.gz, dated 06/01/2012). Screenshots shown are accurate to those databases at the time of this publishing. Your results may vary depending on your database, but the functionality is the same regardless of the data. Total lab time is about 1-4 hours, depending on experience. It's a large book, but with lots of pictures. Your feedback is welcome; my address is at the end of this document. Have fun!
1 INTRODUCTION
Using 100% agentless technologies, OnCommand Insight automatically discovers all of your resources and provides a complete end-to-end view of your entire service path. With OnCommand Insight you are able to see exactly which resources are being used and who is using them. You can establish policies based on your best practices, enabling OnCommand Insight to monitor and alert on violations that fall outside those policies.
OnCommand Insight is a "READ ONLY" tool that inventories and organizes your storage infrastructure, enabling you to easily see all the details related to each device and how it relates to the other devices in the SAN and the entire environment. OnCommand Insight does not actively manage any component.
It is also a powerful tool for modeling and validating planned changes to minimize impact and downtime for consolidations and migrations. OnCommand Insight can be used to identify candidates for virtualization and tiering.
OnCommand Insight correlates discovered resources to business applications, enabling you to optimize your resources and better align them with business requirements. You can use this valuable information to help you reclaim orphaned storage and re-tier resources to get the most out of your current investments.
OnCommand Insight provides trending, forecasting, and reporting for capacity management.
OnCommand Insight enables you to apply your business entities for reporting on usage by business unit, application, data center, and tenant. OnCommand Insight provides user accountability and cost awareness, enabling you to generate automated chargeback reporting by business unit and application.
You can download a full customer-facing demo video: OnCommand 6.2 Full Demo with Table of Contents for easy viewing. (For best viewing, download and unzip the zip file and play the HTML doc from your local drive.)
https://communities.netapp.com/docs/DOC-14031
BEGIN LAB
2 INVENTORY AND ASSURANCE NAVIGATION AND VIEWS
Let's take a look at the discovery and inventory of storage, switches, VM systems, etc.
Log onto OnCommand Insight by selecting the OnCommand Insight icon on the desktop.
Sign on using Admin/admin123.
From the main screen, select the dropdown icon that looks like a calendar at the top left corner of the screen and select Inventory > Hosts. You should see hosts from the demo db.
You can use this menu to select the Inventory, Assurance, Performance, or Planning categories. To make it easier to navigate, you can activate the Navigation Pane as follows (we recommend you use the Navigation Pane, as it's easier when you start):
o Select Tools > OnCommand Insight Settings from the menu bar.
o Check the "Use Navigation Pane" box about halfway down the right panel of the settings box and select OK.
o Notice the full menu bar down the left side of the screen. This will make it easier for you to navigate.
2.1 INVENTORY
Expand the Inventory menu by clicking the Inventory button on the Navigation Pane as shown below.
STORAGE ARRAYS
o Select Storage Arrays from that menu. This opens the MAIN VIEW of storage.
o Select the NetApp array called Chicago.
o As you see, Insight provides a full inventory, including family, model number, serial number, and other elements. Use the scroll bar to see more columns to the right.
o Note the icons across the bottom of the screen. These are Microviews. You can toggle them on and off to see more detail about what you have selected in the main view.
o Cycle through the micro views to get an idea of what is there.
o Select the Internal Volume and Volume Microviews (as shown above).
o You can select the Column Management icon to add and subtract columns from the view. Question: In which microview can you add the DeDupe Savings column?
GROUPING
You can group the information in any of the table views (main and micro views) by selecting the dropdown in the upper right corner of the table view and selecting a grouping.
o Group the storage devices by model number. This enables you to see all of your arrays grouped by model number.
o Re-group the main view to No Grouping.
Select the SEARCH icon at the top of the Main View. In the search window below the main table, type NTAP and see the array selected. You can step through, looking for the next or previous occurrence of the word NTAP. Close the search window by clicking the X in the red box on the left.
SWITCHES
Using the same navigation, select Switches from the Inventory menu (no pictures here). View the switches and get the names, IP addresses, model numbers, and firmware in the main view. View the ports, zone members, and all other elements using the micro views. You can use the Inventory menu to provide views of hosts, storage, switches, VMs, etc. Let's look closer at paths.
PATHS
OnCommand Insight correlates the paths. The paths are the logical and physical elements that make up the service path between the application, VM, and/or host and its storage. Paths include Fibre Channel, Cloud, iSCSI, NFS, Block, NAS, etc. Now let's set up the views shown here, with detailed steps below.
Select Paths from the Inventory menu.
Group by Host, then Storage. As you can see here from the Fibre Channel perspective, when we select a path we're looking at the exact paths between the host and its storage through the redundant fibre connections.
Select the Topology micro view icon from the bottom of the Main view.
Use Filter to find host kc_ordev7 by mousing over the top of the Host column until you see a funnel, then typing kc in the Host column's top cell.
Expand kc_ordev7 and the array ports in the main view.
Select the RED violation port of kc_ordev7.
Select the Zoning, Masking, and Port micro views.
In this particular case, you can see that you have one green path, which is good, across the Fibre Channel, but the other path is blue. See the legend on the right of the topology. WHAT IS NOT CONFIGURED? (Hint: Blue and yellow make green.)
ANALYZE PATH VIOLATION
OK, let's analyze the violation.
There are a couple of ways you can analyze violations. You can select the Violation micro view from the bottom icons, or you can simply right-click the red path for kc_ordev7 in the main view and select "Analyze Violation." This opens the root cause analysis screen below.
o Expand the violation and change information in the lower pane. What is the cause of this violation?
The violation tells you that the number of ports changed from 2 to 1, which went against the redundancy policy and caused the violation. If you look down at the last changes that occurred, you can see that masking was removed, which denied access from that host through its HBA to the FA port on the array, and all the way to the volume. To fix the violation, the administrator needs to reverse those changes, and the violation automatically resolves itself.
Close the Analyze Violation screen.
VIRTUAL MACHINES AND DATASTORES
OnCommand Insight gives you a complete inventory of the VM environment. This is discovered through a data source configured to talk with Virtual Center. Details include all internal configurations and elements of the VMs, the ESX hosts, and the technologies, as well as all the information to correlate the path from a VM to its storage and details about how that storage is configured down to the disk, including performance (if you have the OnCommand Insight Perform license installed). We'll discuss performance later in this demo.
Select Datastore from the Inventory menu.
Scroll down and select DS-30 in the Main view.
Display the Topology and Virtual Machine micro views to show which VMs are using datastore DS-30.
Toggle through the micro view icons below to show the details of VM and VMDK capacities, the storage, the backend storage in the arrays, and resources. You can see full end-to-end visibility from the host, through a virtualized storage environment, to the backend storage.
Note: You can also select Hosts and Virtual Machines from the Inventory menu and cycle through the microviews, noting the end-to-end visibility.
Don't forget to use the Microview icons.
3 ASSURANCE
3.1 APPLYING POLICIES TO MONITOR CONFIGURATION AND PERFORMANCE
Now that we've gathered the inventory and pulled all this information into the database, let's start to apply policies so we can monitor and get alerts on violations. We'll talk about setting global policies, changes, and the new Violations Browser. We'll show how we can analyze performance, and talk about port balance violations and disk utilization violations. Initially, we set global policies within OnCommand Insight so it can monitor the environment and alert us when something falls outside those policies. There are several policies available.
Select Policy from the top menu bar.
Select Global Policies.
What thresholds can you set from here?
Select Violation Severity from the left menu of the Global Policy window.
What severities can you set for each threshold?
Select Violation Notification. What are the possible violation notification options?
3.2 FIBRE CHANNEL POLICY SETTINGS
We set Fibre Channel policies to determine and keep track of path redundancy. We can set options such as no SPOF (single point of failure) or redundant. We can set the minimum number of ports on the host and the storage, and the maximum number of switch hops. We can set exceptions for different volume types that would not necessarily require redundancy, like BCVs, R1, and R2. We can also set policy exceptions for smaller volumes that wouldn't have redundancy, like EMC gatekeepers.
From the Policy menu on the menu bar, select Fibre Channel Policy.
What types of redundancy can you set from here?
What is the default number of ports? Volume Type Exceptions:
What volume exemptions can you select?
You can also set redundancy policies on physical storage behind a virtualizer (Backend Path).
3.3 VIOLATIONS BROWSER
Let's take a look at the Violations Browser. The Violations Browser allows us to see the impact of violations on your business elements in one place and helps us manage the violations on all of those global policies you saw above.
From the Assurance menu on the left, select Violations Browser. (Note: At this point you might want to increase your viewing real estate by closing the navigation pane on the left. To close the navigation pane, go to Tools > OnCommand Insight Settings and uncheck the Navigation Pane box. You can use the same process to turn it back on later.)
Back in the Violations Browser, expand the All Violations explorer to reveal the violation categories. This shows violations like Datastore Latency, Disk Utilization, volume and internal volume IOPS and response times, Port Balance violations, etc. These should look familiar to you from the global policies that we just reviewed a few minutes ago.
As you can see, the violations all pile up here; there are over 12,000 violations. (NOTE: Don't let this scare you. Usually most violations are caused by events that create multiple violations per event; you fix one event and a bunch of these go away.) We can see detail on each of these violations by performing the following:
Select the Show Violations Impact icon to view all the violations in context by business entity, application, host, virtual machine, datacenter, etc.
Expand the Impacted Business Entity explorer and drill down to Earth Thermal Tracking.
Expand and select Disk Utilization.
Sort the Description column descending. Now you can see Element, Description, Severity, and Violation Type.
Select the top violation element, called Disk DISK-14 of Storage Virtualizer.
In the Impact Details microview, toggle the Host, Virtual Machines, Applications and Business Entities, and Storage icons to view the details of the impact of the violations. Here we see the impact on one application called City Limits, owned by one business entity called Green Corp > Alternate Energy > Geothermal > Earth Thermal Tracking, on one host. However, 10 virtual machines are affected by this violation from one array.
The chart in the Violation Event microview shows the history and trending of the utilization on this one disk over time. From here we can analyze the performance details, as we'll see later in this demo.
REVIEW:
What are the categories that show the impact of violations?
What business entity is impacted by these violations?
What is the utilization of this disk?
Which hosts are being affected by this violation?
Which VMs are being affected by the violation?
We'll do some troubleshooting using these violations and Analyze Performance later.
3.4 PORT BALANCE VIOLATIONS
Let's take a look at port balance violations. These violations show imbalance in SAN traffic from hosts, arrays, and switches. They are not performance-related violations.
Using either the Navigation Pane or the dropdown menu, open Assurance.
Select Port Balance Violations.
Group by Type, then Device.
Expand Hosts and sort the Device column ascending.
Select device host nj_exch002. Note that this host has a balance index of 81, which represents the difference in the distribution of traffic (the load) between the HBAs on this host. Any index over 50 indicates significantly unbalanced ports on a device.
Select the Switch Port Performance microview. Note that over 88% of the traffic distribution is going across one HBA and only 11% across the other. A failure on the heavily used HBA could choke that application. This could indicate that port balancing software is not configured correctly on this server, not configured at all, or not installed.
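OnCommand Insight computes the balance index internally and the exact formula isn't documented here; a plausible simplification (an assumption, not the product's formula) treats it as the spread between the busiest and least busy port's share of total traffic:

```python
# Illustrative balance-index sketch, NOT OnCommand Insight's actual
# formula: the spread, in percentage points, between the busiest and
# least busy port's share of a device's total traffic.

def balance_index(port_traffic):
    """port_traffic: traffic values (e.g. MB/s) per HBA or port."""
    total = sum(port_traffic)
    if total == 0 or len(port_traffic) < 2:
        return 0.0
    shares = [100.0 * t / total for t in port_traffic]
    return max(shares) - min(shares)

# A heavily skewed pair like the exchange host above scores high;
# anything over ~50 would flag significantly unbalanced ports.
print(balance_index([885, 115]))  # -> 77.0
print(balance_index([500, 500]))  # -> 0.0
```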
Now collapse Hosts and expand the Storage devices.
Select various storage devices and view the traffic distribution in the Switch Port Performance microview to understand the balance across the storage ports.
These port balance violations provide valuable data on how your environment is configured and optimized. They allow you to quickly determine where you need to optimize your configurations based on actual usage. These are balance violations within each device, not necessarily traffic-related performance violations. We'll look at performance a bit later.
3.5 DISK UTILIZATION VIOLATIONS
We looked at Disk Utilization Violations in the Violations Browser a few minutes ago because we got alerted to a violation. But if you didn't get the alert, you can go directly to Disk Utilization Violations here and troubleshoot your issues much as we did in the Violations Browser. The difference is that the Violations Browser breaks down the violations by how they impact your business entities, applications, data centers, and so on, so you can troubleshoot by business priorities, while Disk Utilization Violations lets you easily see your most critical utilization issues and troubleshoot from the disk utilization violation back to the hosts. You can also add columns to show applications and business entities if you want.
Let's take a look at how you can use Disk Utilization Violations to quickly identify and drill down to where your issue is.
Select Disk Utilization Violations from the Assurance menu.
Sort the Utilization column descending to bring your heaviest utilization to the top.
Here you see the utilization of each disk that exceeded the Disk Utilization threshold we set earlier in our Global Policy. For each violation you see the disk, the array, the hosts that access this disk, the date and time the violation occurred, and the percentage of utilization, as well as IOPS and throughput.
Select the disk with the highest utilization.
Now select the Volume Usage of Disk microview to get details on volume usage and performance.
Sort the Disk IOPS descending and select the top Volume Usage of Disk. Here I see the volume with the highest usage, along with the disk throughput and percentage info by volume and host.
(Figure callouts: Exceeded Threshold; Host with highest IOPS; Switch Traffic OK.)
Select the Switch Port Performance microview. I see that my load appears to be balanced (Distribution column) across the storage ports, so this is most likely not a SAN or network configuration issue.
Since this disk did cause a utilization violation, I can identify possible host candidates that could be causing the high utilization on the disk, OR I can see that the disk may have too many volumes carved from it, in which case I may need to spread that load out across more disks.
4 PERFORMANCE
OnCommand Insight provides performance information end to end. This is different from the violations we discussed above in that it provides pure performance information for volumes, internal volumes (FlexVol volumes), storage pools (aggregates), and disks. Performance also covers VMs, switches, ESX, Hyper-V, VMDKs, and datastores. From here we can troubleshoot congestion, contention, and bottlenecks; identify heavily used storage pools, volumes, disks, and SAN ports; find possible candidates for physical-to-virtual host virtualization; and optimize your storage and tiering.
From the Navigation Pane or the dropdown menu, expand the Performance menu.
Here we see that OnCommand Insight collects and shows you storage performance, switch performance, datastore performance, VM performance, and even application performance as it relates to storage performance.
4.1 STORAGE PERFORMANCE
From the Performance menu, select Storage Performance.
Sort the Top Volume IOPS column descending (if not already done).
Select the array called Sym-0000500743… in the main view (it should be near the top).
Use the scroll bars to see more performance info in all windows.
Use the horizontal slide bar in the main view to see the volume response times and IOPS, as well as the disk utilization and IOPS columns (far right). Notice there is no internal volume performance information, because the EMC Symmetrix does not contain any. We'll look at a NetApp array shortly to see internal volume (FlexVol) performance.
Now, in the main view, notice the column called Volume Partial R/W. This indicates there are volumes on that array that are misaligned (we'll see more detail later).
Select the microviews at the bottom to show details of disk performance and volume performance. Which microviews did you open? (Hint: view below.) Notice this provides detailed throughput, IOPS, and response times at the volume and disk level.
Close the Disk Performance microview.
Select the column customize icon in the header of the Volume Performance microview.
Use the vertical scroll bar to view all the columns that can be added to or removed from this report.
Select the Partial R/W and Storage columns and click OK. This adds columns to the Volume Performance report showing you each volume on each array that is misaligned. (Note: you can get a complete list of all your misaligned volumes by selecting all the arrays in the main view above.) Additionally, you can group the volumes by storage to make it easier to view all the misaligned volumes across your entire enterprise by array. (See figure below.)
Now select the Symmetrix-FAST array and toggle the Chart microview on. Here you see OnCommand Insight showing EMC FAST auto-tiering. You can also see NetApp hybrid aggregates by selecting a NetApp array. We can also chart this performance over time.
(Figure callouts: Partial Read/Write indicates volume misalignment; notice the FAST volume performance.)
OnCommand Insight provides complete end-to-end performance views through virtualized storage. Let's take a look.
Select the storage array called Virtualizer from the main view. (Note: This is a V-Series machine, but OCI provides the same visibility through other virtualizers as well.)
Toggle on the Virtual Machine Performance microview.
Toggle the Backend Volume Performance and Datastore Performance microviews on.
Use the slide bars at the bottom of the microviews to see more of the performance columns in each view. (Whoops… there is a red mark in the Latency column. We will analyze this later.)
I know this is a bit busy, but I wanted to show you that you can have deep performance visibility from the VM, through the datastore, to the frontend virtualizer array, and through the backend volumes. You can also drill down performance to the disks on that backend array, and you can select the Switch Port Performance microview to visualize the performance on the SAN. So you can see very deep performance information from end to end. We will "analyze" these performance views from end to end a bit later.
4.2 SAN PERFORMANCE (SWITCH PORT PERFORMANCE)
Switch performance is the actual performance on the SAN at the switch.
Select Switch Port Performance from the Performance menu. OnCommand Insight knows whether the switches are connected to arrays or hosts, so it shows you the performance in the context of the host or array instead of from the switch perspective.
Using the dropdown at the top of the table, group the main view by "Connected Device Type then Name".
Using the dropdown next to it, set the timeframe to "Last Week" and hit the refresh icon to the right.
Sort the Distribution column descending (arrow pointing down).
Expand Hosts.
Expand hosts ny_ora1 and exchange_ny1.
If you look at the Value and Distribution columns, you can see how the HBAs are balanced on these hosts. On host ny_ora1 you see three HBAs that are balanced very well. But looking at host exchange_ny1, you see that one of your two HBAs has over 95% of the traffic load on it while the other has less than 5%. So you can see an imbalance of the load across your HBAs. Perhaps the multipath software is not configured correctly, doesn't work, or is installed but not turned on. However, also note that one HBA is 4 Gb and the other is 2 Gb; the admins may have purposely configured this host's traffic to compensate for the slower HBA…
Select the Port Performance Distribution and Port Performance microviews to view this analysis over time.
Select exchange_ny1 from the main view above. View the performance and distribution of both HBAs. If you select one or the other, the performance and distribution charts change to show you the details of what you've selected. This behavior is the same throughout the OnCommand Insight GUI.
4.3 CANDIDATES FOR HOST VIRTUALIZATION BASED ON ACTUAL PERFORMANCE
This performance view from the host SAN perspective shows you which are the busiest servers and which are candidates for virtualization.
Toggle off the two performance charts.
Collapse the expanded columns using the Collapse All Groups icon on top of the main view.
Expand Hosts again. Notice your busiest servers are at the top of the list.
Use the vertical slide bar to go to the bottom of the host list to see your least busy hosts. As you see here, there are many hosts near the bottom that have hardly any traffic. Note: If you have a virtualization project going on, you can very quickly isolate which physical hosts don't have much traffic to their applications, and conduct your due diligence on those applications for possible relocation to a VM environment.
You can also use the same information to choose which ESX hosts are good candidates to receive those applications, based on how much traffic they are generating on the SAN.
4.4 STORAGE ARRAY PERFORMANCE BASED ON SAN TRAFFIC
We use the same logic and methods to optimize the traffic across the storage ports of the arrays.
Collapse the Hosts section and expand the Storage section.
You can see the busiest arrays at the top.
Expand storage array XP 1024 to see the traffic flow through the storage ports. In this case, over 80% of the traffic is going across two of the six ports on the storage array. Not very well balanced. You can rebalance this traffic, OR, using this information, you can select a lesser-used storage port to provision your NEXT Tier 1 application to. This helps you intelligently provision and optimize your environment using real traffic analysis.
4.5 STORAGE TIERING AND ANALYSIS
Similar to what you did with the hosts, you can see the storage arrays that are NOT so busy.
Scroll to the bottom of the storage array list.
There are several expensive Tier 1 Symmetrix and other arrays at the bottom of this list that have very little traffic accessing them. These arrays may have lots of data on them, but nobody's using it. Armed with this information, you could take a look at the application data on these expensive Tier 1 arrays and move the applications to less expensive Tier 2 or Tier 3 arrays, OR archive it. Then you can decommission or repurpose these expensive arrays. (LOTS of ROI potential here.)
4.6 SWITCH ISL TRAFFIC VISIBILITY AND OPTIMIZATION
OnCommand Insight shows you only the ISLs (Inter-Switch Links) under the Switches category.
Collapse the Storage category in the main view and expand the Switch category.
Expand Switch 78 and hcis300.
As we saw with hosts and arrays, we can see exactly how well balanced the traffic is across the ISLs. Switch hcis300 is well balanced, but on Switch 78 we see that 90% of the traffic is going across one switch link and only 9% across the other. If this is a trunk, it is severely out of balance.
We also see which are the busiest and least busy switches. This allows us to balance out (optimize) our environment as well as weed out the least busy switches.
4.7 VIRTUAL MACHINE AND DATASTORE PERFORMANCE: TROUBLESHOOTING END-TO-END PERFORMANCE ISSUES USING "ANALYZE PERFORMANCE"
Let's put all this performance information to good use.
USE CASE: I may have gotten a call from a user complaining that the application on VM-70 is running slow, or I may have received an alert from a threshold being breached. Let's troubleshoot the problem.
Select Virtual Machine Performance.
Select Custom from the "Timeframe" dropdown menu next to the grouping menu. Enter the dates January 1, 2012 through now.
Then hit the green refresh (recycle) button next to the dropdown.
Sort the VM Disk Top Latency column descending to get the longest latency at the top.
Here we see that VM-70 does not, in fact, appear to have any performance issues, but we do see very high CPU, memory, and datastore latency on VM-60 and VM-61.
Look at column 2. The common factor between VM-70 (the user complaint) and VM-60 is DS-30.
Open the Datastore Performance microview to validate the high latency time.
Right-click on VM-60 and select Analyze Performance.
This opens an analysis of everything associated with VM-60 and DS-30.
See the tabs across the top of the window. Each of these tabs provides in-depth visibility into performance within each category.
Selecting the Disk tab, I see that although there are a few high "top" utilization values, overall utilization and IOPS are relatively low, so we can rule out a hot disk issue.
Select the Volumes tab and the Internal Volumes tab. I see there are some relatively high top response times but still very low IOPS, which tells me other factors are affecting response time and the slowness of the application on VM-70.
Select Backend Volumes. We see the storage is virtualized, and we can see the performance on the backend volumes here. I see some possibly higher IOPS, but still no glaring performance issues.
To make sure I don't have a SAN problem, I select the Switch Performance tab. It shows an imbalance between the 2 Gb HBAs on ESX1, where VM-60 and VM-70 reside, and a potential optimization or outage issue, but no gridlock.
Select the Hosts tab. This tab shows me that host ESX1 is the same host that holds VM-60 and VM-70. VM-60 appears to be causing very high CPU and memory usage, which is causing contention with time sharing during disk access, thus creating high disk latency. But the disk IOPS are still very low.
Deduce that VM-60 is probably not sized right for the application that is driving it hard. This is probably what's causing the disk latency issue, so the chances of an actual disk problem are slim.
4.8 VM PERFORMANCE
VM Performance helps you troubleshoot the same scenarios. Here you can understand what's going on in the whole environment.
Select Virtual Machine Performance.
Sort the Top Disk Latency column in descending order so the largest latency rises to the top. In this case, VM-61 is chewing up a lot of memory and a lot of CPU time but using low disk IOPS. The VM appears to be causing the latency issues.
Select VM-61. You can open a microview and see the VMDK performance as well.
Add the Chart microview.
You can also break it out by volume performance and datastore performance, giving you a more holistic picture of the environment and helping you troubleshoot to resolution.
The takeaway is that you can troubleshoot performance issues from many different angles and go in many different directions to quickly narrow down the problem.
4.9 APPLICATION AND HOST PERFORMANCE
You can add your applications and hosts to any of these performance views to help you understand how performance is affecting your applications. That is what matters to the business customer. You can drill down and understand where the performance issue is, with visibility from the application all the way to the disks.
Scroll down to ESX1.
Use the horizontal slide bars in the main view and microviews to see performance info.
OnCommand Insight shows you performance from the host perspective all the way back to the storage, but remember that it does not have agents on the host, so it cannot show you the details of the performance on the host itself.
Review questions:
What is the value of Analyze Performance?
In which areas can we view performance metrics under Analyze Performance?
5 PLANNING TOOLS
5.1 TASK AND ACTION PLANNING AND VALIDATION
OnCommand Insight has two planning tools to help you plan, validate, and monitor changes in your environment. One is a change management tool and the other is a migration tool for switches only.
The change management tool (or "what-if" tool) helps you create tasks, and actions within those tasks, using a wizard. It helps you logically configure the changes you need to make, test and validate those changes before you make them, and monitor the progress of changes as you make them. This significantly reduces your risk when making changes, because you can pretest them before you make any actual changes in your environment.
NOTE: Remember, OnCommand Insight is a READ-ONLY tool, so it does not perform any active tasks. Use it in the planning, validation, and execution monitoring of your change management.
Select the Planning menu.
Select Plans to access the tool.
Select the task ID oadmin 01082007 – Replace HBA Clearcase1.
Notice the Actions list for the task. These are generated by you to help you logically and accurately list out the tasks.
To add more actions, simply right-click in the action area and select "Add Action".
In the new action window, scroll down and select the action you want to perform. You can add a description and other parameters, then select OK.
Then you can pre-validate the actions to ensure you know the results of each action BEFORE you actually perform the task. To do this, right-click the task and select Validate Task.
As you see below, OnCommand Insight validates each action against the current configuration in your environment to show what has been completed correctly (GREEN CHECKMARK), what is not completed (BLANK BOX), and what is not completed correctly (RED X).
When you build the action list, OnCommand Insight automatically compares your planned changes to your existing environment and anticipates any future violations that could occur if you made these changes without correcting the planned actions, OR violations that already exist in your environment.
Once you finish creating your list of action items, you can right-click and validate the actions as many times as you want until they are completed. OnCommand Insight validates every one of these actions and shows you whether they are complete, done incorrectly, or not completed at all. It gives you a preview of potential issues before you make the changes, thus lowering your risk.
5.2 SWITCH MIGRATION TOOL
The migration tool provides you with instantaneous visibility into all of the environment and business entities that will be affected by a migration to new or updated switches. Say you want to just update the firmware on a switch. What if it goes down in the middle of the upgrade? What does it affect in your environment? Knowing this ahead of time can reduce your risk by giving you the complete picture of who and what will be affected by the interruption.
The migration tool allows you to tell OnCommand Insight which switches you want to upgrade or replace. Because OnCommand Insight knows all the hosts, storage arrays, volumes, business units, and applications that are affected by this change, it can provide you with the current violations as well as the future violations that will occur when the switches are pulled out. This enables you to validate the total impact of the changes you want to make BEFORE you make them, so you can reduce your risk by fixing issues before they occur.
NOTE: Remember, OnCommand Insight is a READ-ONLY tool, so it does not perform any migration tasks. Use it in the planning and execution monitoring of your migration.
Under the Planning menu, select Migrations. This shows you the migration tasks already created and the existing impact of the proposed changes on your business entities.
To add a new task, right-click in the task area and select Add Task.
Complete the task details above and click Next to select the switch(es) to migrate.
Select the switches to be updated or replaced and click Finish.
Select the new task in the main screen and use the microviews to see the affected paths, impact, and quality assurance views.
Using this information, you can speed up switch migrations, because it cuts the due diligence time and lowers your risk: you know the impacts before you take any actions.
6 DATA WAREHOUSE
6.1 INTRODUCTION AND OVERVIEW
Let's introduce you to the data warehouse. We'll talk about the datamarts and navigation, then we'll go into the reports, and we'll finish by showing you how to create ad hoc reports using Query Studio.
The data warehouse is made up of several datamarts. Datamarts are sets of data that relate to each other.
Open a browser and go to http://localhost:8080/reporting.
Log on using admin/admin123.
If you receive this page, uncheck the "Show this page…" option and select My Home.
Data warehouse (DWH) home page: Public Folders.
The data warehouse has several built-in datamarts. Above you see the three primary datamarts, called the Chargeback Datamart, Inventory Datamart, and Storage Efficiency Datamart. Additionally, there are two folders that contain other datamarts for Capacity and Performance.
Select the Capacity 6.3 folder.
As you can see, there are other capacity-related datamarts here, including the Internal Volume, Volume, Storage and Storage Pool, and VM Capacity datamarts. Datamarts provide you with easy-to-use data elements related to those specific categories, making it easier for you to use the existing reports and, more importantly, helping you create your own custom reports using the drag-and-drop technology we'll show later in this lab.
Select the Storage Capacity Datamart.
(Figure callouts: datamarts and folders; Public Folders.)
There are four folders located within EVERY datamart. Most built-in reports are in the Reports folder. Any custom reports you create MUST be saved in the Customer Reports or Customer Dashboards folders in order to preserve them during upgrades.
Select Dashboards (notice the BREADCRUMBS to help you navigate).
Which dashboards are located in the folder?
6.2 PLAN - CAPACITY FORECAST DASHBOARD
The data warehouse has over 200 built-in dashboards and reports. Let's take a look at a few.
The Capacity Forecast dashboard provides a history of how storage has been used, as well as trends and forecasts into the future, by data center and by tier.
Select the Capacity Dashboard. This may take a bit of time to paint, so be patient.
The Capacity Forecast dashboard provides you with trending and forecasting of your capacity across your entire environment. NOTE: The data in your picture may vary depending on the demo DB you are using and the date (because it's a trending chart).
While we are at it, let's also stage the Tiering Dashboard in a new window, by holding the Shift key and selecting the Tiering Dashboard, so we can discuss it as well in a few minutes.
(Figure callouts: Folders; Dashboards; Reports; Navigation; Breadcrumbs.)
When it first opens, you see in the upper left the Capacity Consumption Forecast report by datacenter and tier. The initial view shows how much storage is left in each datacenter, by tier, before it reaches 80% of capacity (adjustable by the user). The graph on the right depicts the usage trending and forecasting over time. The "Reset Selection" button resets the graphic to show storage trending across the entire enterprise.
Select the Tokyo/Gold-Fast block on the matrix. Notice the graph at the right changes to reflect the storage consumption trending and forecasting for that tier at that datacenter.
Reset the Storage Capacity Trend chart by clicking Reset Selection next to the matrix chart. The chart on the right will show the trending and forecasting for the entire enterprise.
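The forecasting behind this dashboard can be approximated with a simple linear trend. This sketch is illustrative only (the product's forecasting model isn't documented here); it fits a line to historical used-capacity samples and projects how many days remain until usage crosses the 80% warning threshold:

```python
# Illustrative capacity forecast, NOT OnCommand Insight's model:
# least-squares linear fit of daily used capacity, projected forward
# to the day usage reaches threshold * raw capacity.

def days_until_threshold(daily_used_gb, capacity_gb, threshold=0.80):
    """Days from the last sample until usage reaches the threshold
    (None if there are too few samples or usage is not growing)."""
    n = len(daily_used_gb)
    if n < 2:
        return None
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(daily_used_gb) / n
    cov = sum((x - mean_x) * (y - mean_y)
              for x, y in zip(xs, daily_used_gb))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var              # GB per day
    if slope <= 0:
        return None                # flat or shrinking: no crossing
    intercept = mean_y - slope * mean_x
    crossing_day = (threshold * capacity_gb - intercept) / slope
    return max(0.0, crossing_day - (n - 1))

# Growing 10 GB/day from 500 GB toward 80% of a 1000 GB tier:
print(days_until_threshold([500, 510, 520, 530], 1000))  # -> 27.0
```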
Scroll down the dashboard to view the list of reports on the right side. Each of the dashboards has a list of related reports on the lower right-hand side. You can select from any number of different reports to provide the detailed information you need.
The dashboard also contains some dial graphics showing storage consumption and capacity in your enterprise and in each datacenter.
Continuing down the left side of the dashboard, these charts show business-level storage consumption by business entities. Here we can drill down to see usage by Tenant, Line of Business, Business Unit, and Project.
Right-click in this graphic and you can drill down to view storage usage by line of business, drill again to business unit, and again by project.
As you can see, you get really detailed information on consumption by your business entities, from Tenant, LOB, Business Unit, and Project down to Application, very quickly.
6.3 TIER DASHBOARD
Let's take a look at the Tiering Dashboard we opened a few minutes ago by selecting it from the tabs at the bottom of your Windows screen.
Note: Your data may vary depending on the database used for this demo.
This dashboard gives us a different perspective on how storage is growing and how it is being used. As you see, it looks like the Gold tier has remained relatively stable over the past few months, while Gold-Fast storage, which is more expensive, has grown considerably over the past couple of months. This tells you how your tiering initiatives are progressing. Bronze, which has hardly grown at all, could be an indication that we're spending too much money on storage. You might want to review your storage usage using OnCommand Insight to see how the storage is being consumed, and by whom.
Scroll down. Let's look a little closer. OnCommand Insight shows storage usage by business units, applications, and tier. This enables you to understand how storage is being used. You can also view it by data center, tier, and business entity.
As we did in the last report, you can right-click and drill down to look at consumption by tenant, line of business, business unit, project, and application. You can understand how your data is being consumed at multiple levels and from multiple aspects.
Select the "Return" icon at the top right of the Tier Dashboard to return to the folder.
There is a new storage tier report located in the Storage and Storage Pool datamart. Let's take a quick look at it.
Use the breadcrumbs to navigate back to the Capacity 6.3 folder.
Then select the Storage and Storage Pool Capacity datamart and the Reports folder.
Next, select the Storage Capacity By Tier report to view the report below. This report shows your capacity by tier and how it trends over time. It also provides detail and a summary report at the bottom, showing each array's tiers, how much capacity is used, and the percentage (lots of information on a single report).
6.4 ACCOUNTABILITY AND COST AWARENESS
The standard data warehouse chargeback reports are more about accountability than full chargeback. We'll show you this now. We'll also show you how to create your own powerful "custom" chargeback/showback reports using Business Insight Advanced later in this lab.
Select Public Folders in the breadcrumbs at the top left of the data warehouse window.
Select the Chargeback Datamart.
In the Chargeback Datamart, select the Reports folder to access various reports that show capacity and accountability.
Select Capacity Accountability by Business Entity and Service Level Detail. Here you have the option to customize this report to your needs by selecting service levels, resource types, applications, and host and storage names. You also have the option of selecting the business entity by using the dropdown to select any or all of the business entities and projects.
Select All in each category to give you a good representation of the in-depth reporting. Then click Finish.
The report provides a very detailed view of capacity utilization by business entity and application: the host each application is running on, the storage array, the volume, and the actual provisioned and used storage. The report is grouped by business unit as well as by application, giving you a good representation of who's using what storage.
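A minimal showback calculation along the lines this report implies: charge each business entity its used capacity times a per-GB rate for the tier consumed. The tier rates and entity names below are invented for illustration; the real reports pull these values from the Chargeback Datamart.

```python
# Illustrative showback sketch; rates and entities are invented,
# not values from the Chargeback Datamart.

TIER_RATE_PER_GB = {"Tier 1": 3.00, "Tier 2": 1.50, "Tier 3": 0.50}

def showback(usage_rows):
    """usage_rows: (business_entity, tier, used_gb) tuples.
    Returns {business_entity: cost for the period}."""
    costs = {}
    for entity, tier, used_gb in usage_rows:
        cost = used_gb * TIER_RATE_PER_GB[tier]
        costs[entity] = costs.get(entity, 0.0) + cost
    return costs

rows = [("Green Corp", "Tier 1", 200),
        ("Green Corp", "Tier 3", 1000),
        ("Acme", "Tier 2", 400)]
print(showback(rows))  # {'Green Corp': 1100.0, 'Acme': 600.0}
```

Grouping the same rows by application instead of entity gives the per-application view the report also offers.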
Note the scroll bar for scrolling on page 1; you can also use the Page Up/Page Down links at the bottom to go to page 2, and so on.
Select the Return icon in the upper right to return to the folder of reports.
6.5 UNCHARGED STORAGE
You can also generate reports that help you understand what storage is NOT being accounted for.
Select "Capacity Accountability by Uncharged Capacity per Internal Volume". This provides you with a complete listing, by array and volume, of how much storage is not being charged or accounted for.
You get FULL accountability of which storage is being accounted for and which storage is NOT, across the entire enterprise, regardless of storage vendor.
6.6 IOPS VS CAPACITY REPORTING IN THE DATA WAREHOUSE
Let's look at performance versus capacity and orphaned storage by last access. This adds another dimension to how your storage is being used.
Open the Performance Datamart (hint: use the breadcrumbs to select Public Folders and then select the Performance Datamart).
Select the Internal Volume Daily Performance folder. This provides a really good pictorial view of how your storage is being used.
Select Reports and select Allocated used internal volume Count by IOPS Ranges. This provides a capacity-versus-IOPS report, which is very interesting.
Select the Last Year time period.
Select All storage models and tiers and click Finish. Selecting all arrays and all tiers gives you a full view of how your storage is being used (or not being used…).
Looking at the results: remember, this is storage accessed over the past year. The resulting report shows you all the storage that has (or has not) been accessed over the past year.
As you see from the first bar, there are over 7,300 volumes that have not been accessed in the past year. If we look at it in terms of size, over 3.4 PB has had zero access in the past year. Note: this is actual customer data, but the names have been sanitized.
You can see how impactful this is. There is over 3.4 PB of storage that has had zero use for a year. This information enables you to start making business decisions about the storage and to better understand how it's being used, so you can reclaim and re-purpose some of it. (Talk about ROI!)
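The "no access in a year" cut the report applies can be sketched as a simple filter over per-volume activity totals. A minimal illustration (the volume names and sizes below are made up, not from the lab database):

```python
# Hypothetical per-volume rollup: capacity plus total IOPS over the past year
volumes = [
    {"name": "vol01", "capacity_gb": 500, "iops_last_year": 0},
    {"name": "vol02", "capacity_gb": 250, "iops_last_year": 12000},
    {"name": "vol03", "capacity_gb": 800, "iops_last_year": 0},
]

# Reclaim candidates: volumes with zero I/O over the reporting period
idle = [v for v in volumes if v["iops_last_year"] == 0]
idle_gb = sum(v["capacity_gb"] for v in idle)  # capacity you could re-purpose
```

The report applies the same idea per array and per tier before charting the totals.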
6.7 DIGGING INTO THE DETAILS
These charts are really nice, but you need the detail to effectively work on identification and recovery. OK, let's go look at the underlying details.
Go back to the Volume Daily Performance 6.3 folder and drill down to Reports (hint: it's in the Performance Datamart).
Select the Array Performance report. This gives you a complete breakdown of the performance for all storage, from the arrays all the way down to the volumes.
Select one year and set the IOPS parameter you want to filter on (I usually start at the default).
This report starts with the Orphan Summary
Select Page down to view the Storage array summary
As you see, this is pretty high-level. It shows the total amount of raw and allocated capacity in each storage device versus the total IOPS and the max IOPS actually used over the past year. This tells a very compelling story, but it's still high level.
Page down a few pages to reach the bottom of this section. You see a Glossary of terms explaining the column headings.
Now continue to page down to the Host tables. These show you the hostname, the raw and allocated capacity by host, and the IOPS accessed over the past year. This is more detail than the Storage tables above.
Page down past the host tables and you get the orphaned-volumes perspective. Here is a great deal of detail that you can use. These are all the volumes that have not been accessed in a full year. It shows you the array name, volume capacities, and hostname, as well as the applications and tiers that have not been accessed in the last year.
Page down to the "Volume by IOPS" tables (may be several pages down). These show you the storage array, volume, capacity, host, application, tier, and max and total amount of IOPS. So we can say it's a pretty well-rounded report that shows you actual usage (or lack thereof) so you can go reclaim the storage that is not used.
6.8 VM CAPACITY REPORTING
There are several different reports in the VM Capacity Datamart.
Navigate to the VM Capacity 6.3 Datamart. As you see, we have several reports built in here already.
Select VM Capacity 6.3 and then navigate into the Reports folder.
Select VM Capacity Summary.
Select All so we see the VM capacity across the entire enterprise (spanning multiple vCenters).
The results show all the VMs, their capacity, the datastore, the actual capacity, the VM names, the provisioned storage, and the commit ratio of each VM across your entire environment. NOTE: I paged down to the bottom so you can see the total storage and commitment across your whole enterprise, plus a glossary of terms.
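The commit ratio shown in the summary is thin-provisioning overcommitment: capacity promised to VMs divided by what the datastore actually has. A minimal sketch, with invented figures (not Insight's exact internal formula):

```python
def commit_ratio(provisioned_gb: float, datastore_capacity_gb: float) -> float:
    """Overcommitment: capacity promised to VMs vs. capacity that exists."""
    return provisioned_gb / datastore_capacity_gb

# Thin-provisioned VMs promised 600 GB on a 400 GB datastore
ratio = commit_ratio(600, 400)  # a ratio above 1.0 means overcommitted
```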
Select the "return button" in the upper right corner of the report (looks like a left-turn arrow).
Next, select the Inactive VMs report to show VMs that have not been accessed in a defined period of time (default 60 days).
Set this time threshold and click Finish.
This is an excellent report showing you which VMs are powered off, how long they have been powered off, and how much capacity each one of them is holding that nobody else can use. It gives you all the details, including the datacenter, VM OS, ESX host, cluster, VMDK, and how long it's been powered off. Armed with this information, you can go recover these VMs and reclaim the storage.
7 CREATE AD-HOC REPORT
Let's show you how easy it is to create custom reports in the data warehouse.
7.1 HOW TO CREATE A CUSTOM SHOWBACK/CHARGEBACK REPORT USING BUSINESS INSIGHT ADVANCED
Below is a great example of the custom chargeback or showback report that you will create. It shows usage by business entity and application, including variable cost per VM based on configuration, fixed overhead, and storage usage.
STEPS TO CREATE THIS REPORT
Watch a video on how to create this report. Note: you need a user name and password for this community. To obtain them, click the Become a Member link.
The OnCommand Insight Reporting Portal is accessed through http://<reporting-server>:8080/reporting.
Enter your user name and password credentials.
From the Welcome page, select My home.
From the Launch menu (at the top right corner of the OnCommand Insight Reporting portal), select Business Insight Advanced.
From the list of all packages that appears, click the Capacity <version> folder and then click VM Capacity <version>.
Create a new report by selecting New from the dropdown in the upper left corner, or Create New if you are on the Business Insight Advanced landing page.
From the New pop-up of pre-defined report layouts, choose List and click OK.
In the lower right pane, select the Source tab and expand Advanced Data Mart from the VM Capacity package.
From the Advanced Data Mart, expand Business Entity Hierarchy and Business Entity, then drag Tenant onto the report work area.
Collapse Advanced Data Mart and expand Simple Data Mart.
From Simple Data Mart, drag Application onto the report work area to the right of the Tenant column. (TIP: Make sure you place it on the blinking gray BAR to the right of the previous column or it will give you an error.)
Now we are going to drag multiple columns to the palette to save time building the report. We will be reporting on the total of processors (cores) and the memory that is configured for each VM. So let's grab the following elements from the VM Dimension under the Advanced Data Mart.
From Advanced Data Mart, expand VM Dimension.
Select the next columns IN THE FOLLOWING ORDER:
From Advanced Data Mart > VM Dimension, hold the Control key and select the following columns (in order):
o VM Name
o Processors
o Memory
Click and drag VM Name onto the report work area to the right of the Application column. NOTE: All the columns should follow in the order you selected them, similar to the screenshot below (your data will differ but the columns will be the same).
Now let's bring capacity information onto the report.
From Simple Data Mart, hold the Control key and select the following columns (in order):
o Tier
o Tier Cost
o Provisioned Capacity (GB)
Click and drag the Tier column onto the report work area to the right of the Application column. NOTE: All the columns should follow in the order you selected them, similar to the screenshot below (your data will differ but the columns will be the same).
To calculate the storage cost (tier cost per GB times provisioned GB), hold the Control key and select Tier Cost and Provisioned Capacity (GB).
Then right-click the Provisioned Capacity column, select Calculate, and select the multiplication calculation.
Business Insight Advanced creates a new column for you, completes the calculations, and puts it in the report.
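The calculated column is nothing more exotic than tier cost per GB times provisioned GB. As a sketch (the $2.50/GB rate is invented for illustration):

```python
def storage_cost(tier_cost_per_gb: float, provisioned_gb: float) -> float:
    """Mirrors the report's calculated column: Tier Cost x Provisioned Capacity."""
    return tier_cost_per_gb * provisioned_gb

cost = storage_cost(2.50, 100)  # 250.0
```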
Next, let's format and re-title the column.
Right-click the new column header and select Show Properties.
In the lower right corner, scroll down to the bottom of the Properties box and select the ellipsis in the Data Item Name box. Change the name to Storage Cost and click OK.
Note the column heading is now Storage Cost.
Now select one of the numeric values in that column and select the Data Format ellipsis from the Properties box in the lower right corner.
From the Data Format dialog box, select Currency from the Format Type dropdown.
As you see from the Properties dialog box, there are lots of options you can set to format the currency numbers in this column. The default is USD, so let's just click OK to accept the default. You will see the column reformat to USD.
Here is our current report. Let's filter out storage that is NOT being charged.
Select any BLANK cell in the Tier Cost column and click the Filter icon in the top toolbar.
Select Exclude Null.
Notice all the rows that had NO cost associated with their tiers are removed, leaving you with only the storage that has charges associated with it. (TIP: in another report you can reverse the logic and show only storage that is NOT being charged.)
You can also format the Tier Cost column as USD currency if you want.
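Exclude Null is just dropping rows whose Tier Cost is missing, and the reversed report keeps only those rows instead. A sketch with hypothetical rows:

```python
rows = [
    {"tier": "Tier 1", "tier_cost": 3.00},
    {"tier": "Tier 2", "tier_cost": None},  # uncharged storage
    {"tier": "Tier 3", "tier_cost": 1.50},
]

charged = [r for r in rows if r["tier_cost"] is not None]  # Exclude Null
uncharged = [r for r in rows if r["tier_cost"] is None]    # reversed logic
```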
OK, that was easy, but not complete. Let's add other cost factors into your chargeback report: the cost of VM service levels by configuration and the fixed overhead costs used by each application.
ADDING VARIABLE COSTS PER VM TO YOUR CHARGEBACK REPORT
Let's say the customer wants to charge per VM based on the number of CPUs and the memory it's configured with. To do that, we first need to create a VM Service Level composed of the number of CPUs and the memory configured for each VM, then allocate a cost per service level.
To create a VM Service Level, we are going to drop in a small conditional expression that builds the service levels per VM. This is an easy example of the flexibility of Business Insight Advanced in creating reports. (DON'T panic: you can skip the conditional expression and just put a fixed cost on each VM if you want. See the Overhead example later on… but humor me here in this lab.)
Select the Tier column to mark where we want to insert the new columns.
Select the Toolbox tab in the lower right corner and double-click the Query Calculation icon.
In the Create Calculation dialog box, name the column VM Service Level, select Other Expression, and click OK.
In the Data Item Expression dialog box, copy and paste the following VM Service Level conditional expression into the Expression Definition box and select OK. (Note: if you are remoted into the OnCommand Insight server, you may have to create a text document on the OnCommand server desktop to paste this into before pasting it into the Expression box.)
Below is an example of the conditional expression that gives you the if-else condition for VM Service Level:
IF ([Processors] = 2 AND [Memory] < 2049)
THEN ('Bronze')
ELSE (IF ([Processors] = 2 AND [Memory] < 4097)
THEN ('Bronze_Platinum')
ELSE IF ([Processors] = 4 AND [Memory] < 8193)
THEN ('Silver')
ELSE IF ([Processors] = 4 AND [Memory] > 8193)
THEN ('Silver_Platinum')
ELSE IF ([Processors] = 6 AND [Memory] > 8191)
THEN ('Gold')
ELSE IF ([Processors] = 8 AND [Memory] > 16383)
THEN ('Gold_Platinum')
ELSE ('tbd'))
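If it helps to reason about the expression, here is the same branching logic rendered in Python (memory is in MB; note that combinations falling outside the bands, such as 4 processors with exactly 8193 MB, land in the 'tbd' catch-all, just as in the expression above):

```python
def vm_service_level(processors: int, memory_mb: int) -> str:
    """Python rendering of the lab's conditional expression (same thresholds)."""
    if processors == 2 and memory_mb < 2049:
        return "Bronze"
    if processors == 2 and memory_mb < 4097:
        return "Bronze_Platinum"
    if processors == 4 and memory_mb < 8193:
        return "Silver"
    if processors == 4 and memory_mb > 8193:
        return "Silver_Platinum"
    if processors == 6 and memory_mb > 8191:
        return "Gold"
    if processors == 8 and memory_mb > 16383:
        return "Gold_Platinum"
    return "tbd"  # catch-all, like the final ELSE
```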
Business Insight Advanced will validate the conditional expression (nice to know if you got it right), create the column called VM Service Level, and populate it based on the query. (If you get an error, your conditional expression probably has a syntax or other error.)
You will see a new column added called VM Service Level with the various service levels for each VM based on the number of CPUs and the memory each has. (At this point there may be duplicates in the list, but not to worry; we are not finished formatting or grouping the report.)
Next, let's add a column that calculates the cost per VM based on the service levels we just established.
Select the Toolbox tab in the lower right corner and double-click the Query Calculation icon.
In the Create Calculation dialog box, name the column Cost Per VM, select Other Expression, and click OK.
In the Data Item Expression dialog box, paste the conditional expression for Cost of VM (below) into the Expression Definition box and select OK.
Example of the conditional expression for Cost per VM:
IF ([VM Service Level] = 'Bronze') THEN (10)
ELSE (IF ([VM Service Level] = 'Bronze_Platinum') THEN (15)
ELSE IF ([VM Service Level] = 'Silver') THEN (20)
ELSE IF ([VM Service Level] = 'Silver_Platinum') THEN (25)
ELSE IF ([VM Service Level] = 'Gold') THEN (40)
ELSE IF ([VM Service Level] = 'Gold_Platinum') THEN (55)
ELSE (30))
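The same mapping in Python makes the catch-all visible: any level the table doesn't know (including 'tbd') is charged the ELSE value of 30.

```python
VM_COST_BY_LEVEL = {
    "Bronze": 10, "Bronze_Platinum": 15,
    "Silver": 20, "Silver_Platinum": 25,
    "Gold": 40, "Gold_Platinum": 55,
}

def cost_per_vm(service_level: str) -> int:
    """Same rates as the conditional expression; 30 is the ELSE catch-all."""
    return VM_COST_BY_LEVEL.get(service_level, 30)
```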
You will see a new column added called Cost Per VM with variable costs for each VM based on its service level.
Next, format the data in the Cost Per VM column as USD currency, as you did above.
ADDING FIXED OVERHEAD COSTS TO YOUR CHARGEBACK REPORT
Let's say the customer has determined that the total cost of overhead (including items like heat/AC, floor space, power, rent, operations personnel, helpdesk, etc.) is $24 per VM. Let's create a column called Cost of Overhead and apply this fixed cost. (Note: you can do this for any fixed cost, without needing SQL.)
Select the Toolbox tab in the lower right corner and double-click the Query Calculation icon.
In the Create Calculation dialog box, name the column Cost of Overhead, select Other Expression, and click OK.
In the Data Item Expression dialog box, enter a cost of 24 in the Expression Definition box and select OK.
You will see a new column added called Cost of Overhead with 24 for each VM. (Note: at this point there may be duplicates in the list, but not to worry; we are not finished formatting or grouping the report.)
Next, format the data in the Cost of Overhead column as USD currency, as you did above. Then drag the column header and drop it to the right of the Storage Cost column as shown below.
Subtotaling, naming, and saving the report
Now that we have a cost per VM, overhead, and the cost of storage usage by tenant, application, and VM, let's sum the total costs and finish formatting the report by tenant and application.
Hold the Control key down and select a numeric cell in each of the Cost Per VM, Storage Cost, and Cost of Overhead columns. Right-click one of the numeric cells, select Calculate, and choose the add function for the three columns.
This creates a new column called "Cost Per VM + Storage Cost + Cost of Overhead" and calculates each row.
Now format the column as USD currency and retitle it "Total Cost of Services".
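Per row, the new column simply sums the three charges. A sketch using the $24 overhead from the previous step (the other figures are illustrative):

```python
def total_cost_of_services(cost_per_vm: float, storage_cost: float,
                           overhead: float = 24.0) -> float:
    """Sum of the three charge columns; overhead defaults to the flat $24/VM."""
    return cost_per_vm + storage_cost + overhead

total = total_cost_of_services(cost_per_vm=20, storage_cost=250.0)  # 294.0
```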
Name the report "Total Storage, VM, and Overhead Cost by Tenant and Application Chargeback (Showback)" by double-clicking the title area.
Now save it to the Customer Report folder using the same name
FORMATTING AND GROUPING THE REPORT BY APPLICATION AND TENANT
We are not done yet. Now we need to format the report by grouping, subtotaling, and totaling by tenant and application.
Hold the Control key down and select the Cost Per VM, Provisioned Capacity, Storage Cost, Cost of Overhead, and Total Cost of Services columns.
Select the Total icon from the Summary dropdown.
If you page down to the bottom of the report, you will see total columns. We'll clean up the summary rows in a minute.
Let's group the report by Tenant and Application:
Hold the Control key down and select the Tenant and Application columns.
Select the Grouping icon from the top toolbar.
CLEANING UP THE REPORT AND RUNNING IT
To clean up the report, right-click and delete the summary ROWS (not columns).
Then go to the bottom of the report, hold the Control key, select both summary rows, then right-click and delete them. (Leave the TOTAL rows.)
Save the report.
Now let's run the report to see how it looks.
Select the Run icon from the toolbar and run the report as HTML. (Note the other formats you can run it in if you want…)
The report will show like this in its final format. I've paged down in the report below to show you subtotals; you can page to the bottom and see the totals by company and the total of all resources charged.
These reports are extremely flexible. Notice the drill-down link in the Tenant column (pictured above in the red circle). If you click the link, you will drill down from Tenant to Line of Business, then to Business Unit, etc. If you right-click the link, you can drill up as well.
You can now schedule this report to run and be distributed in various formats, like any other OnCommand Insight Data Warehouse report.
Remember, now that you have created this report, every time you run it, it will provide the latest usage information. You can automate this report by scheduling it to run and email it to recipients, etc. Lots of flexibility…
7.2 OTHER OPTIONS FOR AD-HOC REPORTS USING QUERY STUDIO
You can also create simple ad-hoc reports using Query Studio. A very simple example is shown here.
Log onto the data warehouse using Admin/admin123 (you must be logged on as Admin to use Query Studio).
From Public Folders, select the Chargeback Datamart.
Select the Launch menu in the upper right corner of the view and select Query Studio.
The Datamart is split into a "simple Datamart" and an "advanced Datamart". The simple DM contains the elements that most users use for reports; the advanced DM contains all the facts and dimensions for all the elements. At this point we'll create this report using the simple DM to show you how easy it is.
Expand the simple DM and do the following:
Click and drag Business Unit to the palette.
Click and drag the Application element to the palette. You see the applications line up with their proper business units.
Click and drag Tier over to the palette to organize the storage usage by tier.
Click and drag "Provisioned Raw (GB)". (You can select megabytes or terabytes as well as gigabytes; I've selected GB because this is from a volume and application perspective.)
To calculate cost, we need to add the "Tier Cost" to the report.
Click and drag the "Tier Cost" element over and place it between the Provisioned Raw and Tier columns.
To filter out any storage without a tier cost associated, right-click the heading of the Tier Cost column and select Filter (see below for reference).
o Select "Show only the following" (default)
o Select "Missing values" to expand it
o Select "Leave out missing values"
o Select OK
See Results below
Now let's calculate the total cost of usage by GB per application.
Hold the Control key and highlight the "Provisioned Capacity" and Tier Cost columns until they show yellow.
Select the green Calculation icon at the top of the edit icons, or right-click the columns and select "Calculate".
In the calculation window, select multiplication, title the new column "Cost for Storage", and click Insert. It creates a new column and completes the calculation.
To format the column, right-click the new column and select Format Data.
Select Currency, the number of decimal places (usually 0), and the 1000s separator, and click OK. See how the column is formatted now.
Double-click the "Title" on the report and re-title the report "Chargeback by Application and BU".
Now you don't really need the Tier Cost column, so you can delete it by right-clicking the column and selecting Delete.
This is a good raw report, but now let's make it more useful.
To group storage cost by business unit and application:
Select the Business Unit column (it turns yellow) and select the Group By icon on the top line.
You see the report reformat itself into cost by application by business unit.
Click the "Save As" icon and save the report to the public folders.
Further editing
You can go back and further edit the report like this:
Let's filter out all the NA and NAs values in the BU and Application columns. You have to do this one column at a time.
Right-click the BU column and select Filter.
In the filter dialog window, select "Do not show the following (NOT)" from the "Condition" dropdown.
Select NA and click OK.
Do the same for the Application column.
Then save the report again.
As you see, you now have a better-quality report.
To exit Query Studio, click the "Return" icon at the top right corner of the screen.
8 SCHEDULING REPORTS FOR DISTRIBUTION
OK, now that the report is saved, let's schedule it for running and distribution. You can schedule all the built-in reports in OnCommand Insight.
Go to the chargeback report we just created (you should be looking at where you saved it…).
Select the Schedule icon on the right-hand side, where you can set the properties.
As you see on the right, you can schedule start and finish dates.
You can send this report just one time by clicking Disable.
Set the schedule options for weekly, daily, monthly, etc. Schedule this report to run and be sent to yourself at 3 p.m. every Tuesday until Feb 1, 2012. As you can see, you can schedule biweekly, several times a week, or several times a day, or you can set it up by month, year, and even by trigger. As you see, lots of options.
There are a lot of options for report format. The default format is HTML, but we can override that default by clicking and choosing from PDF, Excel, XML, CSV, etc.
For delivery, we can email it, save it, or print the report to a specific printer. You can send the report via e-mail to users, distribution lists, etc. We can include a link to the report or attach it directly to the email. NOTE: recipients must be able to log into the OnCommand DWH to access the link.
When you are done, click OK and the schedule is set.
9 ENDING COMMENTS AND FEEDBACK
I hope this lab was of value to you. Your feedback is important to the quality of this lab document. Please provide feedback to Dave Collins at davec@netapp.com.
NetApp provides no representations or warranties regarding the accuracy, reliability, or serviceability of any information or recommendations provided in this publication, or with respect to any results that may be obtained by the use of the information or observance of any recommendations provided herein. The information in this document is distributed AS IS, and the use of this information or the implementation of any recommendations or techniques herein is a customer's responsibility and depends on the customer's ability to evaluate and integrate them into the customer's operational environment. This document and the information contained herein may be used solely in connection with the NetApp products discussed in this document.
© 2012 NetApp, Inc. All rights reserved. No portions of this document may be reproduced without prior written consent of NetApp, Inc. Specifications are subject to change without notice. NetApp, the NetApp logo, Go further, faster, xxx, and xxx are trademarks or registered trademarks of NetApp, Inc. in the United States and/or other countries. All other brands or products are trademarks or registered trademarks of their respective holders and should be treated as such. TR-XXX-XX
https://communities.netapp.com/docs/DOC-14031
BEGIN LAB
2 INVENTORY AND ASSURANCE NAVIGATION AND VIEWS
Let's take a look at the discovery and inventory of storage, switches, VM systems, etc.
Log onto OnCommand Insight by selecting the OnCommand Insight icon on the desktop.
Sign on using Admin/admin123.
From the main screen, select the dropdown icon that looks like a calendar at the top left corner of the screen and select Inventory > Hosts. You should see hosts from the demo database.
You can use this menu to select the Inventory, Assurance, Performance, or Planning categories. OR, to make navigation easier, you can activate the Navigation Pane as follows (I recommend you use the Navigation Pane, as it's easier when you start):
o Select Tools > OnCommand Insight Settings from the menu bar.
o Check the "Use Navigation Pane" box about halfway down the right panel of the settings box and select OK.
o Notice the full menu bar down the left side of the screen. This will make it easier for you to navigate.
2.1 INVENTORY
Expand the Inventory menu by clicking the Inventory button on the Navigation Pane as shown below.
STORAGE ARRAYS
o Select Storage Arrays from that menu. This opens up the MAIN VIEW of storage.
o Select the NetApp array called Chicago.
o As you see, Insight provides a full inventory, including family, model number, serial number, and other elements. USE the scroll bar to see more columns to the right.
o Note the icons across the bottom of the screen. These are Microviews. You can toggle them on and off to show more detail about what you have selected in the main view.
o Cycle through the micro views to get an idea of what is there.
o Select the Internal Volume and Volume Microviews (as shown above).
o You can select the Column Management icon to add and subtract columns from the view. Question: In which microview can you add the DeDupe Savings column?
GROUPING
You can group the information in any of the table views, in main and micro views, by selecting the dropdown in the upper right corner of the table view and selecting a grouping.
o Group the storage devices by model number. This enables you to see all of your arrays grouped by model number.
o Re-group the main view to No Grouping.
Select the SEARCH icon at the top of the main view. In the search window below the main table, type NTAP and see the array selected. You can step through, looking for the next or last occurrence of the word NTAP. Close the search window by clicking the X in the red box on the left.
SWITCHES
Using the same navigation, select Switches from the Inventory menu (no pictures here). View the switches and get the names, IP addresses, model numbers, and firmware in the main view. View the ports, zone members, and all other elements using the micro views. You can use the Inventory menu to provide views of hosts, storage, switches, VMs, etc. Let's look closer at paths.
PATHS
OnCommand Insight correlates the paths. Paths are the logical and physical elements that make up the service path between the application, VM, and/or host and its storage. Paths include Fibre Channel, Cloud, iSCSI, NFS, Block, NAS, etc. Now let's set up the views shown here, with detailed steps below.
Select Paths from the Inventory menu.
Group by Host, then Storage. As you can see here from the Fibre Channel perspective, when we select a path, we're looking at the exact paths between the host and its storage through the redundant fibre connections.
Select the Topology micro view icon from the bottom of the main view.
Use Filter to find host kc_ordev7 by mousing over the top of the Host column until you see a funnel, then typing kc in the Host column's top cell.
Expand kc_ordev7 and the array ports in the main view.
Select the RED violation port of kc_ordev7.
Select the Zoning, Masking, and Port micro views.
In this particular case, you can see that you have one green (good) path across the Fibre Channel, but the other path is blue. See the legend on the right of the topology. WHAT IS NOT CONFIGURED? (Hint: blue and yellow make green.)
ANALYZE PATH VIOLATION
OK, let's analyze the violation.
There are a couple of ways you can analyze violations. You can select the Violation micro view from the bottom icons, or you can simply right-click the RED path for kc_ordev7 in the main view and select "Analyze Violation". This opens the root cause analysis screen below.
o Expand the violation and change information in the lower pane. What is the cause of this violation?
The violation tells you that the number of ports changed from 2 to 1, which went against the Redundancy Policy and caused the violation. If you look down at the last changes that occurred, you can see that masking was removed, which denied access from that host through its HBA
to the FA port on the array and all the way to the volume. To fix the violation, the administrator needs to reverse those changes, and the violation automatically resolves itself.
Close the Analyze Violation screen.
VIRTUAL MACHINES AND DATA STORE
OnCommand Insight gives you a complete inventory of the VM environment, discovered through a data source configured to talk to Virtual Center. Details include all internal configurations and elements of the VMs and ESX hosts, as well as all the information needed to correlate the path from a VM to its storage, with details about how that storage is configured down to the disk, including performance (if you have the OnCommand Insight Perform license installed). We'll discuss performance later in this demo.
Select Datastore from the Inventory menu.
Scroll down and select DS-30 in the main view.
Display the Topology and Virtual Machine micro views to show which VMs are using datastore DS-30.
Toggle through the micro view icons below to show the details of VM and VMDK capacities, the storage, the backend storage in the arrays, and resources. You can see full end-to-end visibility from the host, through a virtualized storage environment, to the backend storage.
Note: You can also select Hosts and Virtual Machines from the Inventory menu and cycle through the microviews, noting the end-to-end visibility.
Don't forget to use the Microview icons.
3 ASSURANCE
3.1 APPLYING POLICIES TO MONITOR CONFIGURATION AND PERFORMANCE
Now that we've gathered the inventory and pulled all this information into the database, let's start to apply policies so we can monitor and get alerts on violations. We'll talk about setting global policies and changes and the new Violations Browser. We'll show how we can analyze performance and talk about port balance violations and disk utilization violations. Initially we set global policies within OnCommand Insight so it can monitor the environment and alert us when something falls outside those policies. There are several policies available.
Select Policy from the top menu bar.
Select Global Policies.
What thresholds can you set from here?
Select Violation Severity from the left menu of the Global Policy window.
What severities can you set for each threshold?
Select Violation Notification. What are the possible violation notification options?
3.2 FIBRE CHANNEL POLICY SETTINGS
We set Fibre Channel policies to determine and keep track of path redundancy. We can set options such as no SPOF (single point of failure) or redundant. We can set the minimum number of ports on the host and the storage, and the maximum number of switch hops. We can set exceptions around different volume types that would not necessarily require redundancy, like BCVs, R1, and R2. We can also set policy exceptions around smaller volumes that wouldn't have redundancy, like EMC gatekeepers.
From the Policy menu on the menu bar, select Fibre Channel Policy.
What type of redundancy can you set from here?
What is the default number of ports?
Volume Type Exceptions: What volume exemptions can you select?
You can set redundancy policies on physical storage behind a virtualizer (Backend Path).
3.3 VIOLATIONS BROWSER
Let's take a look at the Violations Browser. The Violations Browser allows us to see the impact of violations on your business elements in one place, and it helps us manage the violations against all of those global policies you saw above.
From the Assurance menu on the left, select Violations Browser. (Note: At this point you might want to increase your viewing real estate by closing the navigation pane on the left. To close it, go to Tools > OnCommand Insight Settings and uncheck the Navigation Pane box. You can use the same process to turn it back on later.)
Back in the Violations Browser, expand the All Violations explorer to reveal the violation categories. This shows violations like datastore latency, disk utilization, volume and internal volume IOPS and response times, port balance violations, etc. These should look familiar from the global policies we reviewed a few minutes ago.
As you can see, you can look at all the violations piled up here: over 12,000 violations. (NOTE: Don't let this scare you. Usually most violations are caused by events that create multiple violations per event; you fix one event and a bunch of these go away.) We can see detail on each of these violations by performing the following:
Select the Show Violations Impact icon to view all the violations in context by business entity, application, host, virtual machine, datacenter, etc.
Expand the Impacted Business Entity explorer and drill down to Earth Thermal Tracking.
Expand and select Disk Utilization.
Sort the Description column descending. Now you can see Element, Description, Severity, and Violation Type.
Select the top violation element, called Disk DISK-14 of Storage Virtualizer.
In the Impact Details microview, toggle the Host, Virtual Machines, Applications and Business Entities, and Storage icons to view the details of the impact of the violations. Here we see the impact on one application called City Limits, owned by one business entity (Green Corp > Alternate Energy > Geothermal > Earth Thermal Tracking), on one host. However, 10 virtual machines are affected by this violation from one array.
The chart in the Violation Event microview shows the history and trending of the utilization on this one disk over time. From here we can analyze the performance details, as we'll see later in this demo.
REVIEW:
What are the categories that show the impact of violations?
What business entity is impacted by these violations?
What is the utilization of this disk?
Which hosts are being affected by this violation?
Which VMs are being affected by the violation?
We'll do some troubleshooting using these violations and Analyze Performance later.
3.4 PORT BALANCE VIOLATIONS
Let's take a look at port balance violations. These are violations showing imbalance in SAN traffic from hosts, arrays, and switches. They are not performance-related violations.
Using either the navigation pane or the dropdown menu, open Assurance.
Select Port Balance Violations.
Group by Type, then Device.
Expand Hosts and sort the Device column ascending.
Select device Host nj_exch002. Note that this host has a balance index of 81, which represents the difference in distribution of traffic (the load) between the HBAs on this host. Any index over 50 indicates significantly unbalanced ports on a device.
Select the Switch Port Performance microview. Note that over 88% of the traffic distribution is going across one HBA and only 11% across the other. A failure on the heavily used HBA could choke that application. This could indicate that port balancing software is not configured correctly on this server, not configured at all, or not installed.
Now collapse the Hosts and expand the Storage devices.
Select various storage devices and view the traffic distribution in the Switch Port Performance microview to understand the balance across the storage ports.
These port balance violations provide valuable data on how your environment is configured and optimized. They allow you to quickly determine where you need to optimize your configurations based on actual usage. These are balance violations within each device, not necessarily traffic-related performance violations. We'll look at performance a bit later.
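The guide doesn't define how the balance index is calculated, so here is a minimal illustrative sketch that treats it as the spread between each port's share of total traffic. The formula, function name, and traffic numbers are assumptions for illustration, not OnCommand Insight's actual calculation:

```python
def balance_index(port_traffic):
    # Illustrative (assumed) balance index: the percentage-point spread
    # between the busiest and least busy port's share of total traffic.
    # 0 = perfectly balanced; approaching 100 = all traffic on one port.
    total = sum(port_traffic)
    if total == 0 or len(port_traffic) < 2:
        return 0.0
    shares = [100.0 * t / total for t in port_traffic]
    return max(shares) - min(shares)

# Two HBAs carrying 88% and 12% of the load: heavily unbalanced (index 76.0)
print(balance_index([880, 120]))
# Three HBAs sharing the load almost evenly: well balanced (index near 0)
print(balance_index([500, 510, 490]))
```

By this measure, any index over 50 means one port carries far more than its fair share of the load, which matches the rule of thumb the lab states for nj_exch002.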
3.5 DISK UTILIZATION VIOLATIONS
We looked at disk utilization violations in the Violations Browser a few minutes ago because we were alerted to a violation. But if you didn't look at the alert, you can go directly to Disk Utilization Violations here and troubleshoot your issues similarly to how we did in the Violations Browser. The difference is that the Violations Browser breaks down the violations by how they impact your business entities, applications, data centers, etc., so you can troubleshoot by business priorities, while Disk Utilization Violations lets you easily see your most critical utilization issues and troubleshoot from the disk utilization violation back to the hosts. You can also add columns to show applications and business entities if you want.
Let's take a look at how you can use Disk Utilization Violations to quickly identify and drill down to where your issue is.
Select Disk Utilization Violations from the Assurance menu.
Sort the Utilization column descending to bring your heaviest utilization to the top.
Here you see the utilization of each disk that exceeded the disk utilization threshold we set earlier in our global policy. For each violation you see the disk, the array, the hosts that access this disk, the date and time the violation occurred, and the percentage of utilization, as well as IOPS and throughput.
Select the disk with the highest utilization.
Now select the Volume Usage of Disk microview to get details on volume usage and performance.
Sort the Disk IOPS descending and select the top Volume Usage of Disk. Here I see the volume with the highest usage, along with the disk throughput and percentage info by volume and host.
Select the Switch Port Performance microview. I see that my load appears to be balanced (Distribution column) across the storage ports, so that's most likely not a SAN or network configuration issue.
Since this disk did cause a utilization violation, I can identify possible host candidates that may be causing the high utilization on the disk, OR I can see that the disk may have too many volumes carved from it and I may need to spread that load out across more disks.
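The triage above (filter violations against the threshold, sort descending, pick the top offender) is easy to sketch outside the GUI. The records, field names, and threshold value below are hypothetical, not the OnCommand Insight data model:

```python
# Hypothetical disk utilization violation records (illustrative only).
violations = [
    {"disk": "DISK-14", "array": "ARRAY-A", "utilization": 92.5, "iops": 310},
    {"disk": "DISK-03", "array": "ARRAY-B", "utilization": 78.1, "iops": 880},
    {"disk": "DISK-21", "array": "ARRAY-C", "utilization": 85.0, "iops": 120},
]

THRESHOLD = 75.0  # assumed global-policy disk utilization threshold (%)

# Mirror "sort the Utilization column descending": keep violations over the
# threshold and bring the heaviest utilization to the top.
hot_disks = sorted(
    (v for v in violations if v["utilization"] >= THRESHOLD),
    key=lambda v: v["utilization"],
    reverse=True,
)
print(hot_disks[0]["disk"])  # DISK-14, the disk to drill into first
```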
4 PERFORMANCE
OnCommand Insight provides performance information end to end. This is different from the violations we discussed above in that it provides pure performance information for volumes, internal volumes (FlexVol volumes), storage pools (aggregates), and disks. Performance also covers VMs, switches, ESX, Hyper-V, VMDKs, and datastores. From here we can troubleshoot congestion, contention, and bottlenecks; identify heavily used storage pools, volumes, disks, and SAN ports; find possible candidates for physical-to-virtual host virtualization; and optimize your storage and tiering.
From the navigation pane or the dropdown menu, expand the Performance menu.
Here we see that OnCommand Insight collects and shows you storage performance, switch performance, datastore performance, VM performance, and even application performance as it relates to storage performance.
4.1 STORAGE PERFORMANCE
From the Performance menu, select Storage Performance.
Sort the Top Volume IOPS column descending (if not already done).
Select the array called Sym-0000500743… in the main view (it should be near the top).
Use the scroll bars to see more performance info in all windows.
Use the horizontal slide bar in the main view to see the volume response times and IOPS, as well as the disk utilization and IOPS columns (far right). Notice there is no internal volume performance information, because the EMC Symmetrix does not contain any. We'll look at a NetApp array shortly to see internal volume (FlexVol) performance.
Now, in the main view, notice the column called Volume Partial R/W. This indicates there are volumes on that array that are misaligned (we'll see more detail later).
Select the microviews at the bottom to show the details of disk performance and volume performance. Which microviews did you open? (Hint: view below.) Notice this provides detailed throughput, IOPS, and response times at the volume and disk level.
Close the Disk Performance microview.
Select the column customize icon in the header of the Volume Performance microview.
Use the vertical scroll bar to view all the columns that can be added or removed from this report.
Select the Partial R/W and Storage columns and click OK. This adds columns to the Volume Performance report showing you each volume on each array that is misaligned. (Note: You can get a complete list of all your misaligned volumes by selecting all the arrays in the main view above.) Additionally, you can group the volumes by storage to make it easier to view all the misaligned volumes across your entire enterprise by array. (See figure below.)
Now select the Symmetrix-FAST array and toggle the Chart microview on. Here you see OnCommand Insight showing EMC FAST auto-tiering; notice the FAST-T volume performance. You can also see NetApp hybrid aggregates by selecting the NetApp array. We can also chart this performance over time. (Partial read/write indicates volume misalignment.)
OnCommand Insight provides complete end-to-end performance views through virtualized storage. Let's take a look.
Select the storage array called Virtualizer from the main view. (Note: This is a V-Series machine, but OCI provides the same visibility through other virtualizers as well.)
Toggle on the Virtual Machine Performance microview.
Toggle the Backend Volume Performance and Datastore Performance microviews on.
Use the slide bars at the bottom of the microviews to see more of the performance columns in each view. (Whoops… there is a red mark in the Latency column. We will analyze this later.)
I know this is a bit busy, but I wanted to show you that you can get deep performance visibility from the VM, through the datastore, to the frontend virtualizer array, and through the backend volumes. You can also drill performance down to the disks on that backend array, and you can select the Switch Port Performance microview to visualize the performance on the SAN. So you can see very deep performance information from end to end. We will "analyze" these performance views from end to end a bit later.
4.2 SAN PERFORMANCE (SWITCH PORT PERFORMANCE)
Switch performance is the actual performance on the SAN, at the switch.
Select Switch Port Performance from the Performance menu. OnCommand Insight knows whether the switches are connected to arrays or hosts, so it shows you the performance in the context of the host or array instead of from the switch perspective.
Using the dropdown at the top of the table, group the main view by "Connected Device Type then Name".
Using the dropdown next to it, set the timeframe to "Last Week" and hit the refresh icon to the right.
Sort the Distribution column descending (arrow pointing down).
Expand Hosts.
Expand hosts ny_ora1 and exchange_ny1.
If you look at the Value and Distribution columns, you can see how the HBAs are balanced on these hosts. On host ny_ora1 you see three HBAs that are balanced very well. But looking at host exchange_ny1, you see that one of its two HBAs has over 95% of the traffic load on it while the other has less than 5%. So you can see an imbalance of the load across your HBAs. Perhaps the multipath software is not configured correctly, doesn't work, or is installed but not turned on. However, also note that one HBA is 4Gb and the other is 2Gb; the admins may have purposely configured this host's traffic to compensate for the slower HBA…
Select the Port Performance Distribution and Port Performance microviews to view this analysis over time.
Select exchange_ny1 from the main view above. View the performance and distribution of both HBAs. If you select one or the other, the performance and distribution charts change to show you the details of what you've selected. This behavior is the same throughout the OnCommand Insight GUI.
4.3 CANDIDATES FOR HOST VIRTUALIZATION BASED ON ACTUAL PERFORMANCE
This performance view, from the host SAN perspective, shows you which are the busiest servers and which are candidates for virtualization.
Toggle off the two performance charts.
Collapse the expanded columns using the Collapse All Groups icon on top of the main view.
Expand Hosts again. Notice your busiest servers are at the top of the list.
Use the vertical slide bar to go to the bottom of the host list to see your least busy hosts. As you see here, there are many hosts near the bottom that have hardly any traffic. (Note: If you have a virtualization project going on, you can very quickly isolate which physical hosts don't have much traffic to their applications and conduct your due diligence on those applications for possible relocation to the VM environment.)
You can also use the same information to choose which ESX hosts are good candidates to receive those applications, based on how much traffic they are generating on the SAN.
4.4 STORAGE ARRAY PERFORMANCE BASED ON SAN TRAFFIC
We use the same logic and methods to optimize the traffic across the storage ports of the arrays.
Collapse the Hosts section and expand the Storage section.
You can see the busiest arrays at the top.
Expand storage array XP 1024 to see the traffic flow through the storage ports. In this case, over 80% of the traffic is going across two of the six ports on the storage array. Not very well balanced. You can rebalance this traffic, OR, using this information, you can select a lesser-used storage port to provision your NEXT Tier 1 application to. This helps you intelligently provision and optimize your environment using real traffic analysis.
4.5 STORAGE TIERING AND ANALYSIS
Just as you did with the hosts, you can see the storage arrays that are NOT so busy.
Scroll to the bottom of the Storage array list.
There are several expensive Tier 1 Symmetrix and other arrays at the bottom of this list that have very little traffic accessing them. These arrays may have lots of data on them, but nobody's using it. Armed with this information, you could take a look at the application data on these expensive Tier 1 arrays and move the applications to less expensive Tier 2 or Tier 3 arrays, OR archive it. Then you can decommission or repurpose these expensive arrays. (LOTS of ROI potential here.)
4.6 SWITCH ISL TRAFFIC VISIBILITY AND OPTIMIZATION
OnCommand Insight shows you only the ISLs (Inter-Switch Links) under the Switches category.
Collapse the Storage category in the main view and expand the Switch category.
Expand Switch 78 and hcis300.
As we saw with hosts and arrays, we can see exactly how well balanced the traffic is across the ISLs. Switch hcis300 is well balanced, but on Switch 78 we see that 90% of the traffic is going across one switch link and only 9% across the other. If this is a trunk, it is severely out of balance.
We also see which are the busiest and least busy switches. This allows us to balance out (optimize) our environment as well as weed out the least busy switches.
4.7 VIRTUAL MACHINE AND DATASTORE PERFORMANCE: TROUBLESHOOTING END-TO-END PERFORMANCE ISSUES USING "ANALYZE PERFORMANCE"
Let's put all this performance information to good use.
USE CASE: I may have gotten a call from a user complaining that the application on VM-70 is running slow, or I may have received an alert from a threshold being breached. Let's troubleshoot the problem.
Select Virtual Machine Performance.
Select Custom from the Timeframe dropdown menu next to the grouping menu. Enter the dates January 1, 2012 through now.
Then hit the green refresh button next to the dropdown.
Sort the VM Disk Top Latency column descending to get the longest latency at the top.
Here we see that VM-70 does not, in fact, appear to have any performance issues, but we do see very high CPU, memory, and datastore latency on VM-60 and VM-61.
Look at column 2. The common factor between VM-70 (the user complaint) and VM-60 is DS-30.
Open the Datastore Performance microview to validate the high latency time.
Right-click on VM-60 and select Analyze Performance.
This opens an analysis of everything associated with VM-60 and DS-30.
See the tabs across the top of the window. Each of these tabs provides in-depth visibility into performance within each category.
Selecting the Disk tab, I see that although I have a few high "top" utilization values, overall utilization and IOPS are relatively low, so I can rule out a hot-disk issue.
Select the Volumes tab and the Internal Volumes tab. I see there are some relatively high top response times but still very low IOPS, which tells me other factors are affecting response time and the slowness of the application on VM-70.
Select Backend Volumes. We see the storage is virtualized, and we can see the performance on the backend volumes here. I see some possibly higher IOPS, but still no glaring issues in performance.
To make sure I don't have a SAN problem, I select the Switch Performance tab. It shows an imbalance between the 2Gb HBAs on ESX1, where VM-60 and VM-70 reside, and a potential optimization or outage issue, but no gridlock.
Select the Hosts tab. This tab shows me that host ESX1 is the same host that holds VM-60 and VM-70. VM-60 appears to be causing very high CPU and memory usage, which is causing contention with time sharing during disk access, thus creating high disk latency. But the disk IOPS are still very low.
We deduce that VM-60 is probably not sized right for the application that is driving it so hard. This is probably what's causing the disk latency issue, so the chances of a disk issue are slim.
4.8 VM PERFORMANCE
VM Performance helps you troubleshoot the same scenarios. Here you can understand what's going on in the whole environment.
Select Virtual Machine Performance.
Sort the Top Disk Latency column in descending order so the largest latency rises to the top. In this case, VM-61 is chewing up a lot of memory and a lot of CPU time but using low disk IOPS. The VM itself appears to be causing the latency issues.
Select VM-61. You can open a microview and see the VMDK performance as well.
Add the Chart microview.
You can also break it out by volume performance and datastore performance, giving you a more holistic picture of the environment and helping you troubleshoot to resolution.
The takeaway is that you can troubleshoot performance issues from many different angles and go in many different directions to quickly narrow down the problem.
4.9 APPLICATION AND HOST PERFORMANCE
You can add your applications and hosts to any of these performance views to help you understand how performance is affecting your applications. That is important to the business customer. You can drill down and understand where the performance issue is, with visibility from the application all the way to the disks.
Scroll down to ESX1.
Use the horizontal slide bars in the main view and microviews to see performance info.
OnCommand Insight shows you performance from the host perspective all the way back to the storage, but remember it does not use agents on the host, so it cannot show you the details of performance on the host itself.
Review questions:
What is the value of Analyze Performance?
In what areas can we view performance metrics under Analyze Performance?
5 PLANNING TOOLS
5.1 TASK AND ACTION PLANNING AND VALIDATION
OnCommand Insight has two planning tools to help you plan, validate, and monitor changes in your environment. One is a change management tool; the other is a migration tool for switches only.
The change management tool (or "what-if" tool) helps you create tasks, and actions within those tasks, using a wizard. It helps you logically lay out the changes that you need to make, test and validate those changes before you make them, and monitor the progress of changes as you make them. This significantly reduces your risk when making changes, because you can pretest them before you make any actual changes in your environment.
NOTE: Remember, OnCommand Insight is a READ-ONLY tool, so it does not perform any active tasks. Use it in the planning, validating, and execution monitoring of your change management.
Select the Planning menu.
Select Plans to access the tool.
Select the task ID oadmin 01082007 – Replace HBA Clearcase1.
Notice the Actions list for the task. These actions are generated by you to help you logically and accurately list out the tasks.
To add more actions, simply right-click in the action area and select "Add Action".
In the New Action window, scroll down and select the action you want to perform. You can add a description and other parameters, then select OK.
Then you can pre-validate the actions to ensure you know the results of each action BEFORE you actually perform the task. To do this, right-click the task and select Validate Task.
As you see below, OnCommand Insight validates each action against the current configuration in your environment to show what has been completed correctly (GREEN CHECKMARK), what is not completed (BLANK BOX), and what is not completed correctly (RED X).
When you build the action list, OnCommand Insight automatically compares your planned changes to your existing environment and anticipates any future violations that could occur if you made these changes without correcting the planned actions, OR violations that already exist in your environment.
Once you have completed creating your list of action items, you can right-click and validate the actions as many times as you want until they are completed. OnCommand Insight validates every one of these actions. It shows you whether the actions are complete, done wrong, or not completed at all. It gives you a preview of potential issues before you make the changes, thus lowering your risk.
5.2 SWITCH MIGRATION TOOL
The migration tool provides you with instantaneous visibility into all of the environment and business entities that will be affected by a migration to new or updated switches. Say you want to just update the firmware on a switch. What if it goes down in the middle of the upgrade? What does that affect in your environment? Knowing this ahead of time can reduce your risk by giving you the complete picture of who and what will be affected by the interruption.
The migration tool allows you to tell OnCommand Insight which switches you want to upgrade or replace. Because OnCommand Insight knows all the hosts, storage arrays, volumes, business units, and applications that are affected by this change, it can provide you with the current violations as well as the future violations that will occur when the switches are pulled out. This enables you to validate the total impact of the changes you want to make BEFORE you make them, so you can reduce your risk by fixing issues before they occur.
NOTE: Remember, OnCommand Insight is a READ-ONLY tool, so it does not perform any migration tasks. Use it in the planning and execution monitoring of your migration.
Under the Planning menu, select Migrations. This shows you the migration tasks already created and the existing impact of proposed changes on your business entities.
To add a new task, right-click in the task area and select Add Task.
Complete the task details above and click Next to select the switch(es) to migrate.
Select the switches to be updated or replaced and click Finish.
Select the new task in the main screen and use the microviews to view the affected paths, impact, and quality assurance.
Using this information, you can speed up switch migrations, because it cuts the due diligence time and lowers your risk: you know the impacts before you take any action.
6 DATA WAREHOUSE
6.1 INTRODUCTION AND OVERVIEW
Let's introduce you to the data warehouse. We'll talk about the datamarts and navigation, then we'll go into the reports, and we'll finish by showing you how to create ad-hoc reports using Query Studio.
The data warehouse is made up of several datamarts. Datamarts are sets of data that relate to each other.
Open a browser and go to http://localhost:8080/reporting
Log on using admin/admin123.
If you receive this page, uncheck the "show this page…" option and select My Home.
Data warehouse (DWH) Home Page Public Folders
The data warehouse has several built-in datamarts. Above you see the three primary datamarts: the Chargeback Datamart, the Inventory Datamart, and the Storage Efficiency Datamart. Additionally, there are two folders which contain other datamarts for Capacity and Performance.
Select the Capacity 6.3 folder.
As you can see, there are other capacity-related datamarts here, including the Internal Volume, Volume, Storage and Storage Pool, and VM Capacity datamarts. Datamarts provide you with easy-to-use data elements related to those specific categories, making it easier for you to use the existing reports and, more importantly, helping you create your own custom reports using the drag-and-drop technology we'll show later in this lab.
Select the Storage Capacity Datamart.
There are four folders located within EVERY datamart. Most built-in reports are in the Reports folder. Any custom reports you create MUST be saved in the Customer Reports or Customer Dashboards folders so they are preserved during upgrades.
Select Dashboards (notice the BREADCRUMBS to help you navigate).
Which dashboards are located in the folder?
6.2 PLAN - CAPACITY FORECAST DASHBOARD
The data warehouse has over 200 built-in dashboards and reports. Let's take a look at a few.
The Capacity Forecast dashboard provides a history of how storage has been used, as well as trends and forecasts into the future, broken down by datacenter and by tier.
Select the Capacity Dashboard. This may take a bit of time to paint, so be patient.
The Capacity Forecast dashboard provides you with trending and forecasting of your capacity across your entire environment. (NOTE: Your data may vary from the picture depending on the demo database you are using and the date, because it's a trending chart.)
While we are at it, let's also stage the Tiering Dashboard in a new window, by holding the Shift key and selecting the Tiering Dashboard, so we can discuss it as well in a few minutes.
When it first opens, you see in the upper left the Capacity Consumption Forecast report by datacenter and tier. The initial view shows how much storage is left in each datacenter, by tier, before it reaches 80% of capacity (adjustable by the user). The graph on the right depicts the usage trending and forecasting over time. The "Reset Selection" button resets the graphic to show storage trending across the entire enterprise.
Select the TokyoGold-Fast block on the matrix. Notice the graph at the right changes to reflect the storage consumption trending and forecasting for that tier at that datacenter.
Reset the Storage Capacity Trend chart by clicking Reset Selection next to the matrix chart. The chart on the right will show the trending and forecasting for the entire enterprise.
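The dashboard's "storage left before reaching 80% of capacity" view is essentially a trend extrapolation. As a rough sketch of the idea: the sample data, the linear model, and the 80% threshold below are assumptions for illustration; the product's actual forecasting method isn't documented here.

```python
# Hypothetical monthly used-capacity samples (TB) for one datacenter/tier.
samples = [(0, 40.0), (1, 42.5), (2, 45.0), (3, 47.5), (4, 50.0)]  # (month, TB used)
raw_capacity_tb = 100.0
threshold = 0.80 * raw_capacity_tb  # dashboard default: 80%, user-adjustable

# Least-squares linear trend: used = slope * month + intercept
n = len(samples)
sx = sum(m for m, _ in samples)
sy = sum(u for _, u in samples)
sxx = sum(m * m for m, _ in samples)
sxy = sum(m * u for m, u in samples)
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
intercept = (sy - slope * sx) / n

# Months until the trend line crosses the 80% threshold
months_to_threshold = (threshold - intercept) / slope
print(round(months_to_threshold, 1))  # 16.0 for this made-up data
```

With a steady growth of 2.5 TB/month from a 40 TB starting point, the trend line crosses 80 TB after 16 months, which is the kind of number the forecast matrix summarizes per datacenter and tier.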
Scroll down the dashboard to view the list of reports on the right side. Each of the dashboards has a list of related reports on the lower right-hand side. You can select from any number of different reports to get the detailed information that you need.
The dashboard also contains some dial graphics showing you storage consumption and capacity in your enterprise and in each datacenter.
Continuing down the left side of the dashboard, these charts show you business-level storage consumption by business entities. Here we can drill down to see usage by tenant, line of business, business unit, and project.
Right-click in this graphic and you can drill down to view storage usage by line of business, drill again to business unit, and then by project.
As you can see, you get very detailed information on consumption by your business entities, from tenant, LOB, business unit, and project down to application, very quickly.
6.3 TIER DASHBOARD
Let's take a look at the Tier Dashboard that we opened a few minutes ago, by selecting it from the tabs at the bottom of your Windows screen.
Note: Your data may vary depending on the database used for this demo.
This dashboard gives us a different perspective on how storage is growing and how it is being used. As you see, it looks like the Gold tier has remained relatively stable over the past few months, while Gold-Fast storage, which is more expensive, has grown considerably over the past couple of months. This tells you how your tiering initiatives are progressing. Bronze, which has hardly grown at all, could be an indication that we're spending too much money on storage. You might want to review your storage usage using OnCommand Insight to see how the storage is being consumed and by whom.
Scroll down. Let's look a little closer. OnCommand Insight shows storage usage by business units, applications, and tier. This enables you to understand how storage is being used. You can also view it by datacenter, tier, and business entity.
As we did in the last report, you can right-click and drill down to look at consumption by tenant, line of business, business unit, project, and application. You can understand how your data is being consumed at multiple levels and from multiple aspects.
Select the "Return" icon at the top right of the Tier Dashboard to return to the folder.
There is a new storage tier report located in the Storage and Storage Pool Datamart. Let's take a quick look at it.
Use the breadcrumbs to navigate back to the Capacity 6.3 folder.
Then select the Storage and Storage Pool Capacity Datamart and the Reports folder.
Next, select the Storage Capacity by Tier report to view the report below. This report shows your capacity by tier and how it trends over time. It also provides a detail and summary report at the bottom showing each array, its tiers, how much capacity is used, and the percentage (lots of information in a single report).
6.4 ACCOUNTABILITY AND COST AWARENESS
The standard data warehouse chargeback reports are more about accountability than actual chargeback. We'll show you this now. We'll also show you how to create your own powerful "custom" chargeback/showback reports using Business Insight Advanced later in this lab.
Select Public Folders in the breadcrumbs at the top left of the Data Warehouse window.
Select the Chargeback Datamart.
In the Chargeback Datamart, select the Reports folder to access various reports that show capacity and accountability.
Select Capacity Accountability by Business Entity and Service Level Detail. Here you have the option to customize this report to your needs by selecting service levels, resource types, applications, and host and storage names. You also have the option of selecting the business entity by using the dropdown to select any or all of the business entities and projects.
Select All in each category to give you a good representation of the in-depth reporting. Then click Finish.
The report provides a very detailed view of capacity utilization by business entity and application, along with the host it's running on, the storage array, the volume, and the actual provisioned and used storage. This report is grouped by business unit as well as application, which gives you a good representation of who's using what storage.
Note the scroll bar for scrolling on page 1; you can also use the Page Up/Page Down links at the bottom to go to page 2, and so on.
Select the Return Icon in the upper right to return to the folder of reports
6.5 UNCHARGED STORAGE
You can also generate reports that help you understand what storage is NOT being accounted for.
Select "Capacity Accountability by Uncharged Capacity per Internal Volume." This provides you with a complete listing, by array and volume, of how much storage is not being charged or accounted for.
You get FULL accountability of which storage is being accounted for, and which is NOT, across the entire enterprise, regardless of storage vendor.
6.6 IOPS VS CAPACITY REPORTING IN THE DATA WAREHOUSE
Let's look at performance versus capacity, and at orphaned storage by last access. This adds another dimension to how your storage is being used.
Open the Performance Datamart (hint: use the breadcrumbs to select Public Folders and then select the Performance Datamart).
Select the Internal Volume Daily Performance folder. This provides a really good pictorial view of how your storage is being used.
Select Reports and select Allocated used internal volume Count by IOPS Ranges. This provides a capacity-versus-IOPS report, which is very interesting.
Select the Last Year time period.
Select All storage models and tiers and click Finish.
Selecting all arrays and all tiers gives you a full view of how your storage is being used (or not being used…).
Looking at the results, remember this is storage access over the past year. The resulting report shows you all the storage that has (or has not) been accessed in that time.
As you can see from the first bar, there are over 7,300 volumes that have not been accessed in the past year. If we look at it in terms of size, over 3.4 PB has had zero access in the past year. Note: this is actual customer data, but the names have been sanitized.
You can see how impactful this is: over 3.4 PB of storage has had zero use for a year. This information enables you to start making business decisions about that storage and to better understand how it's being used, so you can reclaim and repurpose some of it. (Talk about ROI!)
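To make the idea concrete, here is a minimal Python sketch of the kind of count-by-IOPS-range rollup this report performs. The volume records, field names, and range boundaries are illustrative assumptions, not the Datamart's actual schema.

```python
# Sketch of a "count by IOPS range" rollup over per-volume records.
# All names, values, and range boundaries are illustrative only.
from collections import defaultdict

volumes = [
    {"name": "vol01", "capacity_gb": 500, "iops_last_year": 0},
    {"name": "vol02", "capacity_gb": 1200, "iops_last_year": 35},
    {"name": "vol03", "capacity_gb": 250, "iops_last_year": 4000},
]

ranges = [(0, 0, "0 (never accessed)"), (1, 100, "1-100"), (101, float("inf"), ">100")]

def bucket(iops):
    """Return the label of the IOPS range this volume falls into."""
    for lo, hi, label in ranges:
        if lo <= iops <= hi:
            return label

# Accumulate volume count and capacity per range, as the report does.
summary = defaultdict(lambda: {"count": 0, "capacity_gb": 0})
for v in volumes:
    b = bucket(v["iops_last_year"])
    summary[b]["count"] += 1
    summary[b]["capacity_gb"] += v["capacity_gb"]

for label, s in summary.items():
    print(label, s["count"], "volumes,", s["capacity_gb"], "GB")
```

The first bucket is the interesting one: volumes with zero access are the reclaim candidates the lab calls out.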
6.7 DIGGING INTO THE DETAILS
These charts are really nice, but you need the detail to work effectively on identification and recovery. OK, let's go look at the underlying details.
Go back to the Volume Daily Performance 6.3 folder and drill down to Reports (hint: it's in the Performance Datamart).
Select the Array Performance report. This gives you a complete breakdown of the performance for all storage, from the arrays all the way down to the volumes.
Select one year and set the IOPS parameter you want to filter on (I usually start with the default).
This report starts with the Orphan Summary
Page down to view the storage array summary.
As you can see, this is pretty high-level. It shows the total amount of raw and allocated capacity in each storage device versus the total IOPS and the max IOPS actually used over the past year. This tells a very compelling story, but it's still high level.
Page down a few pages to reach the bottom of this section. You'll see a glossary of terms explaining the column headings.
Now continue to page down to the Host tables. These show you the hostname, the raw and allocated capacity by host, and the IOPS accessed over the past year. This is more detail than the Storage tables above.
Page down past the Host tables to look at the orphaned-volumes perspective. Here is a great deal of detail that you can use. These are all the volumes that have not been accessed in a full year. It shows you the array name, volume capacities, and hostname, as well as the applications and tiers, for everything that has not been accessed in the last year.
Page down to the "Volume by IOPS" tables (they may be several pages down). These show you the storage array, volume, capacity, host, application, tier, and the max and total IOPS. So we can say it's a pretty well-rounded report that shows you actual usage (or lack thereof) so you can go reclaim the storage that is not used.
6.8 VM CAPACITY REPORTING
There are several different reports in the VM Capacity Datamart.
Navigate to the VM Capacity 6.3 Datamart.
As you can see, we have several reports built in here already.
Select VM Capacity 6.3 and then navigate into the Reports folder.
Select VM Capacity Summary
Select All so we see the VM capacity across the entire enterprise (spanning multiple vCenters).
The results show all the VMs, their capacity, the datastore, the actual capacity, the VM names, the provisioned storage, and the commit ratio of each VM across your entire environment. NOTE: I paged down to the bottom so you can see the total storage and commitment across your whole enterprise, plus a glossary of terms.
Select the "Return" button in the upper right corner of the report (it looks like a left-turn arrow).
Next, select the Inactive VMs report to show VMs that have not been accessed in a defined period of time (default 60 days).
Set this time threshold and click Finish
This is an excellent report showing you which VMs are powered off, how long they have been powered off, and how much capacity each one is holding that nobody else can use. It gives you all the details, including the data center, VM, OS, ESX host, cluster, VMDK, and how long it's been powered off. Armed with this information, you can go recover these VMs and reclaim the storage.
7 CREATE AD-HOC REPORT
Let's show you how easy it is to create custom reports in the data warehouse.
7.1 HOW TO CREATE A CUSTOM SHOWBACK/CHARGEBACK REPORT USING
BUSINESS INSIGHT ADVANCED
Below is a great example of the custom chargeback or showback report that you will create. It shows usage by business entity and application, including the variable cost of each VM based on configuration, fixed overhead, and storage usage.
STEPS TO CREATE THIS REPORT
Watch a video on how to create this report. Note: You need a user name and password for this community. To obtain them, click the Become a Member link.
The OnCommand Insight Reporting Portal is accessed through http://&lt;reporting-server&gt;:8080/reporting
Enter User name and Password credentials
From the Welcome page select My home
From the Launch menu (at the top right corner of the OnCommand Insight Reporting
portal) select Business Insight Advanced
From the list of all packages that appears, click on the Capacity &lt;version&gt; folder and then click on VM Capacity &lt;version&gt;.
Create a new report by selecting New from the dropdown in the upper left corner or
Create New if you are on the Business Insight Advanced landing page
From the predefined report layouts in the New pop-up, choose List and click OK.
In the lower right pane select the Source tab and expand Advanced Data Mart
from the VM Capacity package
From the Advanced Data Mart expand Business Entity Hierarchy and Business
Entity and drag Tenant and place it on the report work area
Collapse Advanced Data Mart and expand Simple Data Mart
From Simple Data Mart, drag Application and place it on the report work area to the right of the Tenant column. (TIP: Make sure you place it on the blinking gray BAR to the right of the previous column, or it will give you an error.)
Now we are going to drag multiple columns to the palette to save time building the report.
We will be reporting on the total number of processors (cores) and the memory configured for each VM. So let's grab the following elements from the VM Dimension under the Advanced Data Mart.
From Advanced Data Mart expand VM Dimension
Select the next columns IN THE FOLLOWING ORDER
From Advanced Data MartgtVM Dimension hold the control key and select the
following columns (in order)
o VM Name
o Processors
o Memory
Click and drag VM Name and place it on the report work area to the right of the Application column. NOTE: All the columns should follow in the order you selected them, similar to the screenshot below (your data will differ, but the columns will be the same).
Now let's bring capacity information onto the report.
From Simple Data Mart hold the control key and select the following columns (in
order)
o Tier
o Tier Cost
o Provisioned Capacity (GB)
Click and drag the Tier column and place it on the report work area to the right of the Application column. NOTE: All the columns should follow in the order you selected them, similar to the screenshot below (your data will differ, but the columns will be the same).
To create a summary of cost per GB, hold the Control key and select the Tier Cost and Provisioned Capacity (GB) columns.
Then right-click the Provisioned Capacity column, select Calculate, and select the multiplication calculation.
Business Insight Advanced has created a new column for you, completed the calculations, and put it in the report.
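Under the hood, the new column is just a row-wise product of the two selected columns. A minimal Python sketch of the math, with illustrative field names and values (not the Datamart's schema):

```python
# Row-wise storage cost: tier cost (per GB) x provisioned capacity (GB).
# Field names and values are illustrative, not the Datamart's schema.
rows = [
    {"tenant": "Acme", "app": "ERP", "tier_cost_per_gb": 0.50, "provisioned_gb": 200},
    {"tenant": "Acme", "app": "CRM", "tier_cost_per_gb": 1.25, "provisioned_gb": 100},
]

for row in rows:
    # This is the multiplication the Calculate step performs per row.
    row["storage_cost"] = row["tier_cost_per_gb"] * row["provisioned_gb"]

print([r["storage_cost"] for r in rows])  # [100.0, 125.0]
```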
Next, let's format and re-title the column.
Right click on the new column head and select Show Properties
In the lower right corner, scroll down to the bottom of the Properties box and select the ellipsis on the Data Item Name box. Change the name to Storage Cost and click OK.
Note the column heading is now Storage Cost
Now select one of the numeric values in that column and select Data Format
ellipsis from the properties box in the lower right corner
From the Data Format dialog box select currency from the Format type dropdown
As you can see from the Properties dialog box, there are lots of options you can set to format the currency numbers in this column. The default is USD, so let's just click OK to accept it. You will see the column reformat to USD.
Here is our current report. Let's filter out storage that is NOT being charged.
Select any BLANK cell in the Tier Cost Column and click on the filter ICON in the
top toolbar
Select Exclude Null
Here is our current report. Notice that all the rows that had NO cost associated with those tiers are removed, leaving you with only the storage that has charges associated with it. (TIP: in another report you can reverse the logic and show only storage that is NOT being charged as well…)
You can format the Tier Cost column as USD currency as well if you want.
OK, that was easy, but not complete. Let's add other cost factors into your chargeback report: the cost of VM service levels by configuration, and the fixed overhead costs used by each application.
ADDING VARIABLE COSTS PER VM TO YOUR CHARGEBACK REPORT
Let's say the customer wants to charge per VM based on the number of CPUs and the memory it's configured with. To do that, we first need to create a VM Service Level based on the number of CPUs and the memory configured for each VM, and then allocate a cost per service level.
To create a VM Service Level, we are going to drop in a small conditional expression to build the service levels per VM. This is an easy example of the flexibility of Business Insight Advanced in creating reports. (DON'T panic: you can skip the conditional expression and just put a fixed cost on each VM if you want; see the Overhead example later on… but humor me here in this lab.)
Select the Tier column to mark where we want to insert the new columns.
Select the Toolbox tab in the lower right corner and double-click the Query Calculation ICON.
In the Create Calculation Dialog box name the column VM Service Level and
select Other Expression and click OK
In the Data Item Expression dialog box, copy and paste the following VM Service Level conditional expression into the Expression Definition box and select OK. (Note: if you are remoted into the OnCommand Insight server, you may have to create a text document on the OnCommand server desktop to paste this into before pasting it into the Expression box.)
Below is an example of the conditional expression that gives you the if-else logic for VM Service Level:
IF ([Processors] = 2 AND [Memory] &lt; 2049)
THEN ('Bronze')
ELSE (IF ([Processors] = 2 AND [Memory] &lt; 4097)
THEN ('Bronze_Platinum')
ELSE (IF ([Processors] = 4 AND [Memory] &lt; 8193)
THEN ('Silver')
ELSE (IF ([Processors] = 4 AND [Memory] &gt; 8193)
THEN ('Silver_Platinum')
ELSE (IF ([Processors] = 6 AND [Memory] &gt; 8191)
THEN ('Gold')
ELSE (IF ([Processors] = 8 AND [Memory] &gt; 16383)
THEN ('Gold_Platinum')
ELSE ('tbd'))))))
Business Insight Advanced will validate the conditional expression (nice to know if you got it right), create the column called VM Service Level, and populate it based on the query. (If you get an error, your conditional expression probably has a syntax or other error.)
You will see a new column added called VM Service Level, with the various service levels for each VM based on the number of CPUs and the memory each has. (At this point there may be duplicates in the list, but not to worry; we are not finished formatting or grouping the report.)
Next, let's add a column that calculates the cost per VM based on the service levels we just established.
Select the Toolbox tab in the lower right corner and double-click the Query Calculation ICON.
In the Create Calculation Dialog box name the column Cost Per VM and select
Other Expression and click OK
In the Data Item Expression dialog box, paste the conditional expression for Cost of VM (below) into the Expression Definition box and select OK.
Example of conditional expression for Cost per VM
IF ([VM Service Level] = 'Bronze') THEN (10)
ELSE (IF ([VM Service Level] = 'Bronze_Platinum') THEN (15)
ELSE (IF ([VM Service Level] = 'Silver') THEN (20)
ELSE (IF ([VM Service Level] = 'Silver_Platinum') THEN (25)
ELSE (IF ([VM Service Level] = 'Gold') THEN (40)
ELSE (IF ([VM Service Level] = 'Gold_Platinum') THEN (55)
ELSE (30))))))
You will see a new column added called Cost Per VM with variable costs for each
VM based on the Service Level
Next format the data in the Cost per VM column to USD currency as you did above
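Taken together, the two expressions implement a simple two-step lookup: classify each VM by CPU count and memory, then map the class to a rate. Here is the same logic as a Python sketch; the thresholds and rates are copied from the expressions above, while the function names and the memory-in-MB assumption are ours:

```python
# Classify a VM into a service level by CPU count and memory (assumed MB),
# then map the level to a rate -- mirroring the two report expressions.
RATES = {"Bronze": 10, "Bronze_Platinum": 15, "Silver": 20,
         "Silver_Platinum": 25, "Gold": 40, "Gold_Platinum": 55}

def service_level(processors, memory_mb):
    if processors == 2 and memory_mb < 2049:
        return "Bronze"
    if processors == 2 and memory_mb < 4097:
        return "Bronze_Platinum"
    if processors == 4 and memory_mb < 8193:
        return "Silver"
    if processors == 4 and memory_mb > 8193:
        return "Silver_Platinum"
    if processors == 6 and memory_mb > 8191:
        return "Gold"
    if processors == 8 and memory_mb > 16383:
        return "Gold_Platinum"
    return "tbd"  # falls outside every rule, like the expression's ELSE

def cost_per_vm(processors, memory_mb):
    # 30 is the catch-all rate from the Cost of VM expression's final ELSE.
    return RATES.get(service_level(processors, memory_mb), 30)

print(service_level(4, 4096), cost_per_vm(4, 4096))  # Silver 20
```

Note that, exactly as in the report expression, some configurations (for example 4 CPUs with memory equal to 8193) fall through to "tbd" and get the default rate.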
ADDING FIXED OVERHEAD COSTS TO YOUR CHARGEBACK REPORT
Let's say the customer has determined that the total cost for overhead (including items like heat/AC, floor space, power, rent, operations personnel, helpdesk, and so on) is $24 per VM. Let's create a column called Cost of Overhead and apply this fixed cost. (Note: you can do this for any fixed costs, rather than using SQL, as well.)
Select the Toolbox tab in the lower right corner and double-click the Query Calculation ICON as above.
In the Create Calculation Dialog box name the column Cost of Overhead and
select Other Expression and click OK
In the Data Item Expression dialog box enter a cost of 24 in the Expression
Definition box and select OK
You will see a new column added called Cost of Overhead, with 24 for each VM. (Note: at this point there may be duplicates in the list, but not to worry; we are not finished formatting or grouping the report.)
Next format the data in the Cost of Overhead column to USD currency as you did
above Then drag the column header and drop it to the right of the Storage Cost
column as shown below
Subtotaling, naming, and saving the report
Now that we have the cost per VM, the overhead, and the cost of storage usage by tenant, application, and VM, let's sum the total costs and finish formatting the report by tenant and application.
Hold the Control key down and select a numeric cell in each of the Cost per VM, Storage Costs, and Cost of Overhead columns. Right-click one of the numeric cells, select Calculate, and choose the add function for the three columns.
This will create a new column called "Cost per VM + Storage Costs + Cost of Overhead" and calculate each row.
Now format the column for USD currency and retitle the column to "Total Cost of Services."
Name the report "Total Storage, VM, and Overhead Cost by Tenant and Application Chargeback (Showback)" by double-clicking the title area.
Now save it to the Customer Report folder using the same name
FORMATTING AND GROUPING THE REPORT BY APPLICATION AND TENANT
We are not done yet. Now we need to format the report by grouping, subtotaling, and totaling by tenant and application.
Hold the Control key down and select the "Cost per VM," "Provisioned Capacity," "Storage Costs," "Cost of Overhead," and "Total Cost of Services" columns.
Select the Total ICON from the Summary dropdown.
If you page down to the bottom of the report, you will see the totals. We'll clean up the summary rows in a minute.
Let's group the report by Tenant and Application.
Hold the control key down and select Tenant and Application columns
Select the Grouping ICON from the top toolbar
CLEANING UP THE REPORT AND RUNNING IT
To clean up the report, right-click and delete the summary ROWS (not columns).
Then go to the bottom of the report, hold the Control key and select both summary rows, then right-click and delete them. (Leave the TOTAL rows.)
Save the report
Now letrsquos run the report to see how it looks
Select the Run ICON from the toolbar and run the report as HTML (note the other formats you can run it in if you want…).
The report will display like this in its final format. I've paged down in the report below to show you subtotals, and you can page to the bottom to see the totals by company and the total of all resources charged.
These reports are extremely flexible to do what you need. Notice the drill-down link in the Tenant column (pictured above in the red circle). If you click on the LINK, you will drill down from Tenant to Line of Business, then to Business Unit, and so on. If you right-click on the link, you can drill up as well.
You can now schedule this report to run and distribute in various formats like any other OnCommand Insight Data Warehouse report
Remember, now that you have created this report, every time you run it, it will provide the latest usage information. You can automate this report by scheduling it to run and emailing it to recipients. Lots of flexibility…
7.2 OTHER OPTIONS FOR AD-HOC REPORTS USING QUERY STUDIO
You can also create simple Ad-hoc reports by using Query Studio A very simple example is shown here
Log on to the data warehouse using Admin/admin123 (you must be logged on as Admin to use Query Studio).
From Public folders select the Chargeback Datamart
Select the Launch menu in the upper right corner of the view and select Query Studio.
The Datamart is divided into a "simple Datamart" and an "advanced Datamart." The Simple DM contains the elements that most users use for reports; the Advanced DM contains all the facts and dimensions for all the elements. At this point we'll create this report using the Simple DM to show you how easy it is.
Expand the Simple DM and do the following
Click and drag the Business Unit to the palette.
Click and drag the Application element to the palette. You'll see the applications line up with their proper business units.
Click and drag Tier over to the palette to organize the storage usage by tier.
Click and drag "Provisioned Raw by GB." (You can select megabytes or terabytes as well as gigabytes; I've selected GB because this is from a volume and application perspective.)
To calculate cost, we need to add the "Tier Cost" to the report.
Click and drag the "Tier Cost" element over and place it between the Provisioned Raw and the Tier columns.
To filter out any storage without an associated tier cost, right-click the heading of the Tier Cost column and select Filter (see below for reference)
o Select "Show only following" (default)
o Select "Missing values" to expand it
o Select "Leave out missing values"
o Select OK
See Results below
Now let's calculate the total cost of usage by GB per application.
Hold the Control key and highlight the "Provisioned Capacity" and "Tier Cost" columns until they show yellow.
Select the green Calculation ICON at the top of the edit icons, or right-click on the columns and select "Calculate."
In the calculation window, select multiplication, title the new column "Cost for Storage," and click Insert. It creates a new column and completes the calculation.
To format the column right click on the new column and select Format Data
Select currency, the number of decimal places (usually 0), and the thousands separator, then click OK. See how the column is formatted now.
Double-click the "Title" on the report and re-title the report "Chargeback by Application and BU."
Now, you don't really need the Tier Cost column, so you can delete it by right-clicking the column and selecting Delete.
This is a good raw report, but now let's make it more useful.
To group storage cost by Business Unit and Application
Select the Business Unit column (turns yellow) and select the Group by ICON on the top
line
You'll see the report reformat itself into cost by application by business unit.
Click the "Save As" icon and save the report to the public folders.
Further Editing
You can go back and further edit the report like this
Let's filter out all the N/A entries in the BU and Application columns. You have to do this one column at a time.
Right click the BU column and select filter
In the filter dialog window, select "Do not show the following (NOT)" from the "Condition" dropdown.
Select NA and click OK
Do the same for the Application column
Then Save the report again
As you can see, you now have a better-quality report.
To exit Query Studio, click the "Return" icon at the top right corner of the screen.
8 SCHEDULING REPORTS FOR DISTRIBUTION
OK, now that the report is saved, let's schedule it for running and distribution. You can schedule any of the built-in reports in OnCommand Insight the same way.
Go to the chargeback report we just created (you should be looking at the folder where you saved it…).
Select the schedule ICON on the right-hand side where you can set the properties
As you can see on the right, you can schedule the start and finish dates.
You can send this report just one time by clicking Disable.
Set the schedule options for weekly, daily, monthly, and so on. Schedule this report to run and be sent to yourself at 3 p.m. every Tuesday until Feb 1, 2012. As you can see, you can schedule biweekly, several times a week, or several times a day, or you can set it up by month, by year, and even by trigger. Lots of options.
There are a lot of options for report format. The default is HTML, but we can override it by clicking and choosing from PDF, Excel, XML, CSV, and so on.
For delivery, we can email it, save it, or print the report to a specific printer. You can send the report via e-mail to users, distribution lists, and so on. We can include a link to the report or attach it directly to the email as well. NOTE: Recipients must be able to log into the OnCommand DWH to access the link.
When you are done click OK and the schedule is set
9 ENDING COMMENTS AND FEEDBACK
I hope this lab was of value to you. Your feedback is important to the quality of this lab document. Please provide feedback to Dave Collins at davec@netapp.com.
NetApp provides no representations or warranties regarding the accuracy, reliability, or serviceability of any information or recommendations provided in this publication, or with respect to any results that may be obtained by the use of the information or observance of any recommendations provided herein. The information in this document is distributed AS IS, and the use of this information or the implementation of any recommendations or techniques herein is a customer's responsibility and depends on the customer's ability to evaluate and integrate them into the customer's operational environment. This document and the information contained herein may be used solely in connection with the NetApp products discussed in this document.
© 2012 NetApp, Inc. All rights reserved. No portions of this document may be reproduced without prior written consent of NetApp, Inc. Specifications are subject to change without notice. NetApp, the NetApp logo, Go further, faster, xxx, and xxx are trademarks or registered trademarks of NetApp, Inc. in the United States and/or other countries. All other brands or products are trademarks or registered trademarks of their respective holders and should be treated as such. TR-XXX-XX
2.1 INVENTORY
Expand the Inventory menu by clicking on the Inventory Button on the Navigation Pane as shown below
STORAGE ARRAYS
o Select Storage Arrays from that menu. This opens the MAIN VIEW of storage.
o Select the NetApp array called Chicago.
o As you can see, Insight provides a full inventory, including family, model number, serial number, and other elements. USE the scroll bar to see more columns to the right.
o Note the icons across the bottom of the screen. These are Microviews. You can toggle them on and off to see more detail about what you have selected in the main view.
o Cycle through the micro views to get an idea of what is there.
o Select the Internal Volume and Volume Microviews (as shown above).
o You can select the Column Management ICON to add and subtract columns from the view. Question: In which microview can you add the DeDupe Savings column?
GROUPING
You can group the information in any of the table views in main and micro views by selecting the dropdown in the upper right corner of the table view and selecting a grouping
o Group the storage devices by model number. This enables you to see all of your arrays grouped by model number.
o Re-group the main view to No Grouping.
Select the SEARCH icon at the top of the Main View. In the search window below the main table, type NTAP and see the array selected. You can step through the next or previous occurrences of the word NTAP. Close the search window by clicking the X in the red box on the left.
SWITCHES
Using the same navigation, select Switches from the Inventory menu (no pictures here). View the switches and get the names, IP addresses, model numbers, and firmware in the main view. View the ports, zone members, and all other elements using the micro views. You can use the Inventory menu to provide views of hosts, storage, switches, VMs, and so on. Let's look closer at paths.
PATHS
OnCommand Insight then correlates the paths. The paths are the logical and physical elements that make up the service path between the application, VM, and/or host and its storage. Paths include Fibre Channel, cloud, iSCSI, NFS, block, NAS, and so on. Now let's set up the views shown here, with detailed steps below.
Select Paths from the Inventory menu
Group by Host, then Storage. As you can see here, from the Fibre Channel perspective, when we select a path we're looking at the exact paths between the host and its storage through the redundant fibre connections.
Select Topology micro view icon from the bottom of the Main view
Use Filter to find host kc_ordev7 by mousing over the top of the Host column until you see a funnel. Then type kc in the top cell of the Host column.
Expand kc_ordev7 and the array ports in the main view.
Select the RED Violation port of kc_ordev7
Select the Zoning, Masking, and Port micro views.
In this particular case, you can see that you have one green path across the Fibre Channel, which is good, but the other path is blue. See the legend on the right of the topology. WHAT IS NOT CONFIGURED? (Hint: blue and yellow make green.)
ANALYZE PATH VIOLATION
OK, let's analyze the violation.
There are a couple of ways to analyze the violations. You can select the Violation micro view from the bottom icons, or you can simply right-click the red path for kc_ordev7 in the main view and select "Analyze Violation." This opens the root-cause analysis screen below.
o Expand the violation and change information in the lower pane. What is the cause of this violation?
The violation tells you that the number of ports in this change went from 2 to 1, which went against the redundancy policy and caused the violation. If you look down at the last changes that occurred, you can see that masking was removed, which denied access from that host, through its HBA, to the FA port on the array and all the way to the volume. To fix the violation, the administrator needs to reverse those changes, and the violation automatically resolves itself.
Close the Analyze Violation screen
VIRTUAL MACHINES AND DATA STORE
OnCommand Insight gives you a complete inventory of the VM environment. This is discovered through a data source configured to talk with Virtual Center. Details include all internal configurations and elements of the VMs and ESX hosts, as well as all the information needed to correlate the path from each VM to its storage, and details about how that storage is configured down to the disk, including performance (if you have the OnCommand Insight Perform license installed). We'll discuss performance later in this demo.
Select Datastore from the Inventory menu
Scroll down the main view and select DS-30 in the Main view
Display the Topology and Virtual Machine micro views to show which VMs are using Datastore DS-30
Toggle through the micro view icons below to show the details of VM and VMDK capacities, the storage, and the backend storage in the arrays and resources. You can see full end-to-end visibility from the host, through a virtualized storage environment, to the backend storage.
Note: You can also select Hosts and Virtual Machines from the Inventory menu and cycle through the microviews, noting the end-to-end visibility.
Don't forget to use the Microview ICONS.
3 ASSURANCE
3.1 APPLYING POLICIES TO MONITOR CONFIGURATION AND PERFORMANCE
Now that we've gathered the inventory and pulled all this information into the database, let's start to apply policies so we can monitor violations and get alerts on them. We'll talk about setting global policies, changes, and the new Violations Browser. We'll show how we can analyze performance, and talk about some port balance violations and disk utilization violations. Initially, we set global policies within OnCommand Insight so it can monitor the environment and alert us when something falls outside those policies. There are several policies available.
Select Policy from the top menu bar
Select Global Policies.
What thresholds can you set from here?
Select Violation Severity from the left menu of the global policy window
What severities can you set for each threshold?
Select Violation Notification. What are the possible violation notification options?
32 FIBRE CHANNEL POLICY SETTINGS
We set Fibre Channel policies to define and keep track of path redundancy. We can set options such as no SPOF (single point of failure) or fully redundant. We can set the minimum number of ports on the host and the storage, and the maximum number of switch hops. We can set exceptions for volume types that would not necessarily require redundancy, like BCVs, R1s, and R2s. We can also set policy exceptions for smaller volumes that wouldn't have redundancy, like EMC gatekeepers.
From the Policy menu on the menu bar, select Fibre Channel Policy.
What type of redundancy can you set from here?
What is the default number of ports?
Volume Type Exceptions
What volume exemptions can you select?
You can set redundancy policies on physical storage behind a virtualizer (Backend Path)
33 VIOLATIONS BROWSER
Let's take a look at the Violations Browser. The Violations Browser allows us to see the impact of the violations on your business elements in one place, and helps us manage the violations on all of those global policies you saw above.
From the Assurance menu on the left, select Violations Browser. (Note: At this point you might want to increase your viewing real estate by closing the navigation pane on the left. To close it, go to Tools > OnCommand Insight Settings and uncheck the Navigation Pane box. You can use the same process to turn it back on later.)
Back in the Violations Browser, expand the All Violations explorer to reveal the violation categories. This shows violations like datastore latency, disk utilization, volume and internal volume IOPS and response times, port balance violations, and so on. These should look familiar to you from the global policies that we just reviewed a few minutes ago.
Here you can look at all the violations piled up in one place: over 12,000 of them. (Note: Don't let this scare you. Usually most violations are caused by events that create multiple violations per event; you fix one event and a bunch of these go away.) We can see detail on each of these violations by performing the following:
Select the show violations impact icon to view all the violations in context by business entity, application, host, virtual machine, datacenter, etc.
Expand the Impacted Business Entity explorer and drill down to Earth Thermal Tracking
Expand and select Disk Utilization
Sort the Description column descending. Now you can see Element, Description, Severity, and Violation Type.
Select the top violation element called Disk DISK-14 of Storage Virtualizer
In the Impact Details microview, toggle the Hosts, Virtual Machines, Applications and Business Entities, and Storage icons to view the details of the impact of the violations. Here we see the impact on one application called City Limits, owned by one business entity called Green Corp > Alternate Energy > Geothermal > Earth Thermal Tracking, on one host. However, 10 virtual machines are affected by this violation from one array.
The chart in the Violation Event microview shows the history and trending of the utilization on this one disk over time. From here we can analyze the performance details, as we'll see later in this demo.
REVIEW
What are the categories that show the impact of violations?
What business entity is impacted by these violations?
What is the utilization of this disk?
Which hosts are being affected by this violation?
Which VMs are being affected by the violation?
We'll do some troubleshooting using these violations and Analyze Performance later.
34 PORT BALANCE VIOLATIONS
Let's take a look at port balance violations. These are violations showing imbalance of SAN traffic from hosts, arrays, and switches. These are not performance-related violations.
Using either the Navigation Pane or dropdown menu Open Assurance
Select Port Balance Violations
Group by Type then Device
Expand Hosts and sort the Device column ascending
Select host nj_exch002. Note that this host has a balance index of 81; that is the difference in distribution of traffic (the load) between the HBAs on this host. Any index over 50 indicates significantly unbalanced ports on a device.
Select the Switch Port Performance microview. Note that over 88% of the traffic distribution is going across one HBA, with only 11% of the traffic going across the other. A failure on the heavily used HBA could choke that application. This could indicate that port balancing software is misconfigured on this server, not configured at all, or not installed.
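To make the balance index concrete, here is a minimal Python sketch of how such an index could be computed from per-port traffic. This is an illustrative assumption for the lab, not OnCommand Insight's actual formula.

```python
# Hypothetical balance-index sketch: 0 means perfectly even traffic;
# values over 50 indicate a significantly unbalanced device.
def balance_index(port_loads):
    """Return the spread between the busiest and least-busy port,
    expressed as a percentage of total traffic."""
    total = sum(port_loads)
    if total == 0:
        return 0.0
    shares = [load / total * 100 for load in port_loads]
    return max(shares) - min(shares)

# Two HBAs carrying roughly 88% and 11% of the traffic, as on nj_exch002:
print(round(balance_index([88, 11])))  # prints 78, well over the 50 threshold
```

The exact number differs from the 81 shown in the GUI because OCI's internal algorithm is not published; the point is that a large spread between port shares flags an unbalanced device.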
Now collapse the Hosts and expand the Storage devices
Select various storage devices and view the traffic distribution in the switch port performance microview to understand the balance across the storage ports
These port balance violations provide valuable data on how your environment is configured and optimized. They allow you to quickly determine where you need to optimize your configurations based on actual usage. These are balance violations within each device, not necessarily traffic-related performance violations. We'll look at performance a bit later.
35 DISK UTILIZATION VIOLATIONS
We looked at Disk Utilization Violations in the Violations Browser a few minutes ago because we were alerted to a violation. But even if you didn't look at the alert, you can go directly to Disk Utilization Violations here and troubleshoot your issues, similar to how we did in the Violations Browser. The difference is that the Violations Browser breaks down the violations by how they impact your business entities, applications, data centers, and so on, so you can troubleshoot by business priority, while Disk Utilization Violations lets you easily see your most critical utilization issues and troubleshoot from the disk utilization violation back to the hosts. You can also add columns to show applications and business entities if you want.
Let's take a look at how you can use Disk Utilization Violations to quickly identify and drill down to where your issue is.
Select Disk Utilization Violations from the Assurance Menu
Sort the Utilization column descending to bring your heaviest utilization to the top
Here you see the utilization of each disk that exceeded the Disk Utilization threshold we set earlier in our global policy. For each violation you see the disk array, the hosts that access this disk, the date and time the violation occurred, and the percentage of utilization, as well as IOPS and throughput.
Select the disk with the highest utilization
Now select the Volume Usage of Disk microview to get details on volume usage and performance.
Sort the Disk IOPS descending and select the top Volume Usage of Disk entry. Here I see the volume with the highest usage, along with the disk throughput and percentage info by volume and host.
Select the Switch Port Performance microview. I see that my load appears to be balanced (Distribution column) across the storage ports, so that's most likely not a SAN or network configuration issue.
Since this disk did cause a utilization violation, I can identify the host candidates that may be causing the high utilization on the disk, OR I can see that the disk may have too many volumes carved from it and I may need to spread that load out across more disks.
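The triage we just walked through can be sketched in a few lines of Python. All disk names and numbers here are invented examples; the threshold stands in for the value set in the global policy.

```python
# Illustrative sketch (not OCI's API): flag disks whose utilization exceeds
# a global-policy threshold and sort the worst first, mirroring the
# "sort Utilization descending" step above.
THRESHOLD = 75.0  # assumed policy value, in percent

disks = [
    {"disk": "DISK-14", "utilization": 92.3, "iops": 410},
    {"disk": "DISK-02", "utilization": 61.0, "iops": 120},
    {"disk": "DISK-07", "utilization": 88.1, "iops": 305},
]

violations = sorted(
    (d for d in disks if d["utilization"] > THRESHOLD),
    key=lambda d: d["utilization"],
    reverse=True,
)
for d in violations:
    print(d["disk"], d["utilization"])
```

Sorting the violations worst-first is exactly why the lab has you sort the Utilization column descending: the disks most in need of attention rise to the top.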
4 PERFORMANCE
OnCommand Insight provides performance information end to end. This is different from the violations we discussed above in that it provides pure performance information for volumes, internal volumes (FlexVols), storage pools (aggregates), and disks. It also provides performance of VMs, switches, ESX, Hyper-V, VMDKs, and datastores. From here we can troubleshoot congestion, contention, and bottlenecks; identify heavily used storage pools, volumes, disks, and SAN ports; find possible candidates for physical-to-virtual host virtualization; and optimize your storage and tiering.
From the Navigation Pane or the dropdown menu expand the Performance menu
Here we see that OnCommand Insight collects and shows you storage performance, switch performance, datastore performance, VM performance, and even application performance as it relates to storage performance.
41 STORAGE PERFORMANCE
From the Performance menu select Storage Performance
Sort Top Volume IOPS column descending (if not already done)
Select the array called Sym-0000500743… in the main view (it should be near the top).
Use Scroll Bars to see more performance info in all windows
Use the horizontal slide bar in the main view to see the volume response times and IOPS, as well as the disk utilization and IOPS columns (far right). Notice there is no internal volume performance information, because the EMC Symmetrix does not contain any. We'll look at a NetApp array shortly to see internal volume (FlexVol) performance.
Now in the main view, notice the column called Volume Partial R/W. This indicates there are volumes on that array that are misaligned (we'll see more detail later).
Select the microviews at the bottom to show the details of disk performance and volume performance. Which microviews did you open? (Hint: view below.) Notice this provides detailed throughput, IOPS, and response times at the volume and disk level.
Close the Disk Performance microview
Select the column customize icon in the header of the Volume Performance microview.
Use the vertical scroll bar to view all the columns that can be added to or removed from this report.
Select the Partial R/W and Storage columns and click OK. This adds columns to the Volume Performance report showing you each volume on each array that is misaligned. (Note: you can get a complete list of all your misaligned volumes by selecting all the arrays in the main view above.) Additionally, you can group the volumes by storage to make it easier to view all the misaligned volumes across your entire enterprise by array. (See figure below.)
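The "group by storage" view described above boils down to collecting misaligned volumes per array. Here is a hedged sketch of that idea; the field names and values are invented for illustration, not pulled from OCI.

```python
# Hypothetical sketch: group volumes by array and flag those with a nonzero
# partial read/write rate, which is how the Partial R/W column surfaces
# misalignment. Record fields are assumptions for this example.
from collections import defaultdict

volumes = [
    {"storage": "Sym-000050074", "volume": "vol_012", "partial_rw": 35.2},
    {"storage": "Sym-000050074", "volume": "vol_044", "partial_rw": 0.0},
    {"storage": "Virtualizer",   "volume": "vol_101", "partial_rw": 12.7},
]

misaligned = defaultdict(list)
for v in volumes:
    if v["partial_rw"] > 0:
        misaligned[v["storage"]].append(v["volume"])

for array, vols in misaligned.items():
    print(array, vols)
```

This mirrors the enterprise-wide misalignment list you get by selecting all arrays and grouping the Volume Performance microview by storage.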
Now select the Symmetrix-FAST array and toggle the Chart microview on. Here you see OnCommand Insight showing EMC FAST auto-tiering. You can also see NetApp hybrid aggregates by selecting the corresponding NetApp array. We can also chart this performance over time.
OnCommand Insight provides complete end-to-end performance views through virtualized storage. Let's take a look.
Select the storage array called Virtualizer from the main view. (Note: This is a V-Series machine, but OCI provides the same visibility through other virtualizers as well.)
Toggle on the Virtual Machine Performance microview
Toggle the Backend Volume Performance and Datastore Performance microviews on.
Use the slide bars at the bottom of the microviews to see more of the performance columns in each view. (Whoops… there is a red mark in the Latency column. We will analyze this later.)
I know this is a bit busy, but I wanted to demonstrate that you have deep performance visibility from the VM through the datastore to the frontend virtualizer array and through the backend volumes. You can also drill performance down to the disks on that backend array, and you can select the Switch Port Performance microview to visualize the performance on the SAN. So you can see very deep performance information from end to end. We will "analyze" these performance views from end to end a bit later.
42 SAN PERFORMANCE (SWITCH PORT PERFORMANCE)
Switch performance is the actual performance on the SAN at the switch
Select Switch Port Performance from the Performance menu. OnCommand Insight knows whether the switches are connected to arrays or hosts, so it shows you the performance in the context of the host or array instead of from the switch perspective.
Using the dropdown at the top of the table, group the main view by "Connected Device Type then Name".
Using the dropdown next to it, set the timeframe to "Last Week" and hit the refresh icon to the right.
Sort the Distribution column Descending (arrow pointing down)
Expand Hosts
Expand hosts ny_ora1 and exchange_ny1
If you look at the Value and Distribution columns, you can see how the HBAs are balanced on these hosts. On host ny_ora1 you see three HBAs that are balanced very well. But looking at host exchange_ny1, you see that one of your two HBAs has over 95% of the traffic load on it while the other has less than 5%. So you can see an imbalance of the load across your HBAs. Perhaps the multipath software is not configured correctly, doesn't work, or is installed but not turned on. However, also note that one HBA is 4 Gb and the other is 2 Gb; the admins may have purposely configured this host's traffic to compensate for the slower HBA…
Select the Port Performance Distribution and Port Performance microviews to view this analysis over time.
Select exchange_ny1 from the main view above. View the performance and distribution of both HBAs. If you select one or the other, the performance and distribution charts change to show you the details of what you've selected. This behavior is the same throughout the OnCommand Insight GUI.
43 CANDIDATES FOR HOST VIRTUALIZATION BASED ON ACTUAL PERFORMANCE
This view of performance from the host SAN perspective shows you which servers are busiest and which are candidates for virtualization.
Toggle off the 2 performance charts
Collapse the expanded columns using the Collapse All Groups icon on top of the main view.
Expand Hosts again. Notice your busiest servers are at the top of the list.
Use the vertical slide bar to go to the bottom of the host list to see your least busy hosts. As you can see, there are many hosts near the bottom that have hardly any traffic. Note: If you have a virtualization project going on, you can very quickly isolate which physical hosts don't drive much traffic to their applications and conduct your due diligence on those applications for possible relocation to the VM environment.
You can also use the same information here to choose which ESX hosts are good candidates to move those applications to, based on how much traffic they are generating on the SAN.
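The candidate-picking logic above amounts to filtering hosts whose SAN traffic falls below a cutoff. Here is a minimal sketch; the host names, traffic numbers, and cutoff are all invented for illustration.

```python
# Sketch only: pick physical hosts whose average SAN traffic falls below a
# cutoff as physical-to-virtual candidates. Values are assumed examples.
CUTOFF_MBPS = 5.0

host_traffic = {"ny_ora1": 210.4, "exchange_ny1": 95.8,
                "legacy_web01": 1.2, "legacy_app07": 0.6}

candidates = sorted(h for h, mbps in host_traffic.items()
                    if mbps < CUTOFF_MBPS)
print(candidates)  # ['legacy_app07', 'legacy_web01']
```

In the GUI you get the same effect by sorting hosts by traffic and reading from the bottom of the list.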
44 STORAGE ARRAY PERFORMANCE BASED ON SAN TRAFFIC
We use the same logic and methods to optimize the traffic across the storage ports of the arrays.
Collapse the Hosts section and expand the Storage section
You can see the busiest arrays at the top
Expand storage array XP 1024 to see the traffic flow through the storage ports. In this case over 80% of the traffic is going across two of the six ports on the storage array. Not very well balanced. You can rebalance this traffic, OR, using this information, you can select a lesser-used storage port to provision your NEXT Tier 1 application to. This helps you intelligently provision and optimize your environment using real traffic analysis.
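Picking the lesser-used port for the next provisioning task is a simple minimum over observed port loads. A hedged sketch, with port names and loads invented for this example:

```python
# Illustrative only: choose the least-loaded storage port for the next
# Tier 1 provisioning request, per the optimization described above.
port_load = {"FA-1A": 42.0, "FA-2A": 38.5, "FA-3A": 3.1,
             "FA-4A": 2.9, "FA-5A": 8.0, "FA-6A": 5.5}

target = min(port_load, key=port_load.get)
print(target)  # FA-4A
```

OCI gives you the observed loads; the provisioning decision itself is yours.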
45 STORAGE TIERING AND ANALYSIS
Just as you did with the hosts, you can see the storage arrays that are NOT so busy.
Scroll to the bottom of the Storage Array list
There are several expensive Tier 1 Symmetrix and other arrays at the bottom of this list that have very little traffic. These arrays may have lots of data on them, but nobody's using it. Armed with this information, you could take a look at the application data on these expensive Tier 1 arrays and move the applications to less expensive Tier 2 or Tier 3 arrays, OR archive the data. Then you can decommission or repurpose these expensive arrays (LOTS of ROI potential here).
46 SWITCH ISL TRAFFIC VISIBILITY AND OPTIMIZATION
OnCommand Insight shows you only the ISLs (Inter-Switch Links) under the Switches category.
Collapse the Storage category in the main view and expand the Switch category
Expand Switch 78 and hcis300
As we saw with hosts and arrays, we can see exactly how well balanced the traffic is across the ISLs. Switch hcis300 is well balanced, but on Switch 78 we see that 90% of the traffic is going across one switch link and only 9% across the other. If this is a trunk, it is severely out of balance.
We also see which switches are the busiest and least busy. This allows us to balance out (optimize) our environment as well as weed out the least busy switches.
47 VIRTUAL MACHINE AND DATA STORE PERFORMANCE: TROUBLESHOOTING END-TO-END PERFORMANCE ISSUES USING "ANALYZE PERFORMANCE"
Letrsquos put all this performance information to good use
USE CASE: I may have gotten a call from a user complaining that the application on VM-70 is running slow, or I may have received an alert from a threshold being breached. Let's troubleshoot the problem.
Select Virtual Machine Performance
Select Custom from the "Timeframe" dropdown menu next to the grouping menu. Enter the dates January 1, 2012 through now.
Then hit the green refresh button next to the dropdown.
Sort the VM Disk Top Latency column descending to get the longest latency at the top
Here we see that VM-70 does not, in fact, appear to have any performance issues, but we do see very high CPU, memory, and datastore latency on VM-60 and VM-61.
Look at column 2. The common factor between VM-70 (the user complaint) and VM-60 is DS-30.
Open the Datastore Performance microview to validate the high latency time
Right click on VM-60 and select Analyze Performance
This opens an analysis of everything associated with VM-60 and DS-30.
See the tabs across the top of the window. Each of these tabs provides in-depth visibility into performance within its category.
Selecting the Disk tab, I see that although I have a few high "top" utilization values, overall utilization and IOPS are relatively low, so I can rule out a hot-disk issue.
Select the Volumes tab and the Internal Volumes tab. I see there are some relatively high top response times but still very low IOPS, which tells me other factors are affecting response time and the slowness of the application on VM-70.
Select the Backend Volumes tab. We see the storage is virtualized, and we can see the performance of the backend volumes here. I see some possibly higher IOPS but still no glaring performance issues.
To make sure I don't have a SAN problem, I select the Switch Performance tab. It shows an imbalance between the 2 Gb HBAs on ESX1 (where VM-60 and VM-70 live) and a potential optimization or outage issue, but no gridlock.
Select the Hosts tab. This tab shows me that host ESX1 is the same host that holds VM-60 and VM-70. VM-60 appears to be causing very high CPU and memory usage, which is causing contention with time sharing during disk access, thus creating high disk latency. But the disk IOPS are still very low.
Deduce that VM-60 is probably not sized right for the application that is driving it so hard. This is probably what's causing the disk latency issue, so the chances of an actual disk problem are slim.
48 VM PERFORMANCE
VM Performance helps you troubleshoot the same scenarios. Here you can understand what's going on in the whole environment.
Select Virtual Machine Performance
Sort the Top Disk Latency column in descending order so the largest latency rises to the top. In this case VM-61 is chewing up a lot of memory and a lot of CPU time but driving low disk IOPS. The VM itself appears to be causing the latency issues.
Select VM-61. You can open a microview and see the VMDK performance as well.
Add the Chart microview.
You can also break this out by volume performance and datastore performance, giving you a more holistic picture of the environment and helping you troubleshoot to resolution.
The takeaway is that you can troubleshoot performance issues from many different angles and go in many different directions to quickly narrow down the problem.
49 APPLICATION AND HOST PERFORMANCE
You can add your applications and hosts to any of these performance views to help you understand how performance is affecting your applications; that is what matters to the business customer. You can drill down and understand where the performance issue is, with visibility from the application all the way to the disks.
Scroll down to ESX1
Use the horizontal slide bars in the main and microviews to see performance info
OnCommand Insight shows you performance from the host perspective all the way back to the storage, but remember it does not have agents on the host, so it cannot show you the details of performance on the host itself.
Review questions:
What is the value of Analyze Performance?
What are the areas where we can view performance metrics under Analyze Performance?
5 PLANNING TOOLS
51 TASK AND ACTION PLANNING AND VALIDATION
OnCommand Insight has two planning tools to help you plan, validate, and monitor changes in your environment. One is a change management tool and the other is a migration tool for switches only.
The change management tool (or what-if tool) helps you create tasks, and actions within those tasks, using a wizard. It helps you logically lay out the changes you need to make, test and validate those changes before you make them, and monitor the progress of changes as you make them. This significantly reduces your risk when making changes, because you can pretest them before you make any actual changes in your environment.
NOTE: Remember, OnCommand Insight is a READ-ONLY tool, so it does not perform any active tasks. Use it for planning, validating, and monitoring the execution of your change management.
Select Planning Menu
Select Plans to access the tool
Select the task ID oadmin 01082007 – Replace HBA Clearcase1.
Notice the Actions list for the task. These actions are generated by you to help you logically and accurately list out the tasks.
To add more actions, simply right-click in the action area and select "Add Action".
In the new action window, scroll down and select the action you want to perform. You can add a description and other parameters, then select OK.
Then you can pre-validate the actions to ensure you know the results of each action BEFORE you actually perform the task. To do this, right-click the task and select Validate Task.
As you see below, OnCommand Insight validates each action against the current configuration in your environment to show what has been completed correctly (GREEN CHECKMARK), what is not completed (BLANK BOX), and what was not completed correctly (RED X).
When you build the action list, OnCommand Insight automatically compares your planned changes to your existing environment and anticipates any future violations that could occur if you made these changes without correcting the planned actions, OR violations that already exist in your environment.
Once you finish creating your list of action items, you can right-click and validate the actions as many times as you want until they are completed. OnCommand Insight validates every one of these actions; it shows you whether each action is complete, done incorrectly, or not completed at all. It gives you a preview of potential issues before you make the changes, thus lowering your risk.
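Conceptually, the validation pass compares each planned action to the discovered configuration and classifies it. This is a hedged sketch of that idea only; OCI does this internally, and all hosts, attributes, and values here are invented.

```python
# Conceptual sketch of task validation: classify each planned action as
# completed (green checkmark), not completed (blank box), or completed
# incorrectly (red X). Data is hypothetical.
current_config = {"clearcase1": {"hba": "HBA-4Gb-new", "zoned_to": "FA-3A"}}

planned = [
    {"host": "clearcase1", "attr": "hba", "expected": "HBA-4Gb-new"},
    {"host": "clearcase1", "attr": "zoned_to", "expected": "FA-5A"},
    {"host": "clearcase1", "attr": "masking", "expected": "vol_22"},
]

def validate(action):
    actual = current_config.get(action["host"], {}).get(action["attr"])
    if actual is None:
        return "not completed"          # blank box
    if actual == action["expected"]:
        return "completed"              # green checkmark
    return "completed incorrectly"      # red X

results = [validate(a) for a in planned]
print(results)
```

Re-running the validation after each real-world change is the loop the lab describes: validate as many times as you like until every action shows complete.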
52 SWITCH MIGRATION TOOL
The migration tool provides you with instantaneous visibility into all of the environment and business entities that will be affected by a migration to new or updated switches. Say you want to just update the firmware on a switch. What if it goes down in the middle of the upgrade? What would that affect in your environment? Knowing this ahead of time reduces your risk by giving you the complete picture of who and what will be affected by the interruption.
The migration tool allows you to tell OnCommand Insight which switches you want to upgrade or replace. Because OnCommand Insight knows all the hosts, storage arrays, volumes, business units, and applications that are affected by this change, it can provide you with the current violations as well as the future violations that will occur when the switches are pulled out. This enables you to validate the total impact of the changes you want to make BEFORE you make them, so you can reduce your risk by fixing issues before they occur.
NOTE: Remember, OnCommand Insight is a READ-ONLY tool, so it does not perform any migration tasks. Use it for planning and monitoring the execution of your migration.
Under the Planning menu, select Migrations. This shows you the migration tasks already created and the existing impact of the proposed changes on your business entities.
To add a new task, right-click in the task area and select Add Task.
Complete the task details above and click Next to select the switch(es) to migrate.
Select the switches to be updated or replaced and click Finish
Select the new task in the main screen and use the microviews to see the affected paths, impact, and quality assurance views.
Using this information, you can speed up your switch migrations because it cuts the due diligence time, and you lower your risk because you know the impacts before you take any action.
6 DATA WAREHOUSE
61 INTRODUCTION AND OVERVIEW
Let's introduce you to the data warehouse. We'll talk about the datamarts and navigation, then we'll go into the reports, and we'll finish by showing you how to create ad-hoc reports using Query Studio.
The data warehouse is made up of several datamarts. Datamarts are sets of data that relate to each other.
Open a browser and go to http://localhost:8080/reporting
Log on using admin/admin123
If you receive this page, uncheck the "show this page…" option and select My Home.
Data warehouse (DWH) Home Page Public Folders
The data warehouse has several built-in datamarts. Above you see the three primary datamarts: the Chargeback Datamart, the Inventory Datamart, and the Storage Efficiency Datamart. Additionally, we have two folders which contain other datamarts for Capacity and Performance.
Select the Capacity 63 folder
As you can see, there are other capacity-related datamarts, including the Internal Volume, Volume, Storage and Storage Pool, and VM Capacity datamarts. Datamarts provide you with easy-to-use data elements related to those specific categories, making it easier for you to use the existing reports but, more importantly, helping you create your own custom reports using the drag-and-drop technology we'll show later in this lab.
Select the Storage Capacity Datamart
There are four folders located within EVERY datamart. Most built-in reports are in the Reports folder. Any custom reports you create MUST be saved in the Customer Report or Customer Dashboard folders in order to preserve them during upgrades.
Select Dashboards (notice the BREADCRUMBS to help you navigate)
Which dashboards are located in the folder?
62 PLAN - CAPACITY FORECAST DASHBOARD
The data warehouse has over 200 built-in dashboards and reports. Let's take a look at a few.
The Capacity Forecast dashboard provides a history of how storage has been used, as well as trends and forecasts into the future. It breaks this down by data center and by tier.
Select the Capacity dashboard. This may take a bit of time to paint, so be patient.
The Capacity Forecast dashboard provides you with trending and forecasting of your capacity across your entire environment. NOTE: your data may differ from the picture depending on the demo DB you are using and the date (because it's a trending chart).
While we are at it, let's also stage the Tiering dashboard in a new window by holding the Shift key and selecting the Tiering Dashboard, so we can discuss it as well in a few minutes.
When it first opens, you see in the upper left the Capacity Consumption Forecast report by datacenter and tier. The initial view shows how much storage is left in each datacenter, by tier, before it reaches 80% (adjustable by the user) of capacity. The graph on the right depicts the usage trending and forecasting over time. The "Reset Selection" button resets the graphic to show storage trending across the entire enterprise.
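The forecast behind this dashboard boils down to fitting a trend to historical usage and projecting when it crosses the threshold. Here is a minimal sketch of that idea using a simple linear fit; OCI's real model is internal, so treat this as an assumption for intuition only.

```python
# Hedged sketch: fit a least-squares line to daily used capacity and
# estimate the days until usage reaches threshold * capacity.
def days_until_threshold(history_tb, capacity_tb, threshold=0.80):
    """history_tb: used TB per day, oldest first. Returns days from the
    last sample until usage hits threshold * capacity_tb, or None if
    usage is flat or shrinking."""
    n = len(history_tb)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(history_tb) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, history_tb))
             / sum((x - mean_x) ** 2 for x in xs))
    if slope <= 0:
        return None
    target = threshold * capacity_tb
    return max(0.0, (target - history_tb[-1]) / slope)

# Growing 1 TB/day toward an 80 TB threshold on a 100 TB tier:
print(days_until_threshold([70, 71, 72, 73, 74], 100))  # 6.0
```

This is the same question the matrix answers per datacenter and tier: how long until each block hits its configurable capacity threshold.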
Select the Tokyo Gold-Fast block on the matrix. Notice the graph at the right changes to reflect the storage consumption trending and forecasting for that tier at that datacenter.
Reset the Storage Capacity Trend chart by clicking Reset Selection next to the matrix chart. The chart on the right will show the trending and forecasting for the entire enterprise.
Scroll down the dashboard to view the list of reports on the right side. Each of the dashboards has a list of related reports on the lower right-hand side. You can select from any number of different reports to provide the detailed information that you need.
The dashboard also contains some dial graphics showing you storage consumption and capacity in your enterprise and each datacenter
Continuing down the left side of the dashboard, these charts show you business-level storage consumption by business entity. Here we can drill down to see usage by tenant, line of business, business unit, and project.
Right-click in this graphic and you can drill down to view storage usage by line of business, drill again to business unit, and again by project.
As you can see, you get really detailed information on consumption by your business entities, from tenant, LOB, business unit, and project down to application, in a very quick form.
63 TIER DASHBOARD
Let's take a look at the Tier dashboard that we opened a few minutes ago by selecting it from the tabs at the bottom of your Windows screen.
Note: Your data may vary depending on the database used for this demo.
This dashboard gives us a different perspective on how storage is growing and how it is being used. As you see, it looks like the Gold tier has remained relatively stable over the past few months, while Gold-Fast storage, which is more expensive, has grown considerably over the past couple of months. This tells you how your tiering initiatives are progressing. Bronze, which has hardly grown at all, could be an indication that we're spending too much money on storage. You might want to review your storage usage using OnCommand Insight to see how the storage is being consumed and by whom.
Scroll down. Let's look a little closer. OnCommand Insight shows storage usage by business unit, application, and tier. This enables you to understand how storage is being used. You can also break it out by data center, tier, and business entity.
As we did in the last report, you can right-click, drill down, and look at consumption by tenant, line of business, business unit, project, and application. You can understand how your data is being consumed at multiple levels and from multiple aspects.
Select the "Return" icon at the top right of the Tier Dashboard to return to the folder.
There is a new storage tier report located in the Storage and Storage Pool Datamart. Let's take a quick look at it.
Use the breadcrumbs to navigate back to the Capacity 63 folder.
Then select the Storage and Storage Pool Capacity Datamart, then the Reports folder.
Next select the Storage Capacity By Tier Report to view the report below This report shows your capacity by tier and how it trends over time It also provides a great detail and summary report at the bottom showing each Array tiers and how much capacity is used and the percentage (lots of information on a single report)
6.4 ACCOUNTABILITY AND COST AWARENESS
The standard data warehouse chargeback reports are more about accountability than about chargeback itself. We'll show you this now. We'll also show you how to create your own powerful "custom" chargeback/showback reports using Business Insight Advanced later in this lab.
Select Public Folders in the breadcrumbs at the top left of the Data Warehouse window.
Select the Chargeback Datamart.
In the Chargeback Datamart, select the Reports folder to access various reports that show capacity and accountability.
Select Capacity Accountability by Business Entity and Service Level Detail. Here you have the option to customize this report to your needs by selecting service levels, resource types, applications, and host and storage names. You also have the option of selecting the business entity by using the dropdown to select any or all of the business entities and projects.
Select All in each category to give you a good representation of the in-depth reporting, then click Finish.
The report provides a very detailed view of capacity utilization by business entity and application: the host it's running on, the storage array, the volume, and the actual provisioned and used storage. This report is grouped by business unit as well as by application, which gives you a good representation of who's using what storage.
Note the scroll bar for scrolling on page 1; you can also use the Page Up/Page Down links at the bottom to go to page 2, etc.
Select the Return icon in the upper right to return to the folder of reports.
6.5 UNCHARGED STORAGE
You can also generate reports that help you understand what storage is NOT being accounted for.
Select "Capacity Accountability by Uncharged Capacity per Internal Volume". This provides you with a complete listing, by array and volume, of how much storage is not being charged or accounted for.
You get FULL accountability of which storage is being accounted for and which storage is NOT, across the entire enterprise, regardless of storage vendor.
6.6 IOPS VS CAPACITY REPORTING IN THE DATA WAREHOUSE
Let's look at performance versus capacity and orphaned storage by last access. This adds another dimension to how your storage is being used.
Open the Performance Datamart (hint: use the breadcrumbs to select Public Folders, then select the Performance Datamart).
Select the Internal Volume Daily Performance folder. This provides a really good pictorial view of how your storage is being used.
Select Reports and select Allocated used internal volume Count by IOPS Ranges. This provides a capacity-versus-IOPS report, which is very interesting.
Select the Last Year time period.
Select all storage models and tiers and click Finish. Selecting all arrays and all tiers gives you a full view of how your storage is being used (or not being used…).
Looking at the results, remember this is storage accessed over the past year. The resulting report shows you all the storage that has (or has not) been accessed over the past year.
As you see from the first bar, there are over 7,300 volumes that have not been accessed in the past year. If we look at it in terms of size, over 34 PB has had zero access in the past year. Note: this is an actual real customer, but the names have been sanitized.
You can see how impactful this is. There is over 34 PB of storage that has had zero use for a year. This information enables you to start making business decisions about the storage and to better understand how it's being used, so you can reclaim and repurpose some of it. (Talk about ROI!)
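The rollup behind this report can be sketched in Python. This is a hypothetical illustration (the volume data and IOPS buckets below are made up; the real ranges come from the report's parameter page): each volume is bucketed by its total IOPS over the period, and volume count and capacity are summed per bucket.

```python
# Hypothetical volumes with their capacity and total IOPS over the last year
volumes = [
    {"name": "vol01", "capacity_gb": 500, "iops_last_year": 0},
    {"name": "vol02", "capacity_gb": 250, "iops_last_year": 12000},
    {"name": "vol03", "capacity_gb": 800, "iops_last_year": 0},
]

# Assumed IOPS buckets; the report lets you choose these as parameters
ranges = [(0, 0), (1, 10000), (10001, float("inf"))]

def bucket_label(lo, hi):
    return f"{lo}-{hi}" if hi != float("inf") else f"{lo}+"

totals = {bucket_label(lo, hi): {"count": 0, "capacity_gb": 0} for lo, hi in ranges}
for v in volumes:
    for lo, hi in ranges:
        if lo <= v["iops_last_year"] <= hi:
            t = totals[bucket_label(lo, hi)]
            t["count"] += 1
            t["capacity_gb"] += v["capacity_gb"]
            break

print(totals)
```

Everything landing in the zero-IOPS bucket is a reclamation candidate, which is exactly the conversation this report is meant to start.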
6.7 DIGGING INTO THE DETAILS
These charts are really nice, but you need the detail to work effectively on identification and recovery. OK, let's go look at the underlying details.
Go back to the Volume Daily Performance 6.3 folder and drill down to Reports (hint: it's in the Performance Datamart).
Select the Array Performance report. This gives you a complete breakdown of the performance for all storage, from the arrays all the way down to the volumes.
Select one year and set the IOPS parameter you want to filter on (I usually start at the default).
This report starts with the Orphan Summary.
Select Page Down to view the storage array summary.
As you see, this is pretty high level. It shows the total amount of raw and allocated capacity in each storage device versus the total IOPS and the max IOPS actually used over the past year. This tells a very compelling story, but it's still high level.
Page down a few pages to reach the bottom of this section. You see a glossary of terms explaining the column headings.
Now continue to page down to the host tables. These show you the hostname, the raw and allocated capacity by host, and the IOPS accessed over the past year. This is more detail than the storage tables above.
Page down past the host tables and you reach the orphaned-volumes perspective. Here is a great deal of detail that you can use. These are all the volumes that have not been accessed in a full year. It shows you the array name, volume capacities, and hostname, as well as the applications and tiers that have not been accessed in the last year.
Page down to the "Volume by IOPS" tables (they may be several pages down). These show you the storage array, volume, capacity, host, application, tier, and the max and total IOPS. So we can say it's a pretty well-rounded report that shows you actual usage (or the lack thereof) so you can go reclaim the storage that is not used.
6.8 VM CAPACITY REPORTING
There are several different reports in the VM Capacity Datamart.
Navigate to the VM Capacity 6.3 Datamart. As you see, we have several reports built in here already.
Select VM Capacity 6.3 and then navigate into the Reports folder.
Select VM Capacity Summary.
Select All so we see the VM capacity across the entire enterprise (spanning multiple vCenters).
The results show all the VMs, their capacity, the datastore, the actual capacity, the VM names, the provisioned storage, and the commit ratio of each VM across your entire environment. NOTE: I paged down to the bottom so you can see the total storage and commitment across your whole enterprise, plus a glossary of terms.
Select the "return" button in the upper right corner of the report (it looks like a left-turn arrow).
Next, select the Inactive VMs report to show VMs that have not been accessed in a defined period of time (default 60 days).
Set this time threshold and click Finish.
This is an excellent report showing you which VMs are powered off, how long they have been powered off, and how much capacity each one of them is holding that nobody else can use. It gives you all the details, including the datacenter, VM OS, ESX host, cluster, and VMDK, and how long it's been powered off. Armed with this information, you can go recover these VMs and reclaim the storage.
7 CREATE AD-HOC REPORT
Let's show you how easy it is to create custom reports in the data warehouse.
7.1 HOW TO CREATE A CUSTOM SHOWBACK/CHARGEBACK REPORT USING BUSINESS INSIGHT ADVANCED
Below is a great example of the custom chargeback or showback report that you will create. It shows usage by business entity and application, including the variable cost of each VM based on its configuration, fixed overhead, and storage usage.
STEPS TO CREATE THIS REPORT
Watch a video on how to create this report. Note: You need a user name and password for this community. To obtain them, click the Become a Member link.
The OnCommand Insight Reporting Portal is accessed through http://<reporting-server>:8080/reporting
Enter your user name and password credentials.
From the Welcome page, select My home.
From the Launch menu (at the top right corner of the OnCommand Insight Reporting portal), select Business Insight Advanced.
From the list of all packages that appears, click the Capacity <version> folder and then click VM Capacity <version>.
Create a new report by selecting New from the dropdown in the upper left corner, or Create New if you are on the Business Insight Advanced landing page.
From the pre-defined report layouts in the New pop-up, choose List and click OK.
In the lower right pane, select the Source tab and expand Advanced Data Mart from the VM Capacity package.
From the Advanced Data Mart, expand Business Entity Hierarchy and Business Entity, then drag Tenant onto the report work area.
Collapse Advanced Data Mart and expand Simple Data Mart.
From Simple Data Mart, drag Application onto the report work area to the right of the Tenant column. (TIP: Make sure you place it on the blinking gray BAR to the right of the previous column, or it will give you an error.)
Now we are going to drag multiple columns to the palette to save time building the report.
We will be reporting on the total processors (cores) and the memory configured for each VM. So let's grab the following elements from the VM Dimension under the Advanced Data Mart.
From Advanced Data Mart, expand VM Dimension.
Select the next columns IN THE FOLLOWING ORDER:
From Advanced Data Mart > VM Dimension, hold the Control key and select the following columns (in order):
o VM Name
o Processors
o Memory
Click and drag VM Name onto the report work area to the right of the Application column. NOTE: All the columns should follow in the order you selected them, similar to the screenshot below (your data will differ, but the columns will be the same).
Now let's bring capacity information onto the report.
From Simple Data Mart, hold the Control key and select the following columns (in order):
o Tier
o Tier Cost
o Provisioned Capacity (GB)
Click and drag the Tier column onto the report work area to the right of the Application column. NOTE: All the columns should follow in the order you selected them, similar to the screenshot below (your data will differ, but the columns will be the same).
To create a summary of cost per GB, hold the Control key and select Tier Cost and Provisioned Capacity (GB).
Then right-click the Provisioned Capacity column, select Calculate, and select the multiplication calculation.
Business Insight Advanced has created a new column for you, completed the calculations, and put it in the report.
Next, let's format and re-title the column.
Right-click on the new column header and select Show Properties.
In the lower right corner, scroll down to the bottom of the Properties box and select the ellipsis on the Data Item Name box. Change the name to Storage Cost and click OK.
Note the column heading is now Storage Cost.
Now select one of the numeric values in that column and select the Data Format ellipsis from the Properties box in the lower right corner.
From the Data Format dialog box, select Currency from the Format Type dropdown.
As you see from the Properties dialog box, there are lots of options you can set to format the currency numbers in this column. The default is USD, so let's just click OK to accept the default. You will see the column reformat to USD.
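The new column is simple arithmetic: Storage Cost = Tier Cost ($/GB) multiplied by Provisioned Capacity (GB). A minimal Python sketch with hypothetical rows (the VM names and costs here are made up; in the report the values come from the Simple Data Mart columns you just dragged in):

```python
# Hypothetical rows illustrating the calculated "Storage Cost" column:
# Storage Cost = Tier Cost ($/GB) * Provisioned Capacity (GB)
rows = [
    {"vm": "vm-app01", "tier_cost": 0.50, "provisioned_gb": 100},
    {"vm": "vm-db01",  "tier_cost": 1.25, "provisioned_gb": 200},
]
for row in rows:
    row["storage_cost"] = row["tier_cost"] * row["provisioned_gb"]
    # The same USD formatting the Data Format dialog applies
    print(f'{row["vm"]}: ${row["storage_cost"]:,.2f}')
```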
Here is our current report. Let's filter out storage that is NOT being charged.
Select any BLANK cell in the Tier Cost column and click the Filter icon in the top toolbar.
Select Exclude Null.
Here is our current report. Notice that all the rows that had NO cost associated with their tiers are removed, leaving you with only the storage that has charges associated with it. (TIP: in another report you can actually reverse the logic and show only the storage that is NOT being charged as well…)
You can also format the Tier Cost column with USD currency if you want.
OK, that was easy, but not complete. Let's add other cost factors into your chargeback report: the cost of VM service levels by configuration, and the fixed overhead costs used by each application.
ADDING VARIABLE COSTS PER VM TO YOUR CHARGEBACK REPORT
Let's say the customer wants to charge per VM based on the number of CPUs and the memory it's configured with. To do that, we first need to create a VM Service Level comprised of the number of CPUs and memory configured for each VM, and then allocate a cost per service level.
To create a VM Service Level, we are going to drop in a small conditional expression that builds the service level per VM. This is an easy example of the flexibility of Business Insight Advanced in creating reports. (DON'T panic: you can skip the conditional expression and just put a fixed cost on each VM if you want. See the overhead example later on… but humor me here in this lab.)
Select the Tier column to mark where we want to insert the new columns.
Select the Toolbox tab at the lower right corner and double-click the Query Calculation icon.
In the Create Calculation dialog box, name the column VM Service Level, select Other Expression, and click OK.
In the Data Item Expression dialog box, copy and paste the following VM Service Level conditional expression into the Expression Definition box and select OK. (Note: if you are remoted into the OnCommand Insight server, you may have to create a text document on the OnCommand server desktop to paste this into before pasting it into the Expression box.)
Below is an example of the conditional expression that gives you the if-else logic for VM Service Level:
IF ([Processors] = 2 AND [Memory] < 2049)
THEN ('Bronze')
ELSE (IF ([Processors] = 2 AND [Memory] < 4097)
THEN ('Bronze_Platinum')
ELSE IF ([Processors] = 4 AND [Memory] < 8193)
THEN ('Silver')
ELSE IF ([Processors] = 4 AND [Memory] > 8193)
THEN ('Silver_Platinum')
ELSE IF ([Processors] = 6 AND [Memory] > 8191)
THEN ('Gold')
ELSE IF ([Processors] = 8 AND [Memory] > 16383)
THEN ('Gold_Platinum')
ELSE ('tbd'))
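Read as plain code, the expression above is equivalent to the following Python sketch (for illustration only; the thresholds and level names are taken directly from the expression, and memory is assumed to be in MB, as the thresholds suggest):

```python
def vm_service_level(processors: int, memory_mb: int) -> str:
    """Mirror of the report's conditional expression, same thresholds."""
    if processors == 2 and memory_mb < 2049:
        return "Bronze"
    elif processors == 2 and memory_mb < 4097:
        return "Bronze_Platinum"
    elif processors == 4 and memory_mb < 8193:
        return "Silver"
    elif processors == 4 and memory_mb > 8193:
        return "Silver_Platinum"
    elif processors == 6 and memory_mb > 8191:
        return "Gold"
    elif processors == 8 and memory_mb > 16383:
        return "Gold_Platinum"
    return "tbd"  # anything else falls through to the ELSE branch

print(vm_service_level(4, 4096))   # prints "Silver"
print(vm_service_level(8, 32768))  # prints "Gold_Platinum"
```

Note that any VM whose CPU/memory combination misses every branch (for example, a 4-CPU VM with exactly 8193 MB) lands in "tbd", so you may want to tune the thresholds for your environment.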
Business Insight Advanced will validate the conditional expression (nice to know if you got it right), create the column called VM Service Level, and populate it based on the query. (If you get an error, your conditional expression probably has a syntax or other error.)
You will see a new column added called VM Service Level, with the various service levels for each VM based on the number of CPUs and the memory each has. (At this point there may be duplicates in the list, but not to worry: we are not finished formatting or grouping the report.)
Next, let's add a column that calculates the cost per VM based on the service levels we just established.
Select the Toolbox tab at the lower right corner and double-click the Query Calculation icon.
In the Create Calculation dialog box, name the column Cost Per VM, select Other Expression, and click OK.
In the Data Item Expression dialog box, paste the conditional expression for Cost of VM (below) into the Expression Definition box and select OK.
Example of the conditional expression for Cost per VM:
IF ([VM Service Level] = 'Bronze') THEN (10) ELSE (IF ([VM Service Level] = 'Bronze_Platinum') THEN (15) ELSE IF ([VM Service Level] = 'Silver') THEN (20) ELSE IF ([VM Service Level] = 'Silver_Platinum') THEN (25) ELSE IF ([VM Service Level] = 'Gold') THEN (40) ELSE IF ([VM Service Level] = 'Gold_Platinum') THEN (55) ELSE (30))
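This expression is just a lookup table with a default. In Python terms (illustration only, using the same values as the expression):

```python
# Cost per service level, with the ELSE branch acting as the default (30)
COST_PER_LEVEL = {
    "Bronze": 10, "Bronze_Platinum": 15,
    "Silver": 20, "Silver_Platinum": 25,
    "Gold": 40, "Gold_Platinum": 55,
}

def cost_per_vm(service_level: str) -> int:
    return COST_PER_LEVEL.get(service_level, 30)

print(cost_per_vm("Silver"))  # prints "20"
print(cost_per_vm("tbd"))     # prints "30"
```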
You will see a new column added, called Cost Per VM, with variable costs for each VM based on its service level.
Next, format the data in the Cost Per VM column to USD currency as you did above.
ADDING FIXED OVERHEAD COSTS TO YOUR CHARGEBACK REPORT
Let's say the customer has determined that the total cost of overhead (including items like heat/AC, floor space, power, rent, operations personnel, helpdesk, etc.) is $24 per VM. Let's create a column called Cost of Overhead and apply this fixed cost. (Note: you can do this for any fixed costs as well, rather than use SQL.)
Select the Toolbox tab at the lower right corner and double-click the Query Calculation icon as above.
In the Create Calculation dialog box, name the column Cost of Overhead, select Other Expression, and click OK.
In the Data Item Expression dialog box, enter a cost of 24 in the Expression Definition box and select OK.
You will see a new column added, called Cost of Overhead, with 24 for each VM. (Note: at this point there may be duplicates in the list, but not to worry: we are not finished formatting or grouping the report.)
Next, format the data in the Cost of Overhead column to USD currency as you did above. Then drag the column header and drop it to the right of the Storage Cost column, as shown below.
Subtotaling, naming, and saving the report
Now that we have a cost per VM, overhead, and the cost of storage usage by tenant, application, and VM, let's sum the total costs and finish formatting the report by tenant and application.
Hold the Control key down and select a numeric cell in each of the Cost Per VM, Storage Cost, and Cost of Overhead columns. Right-click one of the numeric cells, select Calculate, and choose the add function for the three columns.
This will create a new column called "Cost Per VM + Storage Cost + Cost of Overhead" and calculate each row.
Now format the column for USD currency and retitle it "Total Cost of Services".
Name the report "Total Storage, VM, and Overhead Cost by Tenant and Application Chargeback (Showback)" by double-clicking the title area.
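Row by row, the summed column is simply the three costs added together; a quick sketch with hypothetical values:

```python
# Hypothetical row:
# Total Cost of Services = Cost Per VM + Storage Cost + Cost of Overhead
row = {"cost_per_vm": 20, "storage_cost": 250.0, "cost_of_overhead": 24}
row["total_cost_of_services"] = (
    row["cost_per_vm"] + row["storage_cost"] + row["cost_of_overhead"]
)
print(f'${row["total_cost_of_services"]:,.2f}')  # prints "$294.00"
```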
Now save it to the Customer Report folder using the same name.
FORMATTING AND GROUPING THE REPORT BY APPLICATION AND TENANT
We are not done yet. Now we need to format the report by grouping, subtotaling, and totaling by tenant and application.
Hold the Control key down and select the "Cost Per VM", "Provisioned Capacity", "Storage Cost", "Cost of Overhead", and "Total Cost of Services" columns.
Select the Total icon from the Summary dropdown.
If you page down to the bottom of the report, you will see total columns. We'll clean up the summary rows in a minute.
Let's group the report by Tenant and Application.
Hold the Control key down and select the Tenant and Application columns.
Select the Grouping icon from the top toolbar.
CLEANING UP THE REPORT AND RUNNING IT
To clean up the report, right-click and delete the summary ROWS (not columns).
Then go to the bottom of the report, hold the Control key and select both summary rows, then right-click and delete them. (Leave the TOTAL rows.)
Save the report.
Now let's run the report to see how it looks.
Select the Run icon from the toolbar and run the report as HTML. (Note the other formats you can run it in if you want…)
The report will appear like this in its final format. I've paged down in the report below to show you subtotals; you can page to the bottom and see the totals by company and the total of all resources charged.
These reports are extremely flexible and can do what you need. Notice the drill-down link in the Tenant column (pictured above in the red circle). If you click the LINK, you will drill down from Tenant to Line of Business, then to Business Unit, etc. If you right-click on the link, you can drill up as well.
You can now schedule this report to run and be distributed in various formats, like any other OnCommand Insight Data Warehouse report.
Remember, now that you have created this report, every time you run it, it will provide the latest usage information. You can automate it by scheduling it to run and email itself to recipients, etc. Lots of flexibility…
7.2 OTHER OPTIONS FOR AD-HOC REPORTS USING QUERY STUDIO
You can also create simple ad-hoc reports using Query Studio. A very simple example is shown here.
Log onto the data warehouse using Admin/admin123 (you must be logged on as Admin to use Query Studio).
From Public Folders, select the Chargeback Datamart.
Select the Launch menu in the upper right corner of the view and select Query Studio.
The Datamart is split into a "simple Datamart" and an "advanced Datamart". The Simple DM contains the elements that most users use for reports; the Advanced DM contains all the facts and dimensions for all the elements. At this point, we'll create this report using the Simple DM to show you how easy it is.
Expand the Simple DM and do the following:
Click and drag Business Unit to the palette.
Click and drag the Application element to the palette. You see the applications line up with their proper business units.
Click and drag Tier over to the palette to organize the storage usage by tier.
Click and drag "Provisioned Raw by GB". (You can select megabytes or terabytes as well as gigabytes; I've selected GB because this is from a volume perspective and an application perspective.)
To calculate cost, we need to add the "Tier Cost" to the report.
Click and drag the "Tier Cost" element over and place it between the Provisioned Raw and the Tier columns.
To filter out any storage without a tier cost associated, right-click the heading of the Tier Cost column and select Filter (see below for reference):
o Select "Show only the following" (default)
o Select "Missing values" to expand it
o Select "Leave out missing values"
o Select OK
See the results below.
Now let's calculate the total cost of usage by GB per application.
Hold the Control key and highlight the "Provisioned Capacity" and Tier Cost columns until they show yellow.
Select the green Calculation icon from the edit icons at the top, or right-click on the columns and select "Calculate".
In the calculation window, select multiplication, title the new column "Cost for Storage", and click Insert. It creates a new column and completes the calculation.
To format the column, right-click on the new column and select Format Data.
Select Currency, the number of decimal places (usually 0), and the 1000s separator, and click OK. See how the column is formatted now.
Double-click the "Title" on the report and re-title the report "Chargeback by Application and BU".
Now, you don't really need the Tier Cost column, so you can delete it by right-clicking on the column and selecting Delete.
This is a good raw report, but now let's make it more useful.
To group storage cost by Business Unit and Application:
Select the Business Unit column (it turns yellow) and select the Group By icon on the top line.
You see the report reformats itself into cost by application by business unit.
Click the "Save As" icon and save the report to the public folders.
Further editing
You can go back and further edit the report like this:
Let's filter out all the "NA" values in the BU and Application columns. You have to do this one column at a time.
Right-click the BU column and select Filter.
In the filter dialog window, select "Do not show the following (NOT)" from the "Condition" dropdown.
Select NA and click OK.
Do the same for the Application column.
Then save the report again.
As you see, you now have a better quality report.
To exit Query Studio, click the "Return" icon at the top right corner of the screen.
8 SCHEDULING REPORTS FOR DISTRIBUTION
OK, now that the report is saved, let's schedule it for running and distribution. You can schedule all the built-in reports in OnCommand Insight.
Start at the chargeback report we just created (you should be looking at the folder where you saved it…).
Select the Schedule icon on the right-hand side, where you can set the properties.
As you see on the right, you can schedule start and finish dates.
You can also send this report just one time by clicking Disable.
Set the schedule options for weekly, daily, monthly, etc. Schedule this report to run and be sent to yourself at 3pm every Tuesday until Feb 1, 2012. As you can see, you can schedule biweekly, several times a week, or several times a day, or you can also set it up by month, by year, and even by trigger. As you see, lots of options.
There are a lot of options for report format. The default format is HTML, but we can override that default by clicking and choosing from PDF, Excel, XML, CSV, etc.
For delivery, we can email it, save it, or print the report to a specific printer. You can send the report via e-mail to users, distribution lists, etc. We can include a link to the report in the email or attach the report directly to the email as well. NOTE: Recipients must be able to log into the OnCommand DWH to access the link.
When you are done, click OK, and the schedule is set.
9 ENDING COMMENTS AND FEEDBACK
I hope this lab was of value to you. Your feedback is important to the quality of this lab document. Please provide feedback to Dave Collins at davec@netapp.com.
NetApp provides no representations or warranties regarding the accuracy, reliability, or serviceability of any information or recommendations provided in this publication, or with respect to any results that may be obtained by the use of the information or observance of any recommendations provided herein. The information in this document is distributed AS IS, and the use of this information or the implementation of any recommendations or techniques herein is a customer's responsibility and depends on the customer's ability to evaluate and integrate them into the customer's operational environment. This document and the information contained herein may be used solely in connection with the NetApp products discussed in this document.
© 2012 NetApp, Inc. All rights reserved. No portions of this document may be reproduced without prior written consent of NetApp, Inc. Specifications are subject to change without notice. NetApp, the NetApp logo, Go further, faster, xxx, and xxx are trademarks or registered trademarks of NetApp, Inc. in the United States and/or other countries. All other brands or products are trademarks or registered trademarks of their respective holders and should be treated as such. TR-XXX-XX
your arrays grouped by model number.
o Re-group the main view to NO Grouping.
Select the SEARCH icon from the top of the main view. In the search window below the main table, type NTAP and see the array selected. You can step through, looking for the next or previous occurrence of the word NTAP. Close the search window by clicking the X in the red box on the left.
SWITCHES
Using the same navigation, select Switches from the Inventory menu (no pictures here). View the switches and get the names, IP addresses, model numbers, and firmware in the main view. View the ports, zone members, and all other elements using the micro views. You can use the Inventory menu to provide you with views of hosts, storage, switches, VMs, etc. Let's look closer at paths.
PATHS
OnCommand Insight then correlates the paths. The paths are the logical and physical elements that make up the service path between the application, VM and/or host, and its storage. Paths include Fibre Channel, cloud, iSCSI, NFS, block, NAS, etc. Now let's set up the views shown here, with detailed steps below.
Select Paths from the Inventory menu.
Group by Host, then Storage. As you can see here from the Fibre Channel perspective, when we select a path, we're looking at the exact paths between the host and its storage through the redundant fibre connections.
Select the Topology micro view icon from the bottom of the main view.
Use Filter to find host kc_ordev7 by mousing over the top of the Host column until you see a funnel, then typing kc in the top cell of the Host column.
Expand kc_ordev7 and the array ports in the main view.
Select the RED violation port of kc_ordev7.
Select the Zoning, Masking, and Port micro views.
In this particular case, you can see that you have one green path, which is good, across the Fibre Channel, but the other path is blue. See the legend on the right of the topology. WHAT IS NOT CONFIGURED? (Hint: blue and yellow make green.)
ANALYZE PATH VIOLATION
OK, let's analyze the violation.
There are a couple of ways you can analyze violations. You can select the Violations micro view from the bottom icons, or you can simply right-click the RED path for kc_ordev7 in the main view and select "Analyze Violation". This opens the root cause analysis screen below.
o Expand the violation and change information in the lower pane. What is the cause of this violation?
The violation tells you that the number of ports changed from 2 to 1, which went against the redundancy policy and caused the violation. If you look down at the last changes that occurred, you can see that masking was removed, which denied access from that host through its HBA
to the FA port on the array and all the way to the volume. To fix the violation, the administrator needs to reverse those changes, and the violation automatically resolves itself.
Close the Analyze Violation screen.
VIRTUAL MACHINES AND DATA STORE
OnCommand Insight gives you a complete inventory of the VM environment. This is discovered through a data source configured to talk with Virtual Center. Details include all internal configurations and elements of the VMs and ESX hosts and their technologies, as well as all the information to correlate the path from a VM to its storage, and details about how that storage is configured down to the disk, including performance (if you have the OnCommand Insight Perform license installed). We'll discuss performance later in this demo. Select Datastore from the Inventory menu.
Scroll down and select DS-30 in the main view.
Display the Topology and Virtual Machine micro views to show which VMs are using datastore DS-30.
Toggle through the micro view icons below to show the details of the VM, the VMDK capacities, the storage, the backend storage in the arrays, and the resources. You can see full end-to-end visibility from the host, through a virtualized storage environment, to the backend storage.
Note: You can also select Hosts and Virtual Machines from the Inventory menu and cycle through the micro views, noting the end-to-end visibility.
Don't forget to use the micro view ICONS.
3 ASSURANCE
3.1 APPLYING POLICIES TO MONITOR CONFIGURATION AND PERFORMANCE
Now that we've gathered the inventory and pulled all this information into the database, let's start to apply policies so we can monitor and get alerts on violations. We'll talk about setting global policies and changes, and the new Violations Browser. We'll show how we can analyze performance, and talk about some port balance violations and disk utilization violations. Initially we set global policies within OnCommand Insight so we can monitor the environment and be alerted when something falls outside those policies. There are several policies available.
Select Policy from the top menu bar.
Select Global Policies.
What thresholds can you set from here?
Select Violation Severity from the left menu of the Global Policy window.
What Severities can you set for each threshold
Select Violation Notification What are the possible violation notification options
32 FIBRE CHANNEL POLICY SETTINGS
We set Fibre Channel policies to define and keep track of path redundancy. We can set options such as no SPF (single point of failure) or redundant. We can set the minimum number of ports on the host and the storage, and the maximum number of switch hops. We can set exceptions for volume types that would not necessarily require redundancy, like BCVs, R1, and R2. We can also set policy exceptions for smaller volumes that wouldn't have redundancy, like EMC gatekeepers.
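To make the policy logic concrete, here is a minimal sketch of evaluating such a redundancy policy against a single host-to-storage path. The field names, policy keys, and threshold values are illustrative assumptions for this sketch, not OCI's actual data model:

```python
# Assumed policy shape; values mirror the kinds of settings described above.
POLICY = {
    "min_host_ports": 2,       # minimum HBA ports on the host
    "min_storage_ports": 2,    # minimum ports on the storage array
    "max_switch_hops": 3,      # maximum switch hops allowed on the path
    "exempt_volume_types": {"BCV", "R1", "R2", "gatekeeper"},
}

def check_path(path, policy=POLICY):
    """Return the list of redundancy violations for one path."""
    if path["volume_type"] in policy["exempt_volume_types"]:
        return []  # exception: these volume types don't require redundancy
    violations = []
    if path["host_ports"] < policy["min_host_ports"]:
        violations.append("host port redundancy")
    if path["storage_ports"] < policy["min_storage_ports"]:
        violations.append("storage port redundancy")
    if path["switch_hops"] > policy["max_switch_hops"]:
        violations.append("too many switch hops")
    return violations

path = {"volume_type": "standard", "host_ports": 1,
        "storage_ports": 2, "switch_hops": 4}
print(check_path(path))  # ['host port redundancy', 'too many switch hops']
```

The point of the sketch is the exemption check: a BCV or gatekeeper volume short-circuits to no violations, exactly as the policy exceptions above intend.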
From the Policy menu on the menu bar, select Fibre Channel Policy.
What type of redundancy can you set from here?
What is the default number of ports?
Volume Type Exceptions: What volume exemptions can you select?
You can set redundancy policies on physical storage behind a virtualizer (Backend Path)
3.3 VIOLATIONS BROWSER
Let's take a look at the Violations Browser. The Violations Browser lets us see the impact of violations on your business elements in one place, and helps us manage the violations against all of the global policies you saw above.
From the Assurance menu on the left, select Violations Browser. (Note: At this point you might want to increase your viewing real estate by closing the navigation pane on the left. To close the navigation pane, go to Tools > OnCommand Insight Settings and uncheck the Navigation Pane box. You can use the same process to turn it back on later.)
Back in the Violations Browser, expand the All Violations explorer to reveal the violation categories. This shows violations like Datastore Latency, Disk Utilization, volume and internal volume IOPS and response times, Port Balance violations, etc. These should look familiar from the global policies we just reviewed a few minutes ago.
As you can see, all the violations pile up here: over 12,000 of them. (NOTE: Don't let this scare you. Most violations are caused by events that create multiple violations per event; fix one event and a bunch of these go away.) We can see detail on each of these violations by performing the following:
Select the Show Violations Impact icon to view all the violations in context by business entity, application, host, virtual machine, datacenter, etc.
Expand the Impacted Business Entity explorer and drill down to Earth Thermal Tracking
Expand and select Disk Utilization
Sort the Description column descending. Now you can see Element, Description, Severity, and Violation Type.
Select the top violation element called Disk DISK-14 of Storage Virtualizer
In the Impact Details microview, toggle the Host, Virtual Machines, Applications and Business Entities, and Storage icons to view the details of the impact of the violations. Here we see the impact on one application called City Limits, owned by one business entity (Green Corp > Alternate Energy > Geothermal > Earth Thermal Tracking), on one host. However, 10 virtual machines are affected by this violation from one array.
The chart in the Violation Event microview shows the history and trending of the utilization on this one disk over time. From here we can analyze the performance details, as we'll see later in this demo.
REVIEW:
What are the categories that show the impact of violations?
What business entity is impacted by these violations?
What is the utilization of this disk?
Which hosts are being affected by this violation?
Which VMs are being affected by the violation?
We'll do some troubleshooting using these violations and Analyze Performance later.
3.4 PORT BALANCE VIOLATIONS
Let's take a look at port balance violations. These violations show imbalanced SAN traffic from hosts, arrays, and switches. These are not performance-related violations.
Using either the Navigation Pane or the dropdown menu, open Assurance.
Select Port Balance Violations
Group by Type then Device
Expand Hosts and sort the Device column ascending
Select device Host nj_exch002. Note that this host has a balance index of 81, which represents the difference in the distribution of traffic (the load) between the HBAs on this host. Any index over 50 indicates significantly unbalanced ports on a device.
Select the Switch Port Performance microview. Note that over 88% of the traffic is going across one HBA, and only 11% of the traffic across the other. A failure on the heavily used HBA could choke that application. This could indicate that the port balancing software on this server is misconfigured, not configured at all, or not installed.
Now collapse the Hosts and expand the Storage devices
Select various storage devices and view the traffic distribution in the switch port performance microview to understand the balance across the storage ports
These port balance violations provide valuable data on how well your environment is configured and optimized. They let you quickly determine where you need to optimize your configurations based on actual usage. These are balance violations within each device, not necessarily traffic-related performance violations. We'll look at performance a bit later.
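One plausible way to think about a balance index is as the spread between the busiest and least busy port's share of traffic. This is an assumed formula for illustration only (the exact calculation OCI performs isn't documented here, so the numbers won't match the GUI exactly, but the idea carries over):

```python
def balance_index(shares):
    """Spread between the busiest and least busy port's share of traffic.
    0 = perfectly balanced; near 100 = one port carries nearly everything.
    Assumed formula for illustration, not necessarily OCI's calculation."""
    if len(shares) < 2:
        return 0.0
    return max(shares) - min(shares)

# Two HBAs carrying roughly 88% and 11% of the traffic, as on nj_exch002 above:
print(balance_index([88.0, 11.0]))  # 77.0 -> well past the 50 warning line

# Three well-balanced HBAs, as we'll see on ny_ora1 later in the lab:
print(balance_index([34.0, 33.0, 33.0]))  # 1.0
```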
3.5 DISK UTILIZATION VIOLATIONS
We looked at Disk Utilization Violations in the Violations Browser a few minutes ago because we were alerted to a violation. But if you haven't seen an alert, you can go directly to Disk Utilization Violations here and troubleshoot your issues much as we did in the Violations Browser. The difference is that the Violations Browser breaks down the violations by how they impact your business entities, applications, data centers, etc., so you can troubleshoot by business priority, while Disk Utilization Violations lets you easily see your most critical utilization issues and troubleshoot from the disk utilization violation back to the hosts. You can also add columns to show Applications and Business Entities if you want.
Let's take a look at how you can use Disk Utilization Violations to quickly identify and drill down to where your issue is.
Select Disk Utilization Violations from the Assurance Menu
Sort the Utilization column descending to bring your heaviest utilization to the top
Here you see the utilization of each disk that exceeded the Disk Utilization threshold we set earlier in our Global Policy. For each violation you see the disk, the array, the hosts that access this disk, the date and time the violation occurred, and the percentage of utilization, as well as IOPS and throughput.
Select the disk with the highest utilization
Now select the Volume Usage of Disk microview to get details on volume usage and performance.
Sort the Disk IOPS descending and select the top Volume Usage of Disk. Here I see the volume with the highest usage, along with the disk throughput and percentage info by volume and host.
Select the Switch Port Performance microview. I see that my load appears to be balanced (Distribution column) across the storage ports, so that's most likely not a SAN or network configuration issue.
Since this disk did cause a utilization violation, I can identify host candidates that may be causing the high utilization on the disk, OR I can see that the disk may have too many volumes carved from it and that I may need to spread that load across more disks.
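The triage pattern above (sort the violations by utilization, take the worst first) is simple to express outside the GUI as well. A minimal sketch with hypothetical records; the field names and values are illustrative, not OCI's schema:

```python
# Hypothetical disk-utilization violation records for illustration only.
violations = [
    {"disk": "DISK-03", "array": "Sym-0000500743", "utilization": 71.0, "iops": 950},
    {"disk": "DISK-14", "array": "Virtualizer",    "utilization": 92.5, "iops": 410},
    {"disk": "DISK-27", "array": "XP 1024",        "utilization": 64.2, "iops": 300},
]

# Sort descending by utilization, mirroring the "sort the Utilization column
# descending" step above, so the heaviest disk heads the triage queue.
worst_first = sorted(violations, key=lambda v: v["utilization"], reverse=True)
for v in worst_first:
    print(v["disk"], v["utilization"], v["iops"])
```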
4 PERFORMANCE
OnCommand Insight provides performance information end to end. This differs from the violations we discussed above in that it provides pure performance information for volumes, internal volumes (FlexVols), storage pools (aggregates), and disks. Performance also covers VMs, switches, ESX, Hyper-V, VMDKs, and datastores. From here we can troubleshoot congestion, contention, and bottlenecks; identify heavily used storage pools, volumes, disks, and SAN ports; find candidates for physical-to-virtual host virtualization; and optimize your storage and tiering.
From the Navigation Pane or the dropdown menu expand the Performance menu
Here we see that OnCommand Insight collects and shows you storage performance, switch performance, datastore performance, VM performance, and even application performance as it relates to storage performance.
4.1 STORAGE PERFORMANCE
From the Performance menu select Storage Performance
Sort Top Volume IOPS column descending (if not already done)
Select the array called Sym-0000500743… in the main view (it should be near the top).
Use Scroll Bars to see more performance info in all windows
Use the horizontal slide bar in the main view to see the volume response times and IOPS, as well as the disk utilization and IOPS columns (far right). Notice there is no internal volume performance information, because the EMC Symmetrix does not contain any. We'll look at a NetApp array shortly to see internal volume (FlexVol) performance.
Now, in the main view, notice the column called Volume Partial R/W. This indicates there are volumes on that array that are misaligned (we'll see more detail later).
Select the microviews at the bottom to show the details of disk performance and volume performance. Which microviews did you open? (Hint: view below.) Notice this provides detailed throughput, IOPS, and response times at the volume and disk level.
Close the Disk Performance microview
Select the column customize icon in the header of the Volume Performance microview.
Use the vertical scroll bar to view all the columns that can be added or removed from this report.
Select the Partial R/W and Storage columns and click OK. This adds columns to the Volume Performance report showing you each volume on each array that is misaligned. (Note: you can get a complete list of all your misaligned volumes by selecting all the arrays in the main view above.) Additionally, you can group the volumes by storage to make it easier to view all the misaligned volumes across your entire enterprise by array (see figure below).
Now select the Symmetrix-FAST array and toggle the Chart microview on. Here you see OnCommand Insight showing EMC FAST auto-tiering. You can also see NetApp hybrid aggregates by selecting a NetApp array. We can also chart this performance over time.
(Figure callouts: Partial Read/Write indicates volume misalignment; notice the FAST-T volume performance.)
OnCommand Insight provides complete end-to-end performance views through virtualized storage. Let's take a look.
Select the storage array called Virtualizer from the main view. (Note: this is a V-Series machine, but OCI provides the same visibility through other virtualizers as well.)
Toggle on the Virtual Machine Performance microview
Toggle the Backend Volume Performance and Datastore Performance microview on
Use the slide bars at the bottom of the microviews to see more of the performance columns in each view. (Whoops… there is a red mark in the Latency column. We will analyze this later.)
I know this is a bit busy, but I wanted to demonstrate that you have deep performance visibility from the VM through the datastore to the frontend virtualizer array and through the backend volumes. You can also drill performance down to the disks on that backend array, and you can select the Switch Port Performance microview to visualize performance on the SAN. So you can see very deep performance information from end to end. We will "analyze" these performance views end to end a bit later.
4.2 SAN PERFORMANCE (SWITCH PORT PERFORMANCE)
Switch performance is the actual performance on the SAN at the switch
Select Switch Port Performance from the Performance menu. OnCommand Insight knows whether the switches are connected to arrays or hosts, so it shows you the performance in the context of the host or array instead of from the switch perspective.
Using the dropdown at the top of the table, group the main view by "Connected Device Type then Name".
Using the dropdown next to it, set the timeframe to "Last Week" and hit the refresh icon to the right.
Sort the Distribution column Descending (arrow pointing down)
Expand Hosts
Expand hosts ny_ora1 and exchange_ny1
If you look at the Value and Distribution columns, you can see how the HBAs are balanced on these hosts. On host ny_ora1 you see three HBAs that are balanced very well. But looking at host exchange_ny1, you see that one of your two HBAs carries over 95% of the traffic load while the other carries less than 5%. So you can see an imbalance of the load across your HBAs. Perhaps the multipath software is not configured correctly, doesn't work, or is installed but not turned on. However, also note that one HBA is 4Gb and the other is 2Gb; the admins may have purposely configured this host's traffic to compensate for the slower HBA…
Select the Port Performance Distribution and Port Performance microviews to view this analysis over time.
Select exchange_ny1 from the main menu above. View the performance and distribution of both HBAs. If you select one or the other, the performance and distribution charts change to show the details of what you've selected. This behavior is the same throughout the OnCommand Insight GUI.
4.3 CANDIDATES FOR HOST VIRTUALIZATION BASED ON ACTUAL PERFORMANCE
This performance view, from the host SAN perspective, shows you which are the busiest servers and which are candidates for virtualization.
Toggle off the 2 performance charts
Collapse the expanded columns using the Collapse All Groups icon on top of the main view.
Expand Hosts again. Notice your busiest servers are at the top of the list.
Use the vertical slide bar to go to the bottom of the host list to see your least busy hosts. As you see here, there are many hosts near the bottom that have hardly any traffic. Note: If you have a virtualization project going on, you can very quickly isolate which physical hosts don't have much traffic to their applications, and conduct your due diligence on those applications for possible relocation to the VM environment.
You can also use the same information here to choose which ESX hosts are good candidates to receive those applications, based on how much traffic they are generating on the SAN.
4.4 STORAGE ARRAY PERFORMANCE BASED ON SAN TRAFFIC
We use the same logic and methods to optimize the traffic across the storage ports of the arrays.
Collapse the Hosts section and expand the Storage section
You can see the busiest arrays at the top
Expand storage array XP 1024 to see the traffic flow through the storage ports. In this case over 80% of the traffic is going across two of the six ports on the storage array. Not very well balanced. You can rebalance this traffic, OR, using this information, you can select a lesser-used storage port to provision your NEXT Tier 1 application to. This helps you intelligently provision and optimize your environment using real traffic analysis.
4.5 STORAGE TIERING AND ANALYSIS
Just as you did with the hosts, you can see the storage arrays that are NOT so busy.
Scroll to the bottom of the Storage Array list
There are several expensive Tier 1 Symmetrix and other arrays at the bottom of this list that have very little traffic. These arrays may have lots of data on them, but nobody's using it. Armed with this information, you could take a look at the application data on these expensive Tier 1 arrays and move the applications to less expensive Tier 2 or Tier 3 arrays, OR archive it. Then you can decommission or repurpose these expensive arrays. (LOTS of ROI potential here.)
4.6 SWITCH ISL TRAFFIC VISIBILITY AND OPTIMIZATION
OnCommand Insight shows you only the ISLs (Inter-Switch Links) under the Switches category.
Collapse the Storage category in the main view and expand the Switch category
Expand Switch 78 and hcis300
As we saw with hosts and arrays, we can see exactly how well balanced the traffic is across the ISLs. Switch hcis300 is well balanced, but on Switch 78 we see that 90% of the traffic is going across one switch link and only 9% across the other. If this is a trunk, it is severely out of balance.
We also see which are the busiest and least busy switches This allows us to balance out (optimize) our environment as well as weed out the least busy switches
4.7 VIRTUAL MACHINE AND DATASTORE PERFORMANCE: TROUBLESHOOTING END-TO-END PERFORMANCE ISSUES USING "ANALYZE PERFORMANCE"
Let's put all this performance information to good use.
USE CASE: I may have gotten a call from a user complaining that the application on VM-70 is running slow, or I may have received an alert from a threshold being breached. Let's troubleshoot the problem.
Select Virtual Machine Performance
Select Custom from the "Timeframe" dropdown menu next to the grouping menu. Enter the dates January 1, 2012 through now.
Then hit the Green Recycle button next to the dropdown
Sort the VM Disk Top Latency column descending to get the longest latency at the top
Here we see that VM-70 does not, in fact, appear to have any performance issues, but we do see very high CPU, memory, and datastore latency on VM-60 and VM-61.
Look at column 2. The common factor between VM-70 (the user complaint) and VM-60 is DS-30.
Open the Datastore Performance microview to validate the high latency time
Right click on VM-60 and select Analyze Performance
This opens an analysis of everything associated with VM-60 and DS-30.
See the tabs across the top of the window Each of these tabs provides in-depth visibility into performance within each category
Selecting the Disk tab, I see that although there are a few high "top" utilization values, overall utilization and IOPS are relatively low, so we can rule out a hot disk issue.
Select the Volumes tab and the Internal Volumes tab. I see there are some relatively high top response times, but still very low IOPS, which tells me other factors are affecting the response time and the slowness of the application on VM-70.
Select Backend Volumes. We see the storage is virtualized, and we can see the performance of the backend volumes here. I see some possibly higher IOPS, but still no glaring performance issues.
To make sure I don't have a SAN problem, I select the Switch Performance tab. It shows an imbalance between the 2Gb HBAs on ESX1 (where VM-60 and VM-70 live) and a potential optimization or outage issue, but no gridlock.
Select the Hosts tab. This tab shows me that host ESX1 is the same host that holds VM-60 and VM-70. VM-60 appears to be causing very high CPU and memory usage, which is causing contention with time sharing during disk access, thus creating high disk latency. But the disk IOPS are still very low.
We can deduce that VM-60 is probably not sized right for the application that is driving it hard. This is probably what's causing the disk latency issue, so the chances of a disk issue are slim.
4.8 VM PERFORMANCE
VM Performance helps you troubleshoot the same scenarios. Here you can understand what's going on in the whole environment.
Select Virtual Machine Performance
Sort the Top Disk Latency column in descending order so the largest latency rises to the top. In this case, VM-61 is chewing up a lot of memory and a lot of CPU time but using low disk IOPS. The VM itself appears to be causing the latency issues.
Select VM-61. You can open a microview and see the VMDK performance as well.
Add chart microview
You can also break it out by volume performance and datastore performance, giving you a more holistic picture of the environment and helping you troubleshoot to resolution.
The takeaway is you can troubleshoot performance issues from many different angles and go in many different directions to quickly narrow down the problem
4.9 APPLICATION AND HOST PERFORMANCE
You can add your applications and hosts to any of these performance views to help you understand how performance is affecting your applications. That is important to the business customer. You can drill down and understand where the performance issue is, with visibility from the application all the way to the disks.
Scroll down to ESX1
Use the horizontal slide bars in the main and microviews to see performance info
OnCommand Insight shows you performance from the host perspective all the way back to the storage, but remember it does not use agents on the host, so it cannot show you the details of performance on the host itself.
Review questions
What is the value of Analyze Performance?
In which areas can we view performance metrics under Analyze Performance?
5 PLANNING TOOLS
5.1 TASK AND ACTION PLANNING AND VALIDATION
OnCommand Insight has two planning tools to help you plan, validate, and monitor changes in your environment. One is a change management tool, and the other is a migration tool for switches only.
The change management (or what-if) tool helps you create tasks, and actions within those tasks, using a wizard. It helps you logically lay out changes that you need to make, test and validate those changes before you make them, and monitor the progress of changes as you make them. This significantly reduces your risk when making changes, because you can pretest them before you make any actual changes in your environment.
NOTE: Remember, OnCommand Insight is a READ-ONLY tool, so it does not perform any active tasks. Use it in the planning, validation, and execution monitoring of your change management.
31 Insert Technical Report Title Here
Select Planning Menu
Select Plans to access the tool
Select the task ID oadmin 01082007 – Replace HBA Clearcase1.
Notice the Actions list for the task. These are created by you to help you logically and accurately list out the steps.
To add more actions, simply right-click in the action area and select "Add Action".
In the New Action window, scroll down and select the action you want to perform. You can add a description and other parameters, then select OK.
32 Insert Technical Report Title Here
Then you can pre-validate the actions to ensure you know the results of each action BEFORE you actually perform the task. To do this, right-click the task and select Validate Task.
As you see below, OnCommand Insight validates each action against the current configuration in your environment, showing what has been completed correctly (GREEN CHECKMARK), what is not completed (BLANK BOX), and what was not completed correctly (RED X).
When you build the action list, OnCommand Insight automatically compares your planned changes to your existing environment and anticipates any future violations that could occur if you made these changes without correcting the planned actions, OR violations that already exist in your environment.
Once you finish creating your list of action items, you can right-click and validate the actions as many times as you want until they are complete. OnCommand Insight validates every one of these actions. It shows you whether the actions are complete, done incorrectly, or not completed at all. It gives you a preview of potential issues before you make the changes, thus lowering your risk.
5.2 SWITCH MIGRATION TOOL
The migration tool gives you instantaneous visibility into all of the environment and business entities that will be affected by a migration to new or updated switches. Say you want to just update the firmware on a switch. What if it goes down in the middle of the upgrade? What does that affect in your environment? Knowing this ahead of time reduces your risk by giving you the complete picture of who and what will be affected by the interruption.
The migration tool lets you tell OnCommand Insight which switches you want to upgrade or replace. Because OnCommand Insight knows all the hosts, storage arrays, volumes, business units, and applications that are affected by this change, it can show you the current violations as well as the future violations that will occur when the switches are pulled out. This enables you to validate the total impact of the changes you want to make BEFORE you make them, so you can reduce your risk by fixing issues before they occur.
NOTE: Remember, OnCommand Insight is a READ-ONLY tool, so it does not perform any migration tasks. Use it in the planning and execution monitoring of your migration.
Under the Planning menu, select Migrations. This shows you the migration tasks already created and the existing impact of proposed changes on your business entities.
To add a new task, right-click in the task area and select Add Task.
Complete the task details above and click Next to select the switch(es) to migrate.
Select the switches to be updated or replaced and click Finish
Select the new task in the main screen and use the microviews to see the affected paths, impact, and quality assurance views.
Using this information you can speed up switch migrations, because it cuts the due diligence time and lowers your risk: you know the impacts before you take any actions.
6 DATA WAREHOUSE
6.1 INTRODUCTION AND OVERVIEW
Let's introduce you to the data warehouse. We'll talk about the datamarts and navigation, then we'll go into the reports, and we'll finish by showing you how to create ad-hoc reports using Query Studio.
The data warehouse is made up of several Datamarts Datamarts are sets of data that relate to each other
Open a browser and go to http://localhost:8080/reporting
Log on using admin / admin123.
If you receive this page, uncheck the "show this page…" option and select My Home.
(Figure: Data warehouse (DWH) home page, Public Folders.)
The data warehouse has several built-in datamarts. Above you see the three primary datamarts: the Chargeback Datamart, the Inventory Datamart, and the Storage Efficiency Datamart. Additionally, there are two folders which contain other datamarts for capacity and performance.
Select the Capacity 6.3 folder.
As you can see, there are other capacity-related datamarts, including the Internal Volume, Volume, Storage and Storage Pool, and VM Capacity datamarts. Datamarts provide you with easy-to-use data elements related to those specific categories, making it easier to use the existing reports and, more importantly, to create your own custom reports using the drag-and-drop technology we'll show later in this lab.
Select the Storage Capacity Datamart
There are four folders located within EVERY datamart. Most built-in reports are in the Reports folder. Any custom reports you create MUST be saved in the Customer Reports or Customer Dashboards folders in order to preserve them during upgrades.
Select Dashboards (notice the BREADCRUMBS to help you navigate)
Which dashboards are located in the folder?
6.2 PLAN - CAPACITY FORECAST DASHBOARD
The data warehouse has over 200 built-in dashboards and reports. Let's take a look at a few.
The Capacity Forecast dashboard provides a history of how storage has been used, as well as trends and forecasts into the future, by data center and by tier.
Select the Capacity Dashboard. This may take a bit of time to paint, so be patient.
The Capacity Forecast dashboard provides you with trending and forecasting of your capacity across your entire environment. NOTE: your data may differ from the picture depending on the demo DB you are using and the date (because it's a trending chart).
While we are at it, let's also stage the Tiering Dashboard in a new window by holding the Shift key and selecting the Tiering Dashboard, so we can discuss it as well in a few minutes.
When it first opens, you see in the upper left the Capacity Consumption Forecast report by datacenter and tier. The initial view shows how much storage is left in each datacenter, by tier, before it reaches 80% of capacity (adjustable by the user). The graph on the right depicts the usage trending and forecasting over time. The "Reset Selection" button resets the graphic to show storage trending across the entire enterprise.
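The forecast itself is trend extrapolation. A minimal sketch of the idea, using a simple least-squares line over daily used-capacity samples to project when usage crosses the 80% line (the dashboard's actual forecasting model may well be more sophisticated; this is an assumption for illustration):

```python
def days_until_threshold(used_tb, capacity_tb, threshold=0.80):
    """Fit a straight line to daily used-capacity samples and project how
    many days remain until usage crosses threshold * capacity.
    Returns None if usage is flat or shrinking."""
    n = len(used_tb)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(used_tb) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, used_tb)) / \
            sum((x - mean_x) ** 2 for x in xs)
    if slope <= 0:
        return None
    return (threshold * capacity_tb - used_tb[-1]) / slope  # days from last sample

# Ten daily samples growing ~1 TB/day toward an 80% line on a 100 TB tier:
samples = [60 + d for d in range(10)]
print(round(days_until_threshold(samples, 100)))  # 11
```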
Select the Tokyo / Gold-Fast block on the matrix. Notice the graph at the right changes to reflect the storage consumption trending and forecasting for that tier at that datacenter.
Reset the Storage Capacity Trend chart by clicking Reset Selection next to the matrix chart. The chart on the right will show the trending and forecasting for the entire enterprise.
Scroll down the dashboard to view the list of reports on the right side. Each of the dashboards has a list of related reports on the lower right-hand side. You can select from any number of different reports to get the detailed information that you need.
The dashboard also contains some dial graphics showing you storage consumption and capacity in your enterprise and each datacenter
Continuing down the left side of the dashboard, these charts show you business-level storage consumption by business entity. Here we can drill down to see usage by tenant, line of business, business unit, and project.
Right-click in this graphic and you can drill down to view storage usage by line of business, drill again to business unit, and then by project.
As you can see, you get really detailed information on consumption by your business entities, from tenant, LOB, business unit, and project down to application, very quickly.
6.3 TIER DASHBOARD
Let's take a look at the Tier dashboard that we opened up a few minutes ago, by selecting it from the tabs at the bottom of your Windows screen.
Note: Your data may vary depending on the database used for this demo.
This dashboard gives us a different perspective on how storage is growing and how it is being used. As you see, the Gold tier has remained relatively stable over the past few months, while Gold-Fast storage, which is more expensive, has grown considerably over the past couple of months. This tells you how your tiering initiatives are progressing. Bronze, which has hardly grown at all, could be an indication that we're spending too much money on storage. You might want to review your storage usage using OnCommand Insight to see how the storage is being consumed, and by whom.
Scroll down. Let's look a little closer. OnCommand Insight shows storage usage by business unit, application, and tier. This enables you to understand how storage is being used. You can also view it by data center, tier, and business entity.
As we did in the last report, you can right-click and drill down to look at consumption by tenant, line of business, business unit, project, and application. You can understand how your data is being consumed at multiple levels and from multiple aspects.
Select the "Return" icon at the top right of the Tier Dashboard to return to the folder.
There is a new Storage Tier report located in the Storage and Storage Pool Datamart. Let's take a quick look at it.
Use the breadcrumbs to navigate back to the Capacity 6.3 folder.
Then select the Storage and Storage Pool Capacity Datamart and the Reports folder.
Next, select the Storage Capacity By Tier report to view the report below. This report shows your capacity by tier and how it trends over time. It also provides detail and summary tables at the bottom showing each array, its tiers, how much capacity is used, and the percentage (lots of information in a single report).
6.4 ACCOUNTABILITY AND COST AWARENESS
The standard data warehouse chargeback reports are more about accountability than actual chargeback. We'll show you this now. We'll also show you how to create your own powerful "custom" chargeback/showback reports using Business Insight Advanced later in this lab.
Select Public Folders in the breadcrumbs at the top left of the Data Warehouse window.
Select the Chargeback Datamart.
In the Chargeback Datamart, select the Reports folder to access various reports that show capacity and accountability.
Select Capacity Accountability by Business Entity and Service Level Detail. Here you have the option to customize this report to your needs by selecting service levels, resource types, applications, and host and storage names. You also have the option of selecting the business entity by using the dropdown to select any or all of the business entities and projects.
Select All in each category to give you a good representation of the in-depth reporting, then click Finish.
The report provides a very detailed view of capacity utilization by business entity and application: the host it's running on, the storage array, the volume, and the actual provisioned and used storage. The report is grouped by business unit as well as application, giving you a good representation of who's using what storage.
Note the scroll bar for scrolling on page 1; you can also use the Page Up/Page Down links at the bottom to go to page 2, etc.
Select the Return icon in the upper right to return to the folder of reports.
6.5 UNCHARGED STORAGE
You can also generate reports that help you understand what storage is NOT being accounted for.
Select "Capacity Accountability by Uncharged Capacity per Internal Volume". This provides you with a complete listing, by array and volume, of how much storage is not being charged or accounted for.
You get FULL accountability of which storage is being accounted for and which storage is NOT, across the entire enterprise, regardless of storage vendor.
6.6 IOPS VS. CAPACITY REPORTING IN THE DATA WAREHOUSE
Let's look at performance versus capacity and orphaned storage by last access. This adds another dimension to how your storage is being used.
Open the Performance Datamart (hint: use the breadcrumbs to select Public Folders and then select the Performance Datamart).
Select the Internal Volume Daily Performance folder. This provides a really good pictorial view of how your storage is being used.
Select Reports and select Allocated Used Internal Volume Count by IOPS Ranges. This provides a capacity versus IOPS report, which is very interesting.
Select the Last Year time period.
Select all storage models and tiers and click Finish.
Select all arrays and all tiers to give you a full view of how your storage is being used (or not being used...).
Looking at the results, remember this is storage accessed over the past year. The resulting report shows you all the storage that has (or has not) been accessed over that period.
As you see from the first bar, there are over 7,300 volumes that have not been accessed in the past year. In terms of size, over 3.4 PB has had zero access over the past year. Note: this is actual customer data, but the names have been sanitized.
You can see how impactful this is: over 3.4 PB of storage has had zero use for a year. This information enables you to start making business decisions about the storage and to better understand how it's being used, so you can reclaim and repurpose some of that storage. (Talk about ROI!)
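The idea behind this report — bucket every volume by its IOPS over the period and sum the capacity in each bucket — can be sketched in Python. This is illustrative only: the volume records, field order, and IOPS ranges below are hypothetical, not OnCommand Insight's actual schema or bucket boundaries.

```python
from collections import defaultdict

# Hypothetical per-volume records: (name, allocated GB, total IOPS over the past year)
volumes = [
    ("vol-a", 500, 0),
    ("vol-b", 1200, 0),
    ("vol-c", 800, 350),
    ("vol-d", 2000, 12000),
]

# Assumed IOPS buckets; the report uses its own range boundaries
ranges = [(0, 0, "0 IOPS"), (1, 1000, "1-1000 IOPS"), (1001, float("inf"), ">1000 IOPS")]

def bucket(volumes):
    # Count volumes and sum allocated capacity per IOPS range, like the report's bars
    counts = defaultdict(int)
    capacity_gb = defaultdict(int)
    for name, gb, iops in volumes:
        for lo, hi, label in ranges:
            if lo <= iops <= hi:
                counts[label] += 1
                capacity_gb[label] += gb
                break
    return dict(counts), dict(capacity_gb)

counts, capacity = bucket(volumes)
print(counts["0 IOPS"], capacity["0 IOPS"])  # 2 1700 -> zero-access volumes and their reclaimable GB
```

The "0 IOPS" bucket is the interesting one for reclamation: it is the analog of the report's first bar.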
6.7 DIGGING INTO THE DETAILS
These charts are really nice, but you need the detail to effectively work on identification and recovery. OK, let's go look at the underlying details.
Go back to the Volume Daily Performance 6.3 folder and drill down to Reports (hint: it's in the Performance Datamart).
Select the Array Performance report. This gives you a complete breakdown of the performance for all storage, from the arrays all the way down to the volumes.
Select one year and set the IOPS parameter you want to filter on (I usually start at the default).
This report starts with the Orphan Summary.
Select Page Down to view the storage array summary.
As you see, this is pretty high level. It shows the total amount of raw and allocated capacity in each storage device versus the total IOPS and the max IOPS actually used over the past year. This tells a very compelling story, but it's still high level.
Page down a few pages to reach the bottom of this section. You see a glossary of terms explaining the column headings.
Now continue to page down to the host tables. These show you the hostname, the raw and allocated capacity by host, and the IOPS accessed over the past year. This is more detail than the storage tables above.
Page down past the host tables and you get the orphaned-volumes perspective. Here is a great deal of detail that you can use. These are all the volumes that have not been accessed in a full year. It shows you the array name, volume capacities, and hostname, as well as the applications and tiers that have not been accessed in the last year.
Page down to the "Volume by IOPS" tables (they may be several pages down). These show you the storage array, volume, capacity, host, application, tier, and the max and total IOPS. So we can say it's a pretty well-rounded report that shows you actual usage (or lack thereof), so you can go reclaim the storage that is not used.
6.8 VM CAPACITY REPORTING
There are several different reports in the VM Capacity Datamart.
Navigate to the VM Capacity 6.3 Datamart.
As you see, we have several reports built in here already.
Select VM Capacity 6.3 and then navigate into the Reports folder.
Select VM Capacity Summary.
Select All so we see the VM capacity across the entire enterprise (spanning multiple vCenters).
The results show all the VMs: their capacity, the datastore, the actual capacity, the VM names, the provisioned storage, and the commit ratio of each VM across your entire environment. NOTE: I paged down to the bottom so you can see the total storage and commitment across your whole enterprise, plus a glossary of terms.
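If you want to reason about the commit-ratio column yourself, one common definition — an assumption here, not taken from the report's glossary — is provisioned capacity divided by the datastore's physical capacity, so a value above 1.0 means the datastore is overcommitted:

```python
def commit_ratio(provisioned_gb, datastore_capacity_gb):
    # Assumed definition: capacity promised to (thin-provisioned) VMs
    # relative to what the datastore physically has
    return provisioned_gb / datastore_capacity_gb

# Hypothetical datastore: 3 TB promised to VMs on a 2 TB datastore
print(round(commit_ratio(3000, 2000), 2))  # 1.5 -> overcommitted
```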
Select the "Return" button in the upper right corner of the report (it looks like a left-turn arrow).
Next select the Inactive VMs report to show VMs that have not been accessed in a defined period of time (default 60 days).
Set this time threshold and click Finish.
This is an excellent report showing you which VMs are powered off, how long they have been powered off, and how much capacity each one is holding that nobody else can use. It gives you all the details, including the data center, VM, OS, ESX host, cluster, VMDK, and how long it's been powered off. Armed with this information, you can go recover these VMs and reclaim the storage.
7 CREATE AD-HOC REPORT
Let's show you how easy it is to create custom reports in the data warehouse.
7.1 HOW TO CREATE A CUSTOM SHOWBACK/CHARGEBACK REPORT USING BUSINESS INSIGHT ADVANCED
Below is a great example of the custom chargeback or showback report that you will create. It shows usage by business entity and application, including the variable cost of each VM based on configuration, fixed overhead, and storage usage.
STEPS TO CREATE THIS REPORT
Watch a video on how to create this report. Note: You need a user name and password for this community. To obtain them, click the Become a Member link.
The OnCommand Insight Reporting Portal is accessed through http://&lt;reporting-server&gt;:8080/reporting
Enter your user name and password credentials.
From the Welcome page, select My home.
From the Launch menu (at the top right corner of the OnCommand Insight Reporting Portal), select Business Insight Advanced.
From the list of all packages that appears, click the Capacity &lt;version&gt; folder and then click VM Capacity &lt;version&gt;.
Create a new report by selecting New from the dropdown in the upper left corner, or Create New if you are on the Business Insight Advanced landing page.
From the pre-defined report layouts in the New pop-up, choose List and click OK.
In the lower right pane, select the Source tab and expand the Advanced Data Mart from the VM Capacity package.
From the Advanced Data Mart, expand Business Entity Hierarchy and Business Entity, and drag Tenant onto the report work area.
Collapse the Advanced Data Mart and expand the Simple Data Mart.
From the Simple Data Mart, drag Application onto the report work area to the right of the Tenant column. (TIP: Make sure you place it on the blinking gray bar on the right of the previous column or it will give you an error.)
Now we are going to drag multiple columns to the palette to save time building the report.
We will be reporting on the total processors (cores) and the memory configured for each VM, so let's grab the following elements from the VM Dimension under the Advanced Data Mart.
From the Advanced Data Mart, expand VM Dimension.
Select the next columns IN THE FOLLOWING ORDER:
From Advanced Data Mart > VM Dimension, hold the Control key and select the following columns (in order):
o VM Name
o Processors
o Memory
Click and drag VM Name onto the report work area to the right of the Application column. NOTE: All the columns should follow in the order you selected them, similar to the screenshot below (your data will differ, but the columns will be the same).
Now let's bring capacity information onto the report.
From the Simple Data Mart, hold the Control key and select the following columns (in order):
o Tier
o Tier Cost
o Provisioned Capacity (GB)
Click and drag the Tier column onto the report work area to the right of the Application column. NOTE: All the columns should follow in the order you selected them, similar to the screenshot below (your data will differ, but the columns will be the same).
To create a summary of cost per GB, hold the Control key and select Tier Cost and Provisioned Capacity (GB).
Then right-click the Provisioned Capacity column, select Calculate, and select the multiplication calculation.
Business Insight Advanced creates a new column for you, completes the calculations, and puts the result in the report.
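Per row, the new column is just the product of the two selected columns. A one-line sketch with hypothetical numbers:

```python
# Per-row calculation behind the new column: Tier Cost (per GB) x Provisioned Capacity (GB)
def storage_cost(tier_cost_per_gb, provisioned_gb):
    return tier_cost_per_gb * provisioned_gb

print(storage_cost(5.0, 100))  # 500.0 -> 100 GB provisioned on a hypothetical $5/GB tier
```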
Next, let's format and re-title the column.
Right-click the new column header and select Show Properties.
In the lower right corner, scroll down to the bottom of the Properties box and select the ellipsis on the Data Item Name box. Change the name to Storage Cost and click OK.
Note the column heading is now Storage Cost.
Now select one of the numeric values in that column and select the Data Format ellipsis from the Properties box in the lower right corner.
From the Data Format dialog box, select Currency from the Format Type dropdown.
As you see from the Properties dialog box, there are lots of options you can set to format the currency numbers in this column. The default is USD, so let's just click OK to accept it. You will see the column reformat to USD.
Here is our current report. Let's filter out storage that is NOT being charged.
Select any BLANK cell in the Tier Cost column and click the Filter icon in the top toolbar.
Select Exclude Null.
Here is our current report. Notice all the rows that had NO cost associated with their tiers are removed, leaving you with only the storage that has charges associated with it. (TIP: in another report you can reverse the logic and show only storage that is NOT being charged as well...)
You can also format the Tier Cost column as USD currency if you want.
OK, that was easy, but not complete. Let's add other cost factors into your chargeback report: the cost of VM service levels by configuration, and the fixed overhead costs used by each application.
ADDING VARIABLE COSTS PER VM TO YOUR CHARGEBACK REPORT
Let's say the customer wants to charge per VM based on the number of CPUs and the memory it's configured with. To do that, we first need to create a VM Service Level comprised of the number of CPUs and memory configured for each VM, and then allocate a cost per service level.
To create a VM Service Level, we are going to drop in a small conditional expression to build the service levels per VM. This is an easy example of the flexibility of Business Insight Advanced in creating reports. (DON'T panic: you can skip the conditional expression and just put a fixed cost on each VM if you want. See the overhead example later on... but humor me here in this lab.)
Select the Tier column to mark where we want to insert the new columns.
Select the Toolbox tab in the lower right corner and double-click the Query Calculation icon.
In the Create Calculation dialog box, name the column VM Service Level, select Other Expression, and click OK.
In the Data Item Expression dialog box, copy and paste the following VM Service Level conditional expression into the Expression Definition box and select OK. (Note: if you are remoted into the OnCommand Insight server, you may have to create a text document on the OnCommand server desktop to paste this into before pasting it into the Expression box.)
Below is an example of the conditional expression that gives you the if-else condition for VM Service Level:
IF ([Processors] = 2 AND [Memory] &lt; 2049)
THEN ('Bronze')
ELSE (IF ([Processors] = 2 AND [Memory] &lt; 4097)
THEN ('Bronze_Platinum')
ELSE IF ([Processors] = 4 AND [Memory] &lt; 8193)
THEN ('Silver')
ELSE IF ([Processors] = 4 AND [Memory] &gt; 8193)
THEN ('Silver_Platinum')
ELSE IF ([Processors] = 6 AND [Memory] &gt; 8191)
THEN ('Gold')
ELSE IF ([Processors] = 8 AND [Memory] &gt; 16383)
THEN ('Gold_Platinum')
ELSE ('tbd'))
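The same tiering logic can be sketched in Python, purely as an illustration of what the Cognos expression above does (the report itself uses the expression; the memory thresholds are megabytes of configured VM memory):

```python
def vm_service_level(processors, memory_mb):
    # Mirrors the report's conditional expression: classify a VM
    # by CPU count and configured memory (MB)
    if processors == 2 and memory_mb < 2049:
        return "Bronze"
    if processors == 2 and memory_mb < 4097:
        return "Bronze_Platinum"
    if processors == 4 and memory_mb < 8193:
        return "Silver"
    if processors == 4 and memory_mb > 8193:
        return "Silver_Platinum"
    if processors == 6 and memory_mb > 8191:
        return "Gold"
    if processors == 8 and memory_mb > 16383:
        return "Gold_Platinum"
    return "tbd"  # anything that matches no rule falls into the 'tbd' bucket

print(vm_service_level(4, 4096))  # Silver
```

Note that VMs with CPU counts the rules never mention (3, 5, 7, ...) land in "tbd", exactly as they would with the expression above.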
Business Insight Advanced will validate the conditional expression (nice to know if you got it right), create the column called VM Service Level, and populate it based on the query. (If you get an error, your conditional expression probably has a syntax or other error.)
You will see a new column added, called VM Service Level, with the various service levels for each VM based on the number of CPUs and memory each has. (At this point there may be duplicates in the list, but not to worry: we are not finished formatting or grouping the report.)
Next, let's add a column that calculates the cost per VM based on the service levels we just established.
Select the Toolbox tab in the lower right corner and double-click the Query Calculation icon.
In the Create Calculation dialog box, name the column Cost Per VM, select Other Expression, and click OK.
In the Data Item Expression dialog box, paste the conditional expression for Cost of VM (below) into the Expression Definition box and select OK.
Example of the conditional expression for Cost per VM:
IF ([VM Service Level] = 'Bronze') THEN (10) ELSE (IF ([VM Service Level] = 'Bronze_Platinum') THEN (15) ELSE IF ([VM Service Level] = 'Silver') THEN (20) ELSE IF ([VM Service Level] = 'Silver_Platinum') THEN (25) ELSE IF ([VM Service Level] = 'Gold') THEN (40) ELSE IF ([VM Service Level] = 'Gold_Platinum') THEN (55) ELSE (30))
You will see a new column added, called Cost Per VM, with the variable cost for each VM based on its service level.
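The expression is just a price lookup per service level. Sketched in Python with the lab's illustrative prices (the dollar amounts are the example values from the expression, not real pricing):

```python
# Illustrative per-service-level prices, matching the lab's example expression
SERVICE_LEVEL_COST = {
    "Bronze": 10, "Bronze_Platinum": 15,
    "Silver": 20, "Silver_Platinum": 25,
    "Gold": 40, "Gold_Platinum": 55,
}

def cost_per_vm(service_level):
    # Anything unmatched (the 'tbd' bucket) falls through to the default of 30
    return SERVICE_LEVEL_COST.get(service_level, 30)

print(cost_per_vm("Silver"))  # 20
```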
Next, format the data in the Cost Per VM column as USD currency, as you did above.
ADDING FIXED OVERHEAD COSTS TO YOUR CHARGEBACK REPORT
Let's say the customer has determined that the total cost for overhead (including items like heat/AC, floor space, power, rent, operations personnel, helpdesk, etc.) is $24 per VM. Let's create a column called Cost of Overhead and apply this fixed cost. (Note: you can do this for any fixed costs, rather than using SQL, as well.)
Select the Toolbox tab in the lower right corner and double-click the Query Calculation icon, as above.
In the Create Calculation dialog box, name the column Cost of Overhead, select Other Expression, and click OK.
In the Data Item Expression dialog box, enter a cost of 24 in the Expression Definition box and select OK.
You will see a new column added, called Cost of Overhead, with 24 for each VM. (Note: at this point there may be duplicates in the list, but not to worry: we are not finished formatting or grouping the report.)
Next, format the data in the Cost of Overhead column as USD currency, as you did above. Then drag the column header and drop it to the right of the Storage Cost column, as shown below.
Subtotaling, naming, and saving the report
Now that we have a cost per VM, an overhead cost, and the cost of storage usage by tenant, application, and VM, let's sum the total costs and finish formatting the report by tenant and application.
Hold the Control key down and select a numeric cell in each of the Cost Per VM, Storage Cost, and Cost of Overhead columns. Right-click one of the numeric cells, select Calculate, and choose the add function for the three columns.
This will create a new column called "Cost Per VM + Storage Cost + Cost of Overhead" and calculate each row.
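The per-row total the report computes is the simple sum of the three cost columns. A worked example with hypothetical values (a 4-CPU/4 GB "Silver" VM with 100 GB provisioned on a $5/GB tier, plus the $24 fixed overhead from this lab):

```python
# Hypothetical row of the chargeback report
cost_per_vm = 20          # 'Silver' service-level charge
storage_cost = 5 * 100    # tier cost per GB x provisioned GB
cost_of_overhead = 24     # fixed per-VM overhead

total_cost_of_services = cost_per_vm + storage_cost + cost_of_overhead
print(total_cost_of_services)  # 544
```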
Now format the column as USD currency and retitle it "Total Cost of Services".
Name the report "Total Storage, VM, and Overhead Cost by Tenant and Application Chargeback (Showback)" by double-clicking the title area.
Now save it to the Customer Report folder using the same name.
FORMATTING AND GROUPING THE REPORT BY APPLICATION AND TENANT
We are not done yet. Now we need to format the report by grouping, subtotaling, and totaling by tenant and application.
Hold the Control key down and select the Cost Per VM, Provisioned Capacity, Storage Cost, Cost of Overhead, and Total Cost of Services columns.
Select the Total icon from the Summary dropdown.
If you page down to the bottom of the report, you will see total columns. We'll clean up the summary rows in a minute.
Let's group the report by Tenant and Application.
Hold the Control key down and select the Tenant and Application columns.
Select the Grouping icon from the top toolbar.
CLEANING UP THE REPORT AND RUNNING IT
To clean up the report, right-click and delete the summary ROWS (not columns).
Then go to the bottom of the report, hold the Control key and select both summary rows, then right-click and delete them. (Leave the TOTAL rows.)
Save the report.
Now let's run the report to see how it looks.
Select the Run icon from the toolbar and run the report as HTML. (Note the other formats you can run it in if you want...)
The report will appear like this in its final format. I've paged down in the report below to show you subtotals; you can page to the bottom to see the totals by company and the total of all resources charged.
These reports are extremely flexible. Notice the drill-down link in the Tenant column (pictured above in the red circle). If you click the link, you will drill down from Tenant to Line of Business, then to Business Unit, etc. If you right-click the link, you can drill up as well.
You can now schedule this report to run and be distributed in various formats, like any other OnCommand Insight Data Warehouse report.
Remember, now that you have created this report, every run will provide the latest usage information. You can automate it by scheduling it to run and emailing it to recipients, etc. Lots of flexibility...
7.2 OTHER OPTIONS FOR AD-HOC REPORTS USING QUERY STUDIO
You can also create simple ad-hoc reports by using Query Studio. A very simple example is shown here.
Log on to the data warehouse using Admin/admin123 (you must be logged on as Admin to use Query Studio).
From Public Folders, select the Chargeback Datamart.
Select the Launch menu in the upper right corner of the view and select Query Studio.
The Datamart is set up as a "simple Datamart" and an "advanced Datamart". The simple DM contains the elements most users use for reports; the advanced DM contains all the facts and dimensions for all the elements. At this point we'll create this report using the simple DM to show you how easy it is.
Expand the Simple DM and do the following:
Click and drag Business Unit to the palette.
Click and drag the Application element to the palette. You see the applications line up with their proper business units.
Click and drag Tier over to the palette to organize the storage usage by tier.
Click and drag "Provisioned Raw by GB". (You can select megabytes or terabytes as well as gigabytes; I've selected GB because this is from a volume and application perspective.)
To calculate cost, we need to add the "Tier Cost" to the report.
Click and drag the "Tier Cost" element over and place it between the Provisioned Raw and Tier columns.
To filter out any storage without an associated tier cost, right-click the heading of the Tier Cost column and select Filter (see below for reference):
o Select "Show only the following" (default)
o Select "Missing values" to expand it
o Select "Leave out missing values"
o Select OK
See the results below.
Now let's calculate the total cost of usage by GB per application.
Hold the Control key and highlight the "Provisioned Capacity" and Tier Cost columns until they show yellow.
Select the green Calculation icon at the top of the edit icons, or right-click the columns and select "Calculate".
In the calculation window, select multiplication, title the new column "Cost for Storage", and click Insert. It creates a new column and completes the calculation.
To format the column, right-click the new column and select Format Data.
Select currency, the number of decimal points (usually 0), and the 1000s separator, then click OK. See how the column is formatted now.
Double-click the "Title" on the report and re-title the report "Chargeback by Application and BU".
Now you don't really need the Tier Cost column, so you can delete it by right-clicking the column and selecting Delete.
This is a good raw report, but now let's make it more useful.
To group storage cost by Business Unit and Application:
Select the Business Unit column (it turns yellow) and select the Group By icon on the top line.
You see the report reformats itself into cost by application by business unit.
Click the "Save As" icon and save the report to the Public Folders.
Further Editing
You can go back and further edit the report like this:
Let's filter out all the N/A values in the BU and Application columns. You have to do this one column at a time.
Right-click the BU column and select Filter.
In the filter dialog window, select "Do not show the following (NOT)" from the "Condition" dropdown.
Select N/A and click OK.
Do the same for the Application column.
Then save the report again.
As you see, you now have a better-quality report.
To exit Query Studio, click the "Return" icon at the top right corner of the screen.
8 SCHEDULING REPORTS FOR DISTRIBUTION
OK, now that the report is saved, let's schedule it for running and distribution. You can schedule all the built-in reports in OnCommand Insight.
Go to the chargeback report we just created (you should be looking at where you saved it...).
Select the Schedule icon on the right-hand side, where you can set the properties.
As you see on the right, you can schedule the start and finish dates.
You can send this report just once by clicking Disable.
Set the schedule options for weekly, daily, monthly, etc. Schedule this report to run and be sent to yourself at 3 p.m. every Tuesday until Feb 1, 2012. As you can see, you can schedule biweekly, several times a week, or several times a day, and you can also set it up by month, by year, and even by trigger. As you see, lots of options.
There are a lot of options for the report format. The default format is HTML, but we can override that default by clicking and choosing from PDF, Excel, XML, CSV, etc.
For delivery, we can email it, save it, or print the report on a specific printer. You can send the report via e-mail to users, distribution lists, etc. We can include a link to the report or attach it directly to the email as well. NOTE: Recipients must be able to log into the OnCommand DWH to access the link.
When you are done, click OK and the schedule is set.
9 ENDING COMMENTS AND FEEDBACK
I hope this lab was of value to you. Your feedback is important to the quality of this lab document. Please provide feedback to Dave Collins at davec@netapp.com.
NetApp provides no representations or warranties regarding the accuracy, reliability, or serviceability of any information or recommendations provided in this publication, or with respect to any results that may be obtained by the use of the information or observance of any recommendations provided herein. The information in this document is distributed AS IS, and the use of this information or the implementation of any recommendations or techniques herein is a customer's responsibility and depends on the customer's ability to evaluate and integrate them into the customer's operational environment. This document and the information contained herein may be used solely in connection with the NetApp products discussed in this document.
© 2012 NetApp, Inc. All rights reserved. No portions of this document may be reproduced without prior written consent of NetApp, Inc. Specifications are subject to change without notice. NetApp, the NetApp logo, Go further, faster, xxx, and xxx are trademarks or registered trademarks of NetApp, Inc. in the United States and/or other countries. All other brands or products are trademarks or registered trademarks of their respective holders and should be treated as such. TR-XXX-XX
PATHS
OnCommand Insight then correlates the paths. The paths are the logical and physical elements that make up the service path between the application, VM, and/or host and its storage. Paths include Fibre Channel, cloud, iSCSI, NFS, block, NAS, etc. Now let's set up the views shown here, with detailed steps below.
Select Paths from the Inventory menu.
Group by Host, then Storage. As you can see here from the Fibre Channel perspective, when we select a path, we're looking at the exact paths between the host and its storage through the redundant fibre connections.
Select the Topology micro view icon at the bottom of the Main view.
Use Filter to find host kc_ordev7 by mousing over the top of the Host column until you see a funnel, then typing kc in the Host column's top cell.
Expand kc_ordev7 and the array ports in the main view.
Select the RED violation port of kc_ordev7.
Select the Zoning, Masking, and Port micro views.
In this particular case, you can see that you have one green path across the Fibre Channel, which is good, but the other path is blue. See the legend on the right of the topology: WHAT IS NOT CONFIGURED? (Hint: blue and yellow make green.)
ANALYZE PATH VIOLATION
OK, let's analyze the violation.
There are a couple of ways you can analyze the violations. You can select the Violation micro view from the bottom icons, or you can simply right-click the red path for kc_ordev7 in the main view and select "Analyze Violation". This opens the root cause analysis screen below.
o Expand the violation and change information in the lower pane. What is the cause of this violation?
The violation tells you that the number of ports for this host changed from 2 to 1, which went against the redundancy policy and caused the violation. If you look down at the last changes that occurred, you can see that masking was removed, which denied access from that host, through its HBA, to the FA port on the array, and all the way to the volume. To fix the violation, the administrator needs to reverse those changes, and the violation automatically resolves itself.
Close the Analyze Violation screen.
VIRTUAL MACHINES AND DATA STORE
OnCommand Insight gives you a complete inventory of the VM environment. This is discovered through a data source configured to talk with Virtual Center. Details include all internal configurations and elements of the VMs and ESX hosts, the technologies, as well as all the information to correlate the path from a VM to its storage and details about how that storage is configured down to the disk, including performance (if you have the OnCommand Insight Perform license installed). We'll discuss performance later in this demo.
Select Datastore from the Inventory menu.
Scroll down and select DS-30 in the Main view.
Display the Topology and Virtual Machine micro views to show which VMs are using datastore DS-30.
Toggle through the micro view icons below to show the details of VM and VMDK capacities, the storage, the backend storage in the arrays, and resources. You can see full end-to-end visibility from the host, through a virtualized storage environment, to the backend storage.
Note: You can also select Hosts and Virtual Machines from the Inventory menu and cycle through the micro views, noting the end-to-end visibility.
Don't forget to use the micro view icons.
3 ASSURANCE
3.1 APPLYING POLICIES TO MONITOR CONFIGURATION AND PERFORMANCE
Now that we've gathered the inventory and pulled all this information into the database, let's start to apply policies so we can monitor and get alerts on violations. We'll talk about setting global policies and changes, and the new Violations Browser. We'll show how we can analyze performance, and talk about some port balance violations and disk utilization violations. Initially we set global policies within OnCommand Insight so it can monitor the environment and alert us when something falls outside those policies. There are several policies available.
Select Policy from the top menu bar.
Select Global Policies.
What thresholds can you set from here?
Select Violation Severity from the left menu of the Global Policy window.
What severities can you set for each threshold?
Select Violation Notification. What are the possible violation notification options?
3.2 FIBRE CHANNEL POLICY SETTINGS
We set Fibre Channel policies to determine and keep track of path redundancy. We can set options such as no SPF (single point of failure) or redundant. We can set the minimum number of ports on the host and the storage, and the maximum number of switch hops. We can set exceptions for different volume types that would not necessarily require redundancy, like BCVs, R1, and R2. We can also set policy exceptions for smaller volumes that wouldn't have redundancy, like EMC gatekeepers.
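Conceptually, a policy like this is a set of rules evaluated against every host-to-storage path, with exempted volume types skipped. Purely as an illustration (this is not OnCommand Insight's implementation, and the default values below are assumptions):

```python
from dataclasses import dataclass

@dataclass
class FcPolicy:
    # Assumed illustrative defaults, not the product's actual defaults
    min_host_ports: int = 2
    min_storage_ports: int = 2
    max_switch_hops: int = 3
    exempt_volume_types: tuple = ("BCV", "R1", "R2")

def path_violations(path, policy):
    """Return the policy rules a host-to-storage path breaks (illustrative)."""
    if path["volume_type"] in policy.exempt_volume_types:
        return []  # exempted volume types are never flagged
    violations = []
    if path["host_ports"] < policy.min_host_ports:
        violations.append("host port redundancy")
    if path["storage_ports"] < policy.min_storage_ports:
        violations.append("storage port redundancy")
    if path["switch_hops"] > policy.max_switch_hops:
        violations.append("too many switch hops")
    return violations

# A path that lost one of its two host ports breaks the redundancy rule
print(path_violations(
    {"volume_type": "STD", "host_ports": 1, "storage_ports": 2, "switch_hops": 2},
    FcPolicy(),
))  # ['host port redundancy']
```

This mirrors the masking-removal scenario analyzed earlier in the lab: a host dropping from 2 ports to 1 trips the redundancy rule, while a BCV volume on the same host would be exempt.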
From the Policy menu on the menu bar, select Fibre Channel Policy.
What type of redundancy can you set from here?
What is the default number of ports?
Volume Type Exceptions: What volume exemptions can you select?
You can set redundancy policies on physical storage behind a virtualizer (Backend Path).
3.3 VIOLATIONS BROWSER
Let's take a look at the Violations Browser. It allows us to see the impact of the violations on your business elements in one place and helps us manage the violations of all of those global policies you saw above.
From the Assurance menu on the left, select Violations Browser. (Note: At this point you might want to increase your viewing real estate by closing the navigation pane on the left. To close it, go to Tools > OnCommand Insight Settings and uncheck the Navigation Pane box. You can use the same process to turn it back on later.)
Back in the Violations browser expand the All Violations explorer to reveal the violation categories This shows violations like Datastore Latency Disk Utilization Volume and internal volume IOPS and response times Port Balance violations etc These should look familiar to you from the global policies that we just reviewed a few minutes ago
As you can see you can look at the violations all pile here as you can see over 12000 violations (NOTE Donrsquot let this scare you Usually most violations are caused by events that create multiple violations per event You fix one event and a bunch of these go away) We can see detail on each of these violations by performing the following
Select the show violations impact icon to view all the violations in context by business entity, application, host, virtual machine, datacenter, etc.
Expand the Impacted Business Entity explorer and drill down to Earth Thermal Tracking.
Expand and select Disk Utilization.
Sort the Description column descending. Now you can see Element, Description, Severity, and Violation type.
Select the top violation element, called Disk DISK-14 of Storage Virtualizer.
In the Impact Details microview, toggle the Host, Virtual Machines, Applications and Business Entities, and Storage icons to view the details of the impact of the violations. Here we see the impact on one application called City Limits, owned by one business entity called Green Corp > Alternate Energy > Geothermal > Earth Thermal Tracking, on one host. However, 10 virtual machines are affected by this violation from one array.
The chart in the Violation Event microview shows the history and trending of the utilization on this one disk over time. From here we can analyze the performance details, as we'll see later in this demo.
REVIEW:
What are the categories that show the impact of violations?
What business entity is impacted by these violations?
What is the utilization of this disk?
Which hosts are being affected by this violation?
Which VMs are being affected by the violation?
We'll do some troubleshooting using these violations and Analyze Performance later.
3.4 PORT BALANCE VIOLATIONS
Let's take a look at port balance violations. These are violations showing imbalance in SAN traffic from hosts, arrays, and switches. These are not performance-related violations.
Using either the Navigation Pane or the dropdown menu, open Assurance.
Select Port Balance Violations.
Group by Type then Device.
Expand Hosts and sort the Device column ascending.
Select device Host nj_exch002. Note that this host has a balance index of 81, which represents the difference in distribution of traffic (the load) between the HBAs on this host. Any index over 50 indicates significantly unbalanced ports on a device.
Select the Switch Port Performance microview. Note that over 88% of the traffic distribution is going across one HBA and only 11% of the traffic is going across the other. A failure on the heavily used HBA could choke that application. This could indicate that port balancing software is not configured correctly on this server, not configured at all, or not installed.
Now collapse Hosts and expand the Storage devices.
Select various storage devices and view the traffic distribution in the Switch Port Performance microview to understand the balance across the storage ports.
These port balance violations provide valuable data on how your environment is configured and optimized. They allow you to quickly determine where you need to optimize your configurations based on actual usage. These are balance violations within each device, not necessarily traffic-related performance violations. We'll look at performance a bit later.
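The guide doesn't spell out how the balance index is computed, but the underlying idea (compare each port's share of the device's total traffic) is easy to illustrate. A rough sketch, assuming purely for illustration that imbalance is scored as the spread between the busiest and least busy port in percentage points:

```python
def distribution(traffic):
    """Each port's percentage share of a device's total traffic.
    traffic: dict mapping port name -> observed throughput (e.g. MB/s)."""
    total = sum(traffic.values())
    return {port: 100.0 * load / total for port, load in traffic.items()}

def is_unbalanced(traffic, threshold=50.0):
    """Flag a device whose busiest and least busy ports differ by more
    than `threshold` percentage points (a stand-in for the balance index)."""
    shares = distribution(traffic).values()
    return (max(shares) - min(shares)) > threshold
```

An 88/11-style split like the one on nj_exch002 scores far above 50 and would be flagged; an even three-way split would not.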
3.5 DISK UTILIZATION VIOLATIONS
We looked at Disk Utilization Violations in the Violations Browser a few minutes ago because we were alerted to a violation. But if you didn't look at the error, you can go directly to Disk Utilization Violations here and troubleshoot your issues similarly to how we did in the Violations Browser. The difference is that the Violations Browser breaks down the violations by how they impact your business entities, applications, data centers, etc., so you can troubleshoot by business priorities, while Disk Utilization Violations lets you easily see your most critical utilization issues and troubleshoot from the disk utilization violation back to the hosts. You can also add columns to show applications and business entities if you want.
Let's take a look at how you can use Disk Utilization Violations to quickly identify and drill down to where your issue is.
Select Disk Utilization Violations from the Assurance menu.
Sort the Utilization column descending to bring your heaviest utilization to the top. Here you see the utilization of each disk that exceeded the Disk Utilization threshold we set earlier in our global policy. For each violation you see the disk, the array, the hosts that access this disk, the date and time the violation occurred, and the percentage of utilization, as well as IOPS and throughput.
Select the disk with the highest utilization.
Now select the Volume Usage of Disk microview to get details on volume usage and performance.
Sort the Disk IOPS descending and select the top Volume Usage of Disk. Here I see the volume with the highest usage, along with the disk throughput and percentage info by volume and host.
Select the Switch Port Performance microview. I see that my load appears to be balanced (Distribution column) across the storage ports, so that's most likely not a SAN or network configuration issue.
Since this disk did cause a utilization violation, I can identify possible host candidates that could be causing the high utilization on the disk, OR I can see that the disk may have too many volumes carved from it and I may need to spread that load out across more disks.
4 PERFORMANCE
OnCommand Insight provides performance information end to end. This is different from the violations we discussed above in that it provides pure performance information for volumes, internal volumes (FlexVol volumes), storage pools (aggregates), and disks. Performance also covers VMs, switches, ESX, Hyper-V, VMDKs, and datastores. From here we can troubleshoot congestion, contention, and bottlenecks; identify heavily used storage pools, volumes, disks, and SAN ports; find possible candidates for physical-to-virtual host virtualization; and optimize your storage and tiering.
From the Navigation Pane or the dropdown menu, expand the Performance menu. Here we see that OnCommand Insight collects and shows you storage performance, switch performance, datastore performance, VM performance, and even application performance as it relates to storage performance.
4.1 STORAGE PERFORMANCE
From the Performance menu, select Storage Performance.
Sort the Top Volume IOPS column descending (if not already done).
Select the array called Sym-0000500743… in the main view (it should be near the top).
Use the scroll bars to see more performance info in all windows.
Use the horizontal slide bar in the main view to see the volume response times and IOPS, as well as the disk utilization and IOPS columns (far right). Notice there is no internal volume performance information, because the EMC Symmetrix does not contain any. We'll look at a NetApp array shortly to see internal volume (FlexVol) performance.
Now in the main view, notice the column called Volume Partial R/W. This indicates there are volumes on that array that are misaligned (we'll see more detail later).
Select the microviews at the bottom to show details of disk performance and volume performance. Which microviews did you open (hint: view below)? Notice this provides detailed throughput, IOPS, and response times at the volume and disk level.
Close the Disk Performance microview.
Select the column customize icon in the header of the Volume Performance microview.
Use the vertical scroll bar to view all the columns that can be added to or removed from this report.
Select the Partial R/W and Storage columns and click OK. This adds columns to the Volume Performance report showing each volume on each array that is misaligned. (Note: you can get a complete list of all your misaligned volumes by selecting all the arrays in the main view above.) Additionally, you can group the volumes by storage to make it easier to view all the misaligned volumes across your entire enterprise by array. (See figure below.)
Now select the Symmetrix-FAST array and toggle the Chart microview on. Here you see OnCommand Insight showing EMC FAST auto-tiering; notice the FAST volume performance. You can also see NetApp hybrid aggregates by selecting a NetApp array. We can also chart this performance over time. (Figure callouts: Partial Read/Write indicates volume misalignment.)
OnCommand Insight provides complete end-to-end performance views through virtualized storage. Let's take a look.
Select the storage array called Virtualizer from the main view. (Note: This is a V-Series machine, but OCI provides the same visibility through other virtualizers as well.)
Toggle on the Virtual Machine Performance microview.
Toggle the Backend Volume Performance and Datastore Performance microviews on.
Use the slide bars at the bottom of the microviews to see more of the performance columns in each view. (Whoops… there is a red mark in the Latency column. We will analyze this later.)
I know this is a bit busy, but I wanted to demonstrate that you can have deep performance visibility from the VM, through the datastore, to the frontend virtualizer array, and on through to the backend volumes. You can also drill down performance to the disks on that backend array, and you can select the Switch Port Performance microview to visualize the performance on the SAN. So you can see very deep performance information from end to end. We will "analyze" these performance views from end to end a bit later.
4.2 SAN PERFORMANCE (SWITCH PORT PERFORMANCE)
Switch performance is the actual performance on the SAN at the switch.
Select Switch Port Performance from the Performance menu. OnCommand Insight knows whether the switches are connected to arrays or hosts, so it shows you the performance in the context of the host or array instead of from the switch perspective.
Using the dropdown at the top of the table, group the main view by "Connected Device Type then Name".
Using the dropdown next to it, set the timeframe to "Last Week" and hit the refresh icon to the right.
Sort the Distribution column descending (arrow pointing down).
Expand Hosts.
Expand hosts ny_ora1 and exchange_ny1.
If you look at the Value and Distribution columns, you can see how the HBAs are balanced on these hosts. On host ny_ora1 you see three HBAs that are balanced very well. But looking at host exchange_ny1, you see that one of your two HBAs has over 95% of the traffic load on it while the other has less than 5%. So you can see an imbalance of the load across your HBAs. Perhaps the multipath software is not configured correctly, doesn't work, or is installed but not turned on. However, also look at the fact that one HBA is 4Gb and the other is 2Gb; the admins may have purposely configured this host's traffic to compensate for the slower HBA…
Select the Port Performance Distribution and Port Performance microviews to view this analysis over time.
Select exchange_ny1 from the main view above. View the performance and distribution of both HBAs. If you select one or the other, the performance and distribution charts change to show the details of what you've selected. This behavior is the same throughout the OnCommand Insight GUI.
4.3 CANDIDATES FOR HOST VIRTUALIZATION BASED ON ACTUAL PERFORMANCE
This performance view, from the host SAN perspective, shows you which are the busiest servers and which are candidates for virtualization.
Toggle off the two performance charts.
Collapse the expanded columns using the Collapse All Groups icon on top of the main view.
Expand Hosts again. Notice your busiest servers are at the top of the list.
Use the vertical slide bar to go to the bottom of the host list to see your least busy hosts. As you see here, there are many hosts near the bottom that have hardly any traffic. Note: If you have a virtualization project going on, you can very quickly isolate which physical hosts don't have much traffic to their applications and conduct your due diligence on those applications for possible relocation to a VM environment.
You can also use the same information to choose which ESX hosts are good candidates to receive those applications, based on how much traffic they are generating on the SAN.
4.4 STORAGE ARRAY PERFORMANCE BASED ON SAN TRAFFIC
We use the same logic and methods to optimize the traffic across the storage ports of the arrays.
Collapse the Hosts section and expand the Storage section.
You can see the busiest arrays at the top.
Expand storage array XP 1024 to see the traffic flow through the storage ports. In this case over 80% of the traffic is going across two of the six ports on the storage array. Not very well balanced. You can rebalance this traffic, OR, using this information, you can select a lesser-used storage port to provision your NEXT Tier 1 application to. This helps you intelligently provision and optimize your environment using real traffic analysis.
4.5 STORAGE TIERING AND ANALYSIS
Just as you did with the hosts, you can see the storage arrays that are NOT so busy.
Scroll to the bottom of the Storage array list.
There are several expensive Tier 1 Symmetrix and other arrays at the bottom of this list that have very little traffic accessing them. These arrays may have lots of data on them, but nobody's using it. Armed with this information, you could take a look at the application data on these expensive Tier 1 arrays and move the applications to less expensive Tier 2 or Tier 3 arrays, OR archive the data. Then you can decommission or repurpose these expensive arrays. (LOTS of ROI potential here.)
4.6 SWITCH ISL TRAFFIC VISIBILITY AND OPTIMIZATION
OnCommand Insight shows you only the ISLs (Inter-Switch Links) under the Switches category.
Collapse the Storage category in the main view and expand the Switch category.
Expand Switch 78 and hcis300.
As we saw with hosts and arrays, we can see exactly how well balanced the traffic is across the ISLs. Switch hcis300 is well balanced, but on Switch 78 we see that 90% of the traffic is going across one switch link and only 9% across the other. If this is a trunk, it is severely out of balance.
We also see which are the busiest and least busy switches. This allows us to balance out (optimize) our environment as well as weed out the least busy switches.
4.7 VIRTUAL MACHINE AND DATASTORE PERFORMANCE: TROUBLESHOOTING END-TO-END PERFORMANCE ISSUES USING "ANALYZE PERFORMANCE"
Let's put all this performance information to good use.
USE CASE: I may have gotten a call from a user complaining that the application on VM-70 is running slow, or I may have received an alert from a threshold being breached. Let's troubleshoot the problem.
Select Virtual Machine Performance.
Select Custom from the "Timeframe" dropdown menu next to the grouping menu. Enter the dates January 1, 2012 through now.
Then hit the green recycle button next to the dropdown.
Sort the VM Disk Top Latency column descending to get the longest latency at the top.
Here we see that VM-70 does not, in fact, appear to have any performance issues, but we do see very high CPU, memory, and datastore latency on VM-60 and VM-61.
Look at column 2. The common factor between VM-70 (the user complaint) and VM-60 is DS-30.
Open the Datastore Performance microview to validate the high latency time.
Right-click on VM-60 and select Analyze Performance.
This opens an analysis of everything associated with VM-60 and DS-30.
See the tabs across the top of the window. Each of these tabs provides in-depth visibility into performance within each category.
Selecting the Disk tab, I see that although I have a few high "top" utilization values, overall utilization and IOPS are relatively low, so I can rule out a hot disk issue.
Select the Volumes tab and the Internal Volumes tab. I see there are some relatively high top response times but still very low IOPS, which tells me other factors are affecting response time and the slowness of the application on VM-70.
Select Backend Volumes. We see the storage is virtualized, and we can see the performance on the backend volumes here. I see some possibly higher IOPS, but still no glaring issues in performance.
To make sure I don't have a SAN problem, I select the Switch Performance tab. It shows an imbalance between the 2Gb HBAs on ESX1, where VM-60 and VM-70 are: a potential optimization or outage issue, but no gridlock.
Select the Hosts tab. This tab shows me that host ESX1 is the same host that holds VM-60 and VM-70. VM-60 appears to be causing very high CPU and memory usage, which is causing contention with time sharing during disk access, thus creating high disk latency. But the disk IOPS are still very low.
Deduce that VM-60 is probably not sized right for the application that is driving it hard. This is probably what's causing the disk latency issue, so the chances of a disk issue are slim.
4.8 VM PERFORMANCE
VM Performance helps you troubleshoot the same scenarios. Here you can understand what's going on in the whole environment.
Select Virtual Machine Performance.
Sort the Top Disk Latency column in descending order so the largest latency rises to the top. In this case, VM-61 is chewing up a lot of memory and a lot of CPU time but using low disk IOPS. The VM appears to be causing the latency issues.
Select VM-61. You can open a microview and see the VMDK performance as well.
Add the Chart microview.
You can also break it out by volume performance and datastore performance, giving you a more holistic picture of the environment and helping you troubleshoot to resolution.
The takeaway is that you can troubleshoot performance issues from many different angles and go in many different directions to quickly narrow down the problem.
4.9 APPLICATION AND HOST PERFORMANCE
You can add your applications and hosts to any of these performance views to help you understand how your performance is affecting your applications. That is important to the business customer. You can drill down and understand where the performance issue is, with visibility from the application all the way to the disks.
Scroll down to ESX1.
Use the horizontal slide bars in the main view and microviews to see the performance info.
OnCommand Insight shows you performance from the host perspective all the way back to the storage, but remember that it does not have agents on the hosts, so it cannot show you the details of the performance on the host itself.
Review questions:
What is the value of Analyze Performance?
In what areas can we view performance metrics under Analyze Performance?
5 PLANNING TOOLS
5.1 TASK AND ACTION PLANNING AND VALIDATION
OnCommand Insight has two planning tools to help you plan, validate, and monitor changes in your environment. One is a change management tool and the other is a migration tool for switches only.
The change management tool (or "what-if" tool) helps you create tasks, and actions within those tasks, using a wizard. It helps you logically configure the changes you need to make, test and validate those changes before you make them, and monitor the progress of the changes as you make them. This significantly reduces your risk when making changes, because you can pretest them before you make any actual changes in your environment.
NOTE: Remember, OnCommand Insight is a READ-ONLY tool, so it does not perform any active tasks. Use it in the planning, validating, and execution monitoring of your change management.
Select the Planning menu.
Select Plans to access the tool.
Select the task ID oadmin 01082007 - Replace HBA Clearcase1.
Notice the Actions list for the task. These are generated by you to help you logically and accurately list out the tasks.
To add more actions, simply right-click in the action area and select "Add Action".
In the new action window, scroll down and select the action you want to perform. You can add a description and other parameters, then select OK.
Then you can pre-validate the actions to ensure you know the results of each action BEFORE you actually perform the task. To do this, right-click the task and select Validate Task.
As you see below, OnCommand Insight validates each action against the current configuration in your environment to show what has been completed correctly (GREEN CHECKMARK), what is not completed (BLANK BOX), and what is not completed correctly (RED X).
When you build the action list, OnCommand Insight automatically compares your planned changes to your existing environment and anticipates any future violations that could occur if you made these changes without correcting the planned actions, OR violations that already exist in your environment.
Once you complete creating your list of action items, you can right-click and validate the actions as many times as you want until they are completed. OnCommand Insight validates every one of these actions. It will show you whether the actions are completed correctly, done wrong, or not completed at all. It gives you a preview of potential issues before you make the changes, thus lowering your risk.
5.2 SWITCH MIGRATION TOOL
The migration tool provides you with instantaneous visibility into all of the environment and business entities that will be affected by a migration to new or updated switches. Say you want to just update the firmware on a switch. What if it goes down in the middle of the upgrade? What does it affect in your environment? Knowing this ahead of time can reduce your risk by giving you the complete picture of who and what will be affected by the interruption.
The migration tool allows you to tell OnCommand Insight which switches you want to upgrade or replace. Because OnCommand Insight knows all the hosts, storage arrays, volumes, business units, and applications that are affected by this change, it can provide you with the current violations as well as the future violations that will occur when the switches are pulled out. This enables you to validate the total impact of the changes you want to make BEFORE you make them, so you can reduce your risk by fixing issues before they occur.
NOTE: Remember, OnCommand Insight is a READ-ONLY tool, so it does not perform any migration tasks. Use it in the planning and execution monitoring of your migration.
Under the Planning menu, select Migrations. This shows you the migration tasks already created and the existing impact of the proposed changes on your business entities.
To add a new task, right-click in the task area and select Add Task.
Complete the task details above and click Next to select the switch(es) to migrate.
Select the switches to be updated or replaced and click Finish.
Select the new task in the main screen and use the microviews to see the affected paths, impact, and quality assurance views.
Using this information, you can speed up the time it takes to migrate switches, because it cuts the due diligence time and lowers your risk: you know the impacts before you take any actions.
6 DATA WAREHOUSE
6.1 INTRODUCTION AND OVERVIEW
Let's introduce you to the data warehouse. We'll talk about the datamarts and navigation, then we'll go into the reports, and we'll finish by showing you how to create ad-hoc reports using Query Studio.
The data warehouse is made up of several datamarts. Datamarts are sets of data that relate to each other.
Open a browser and go to http://localhost:8080/reporting.
Log on using admin / admin123.
If you receive this page, uncheck the "show this page…" option and select My Home.
Data warehouse (DWH) home page: Public Folders.
The data warehouse has several built-in datamarts. Above you see the three primary datamarts, called the Chargeback Datamart, Inventory Datamart, and Storage Efficiency Datamart. Additionally, we have two folders which contain other datamarts for capacity and performance.
Select the Capacity 6.3 folder.
As you can see, there are other capacity-related datamarts, including the Internal Volume, Volume, Storage and Storage Pool, and VM Capacity datamarts. Datamarts provide you with easy-to-use data elements related to those specific categories, making it easier for you to use the existing reports and, more importantly, helping you create your own custom reports using the drag-and-drop approach we'll show later in this lab.
Select the Storage Capacity Datamart.
There are four folders located within EVERY datamart. Most built-in reports are in the Reports folder. Any custom reports you create MUST be saved in the Customer Reports or Customer Dashboards folders so they survive upgrades.
Select Dashboards (notice the BREADCRUMBS to help you navigate).
Which dashboards are located in the folder?
6.2 PLAN - CAPACITY FORECAST DASHBOARD
The data warehouse has over 200 built-in dashboards and reports. Let's take a look at a few.
The Capacity Forecast dashboard provides a history of how storage has been used, as well as trends and forecasts into the future, by datacenter and by tier.
Select the Capacity Dashboard. This may take a bit of time to paint, so be patient.
The Capacity Forecast dashboard provides you with trending and forecasting of your capacity across your entire environment. NOTE: your data in the picture may vary depending on the demo DB you are using and the date (because it's a trending chart).
While we are at it, let's also stage the Tiering Dashboard in a new window, by holding the Shift key and selecting the Tiering Dashboard, so we can discuss it as well in a few minutes.
When it first opens, you see in the upper left the Capacity Consumption Forecast report by datacenter and tier. The initial view shows how much storage is left in each datacenter, by tier, before it reaches 80% (adjustable by the user) of capacity. The graph on the right depicts the usage trending and forecasting over time. The "Reset Selection" button resets the graphic to show storage trending across the entire enterprise.
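The forecasting itself is built into the Cognos reports, but the underlying arithmetic is a simple trend projection. As a hypothetical illustration only (assuming a plain least-squares line over monthly used-capacity samples, which is not necessarily the fit the dashboard uses):

```python
def months_until_threshold(history, raw_capacity, threshold=0.80):
    """Fit a least-squares line to monthly used-capacity samples and
    estimate how many months remain (past the last sample) until usage
    reaches threshold * raw_capacity. Returns None for a flat or
    shrinking trend, which never crosses the threshold."""
    n = len(history)
    mean_x = (n - 1) / 2.0
    mean_y = sum(history) / n
    var_x = sum((x - mean_x) ** 2 for x in range(n))
    slope = sum((x - mean_x) * (y - mean_y)
                for x, y in enumerate(history)) / var_x
    if slope <= 0:
        return None
    intercept = mean_y - slope * mean_x
    months_to_hit = (threshold * raw_capacity - intercept) / slope
    return max(0.0, months_to_hit - (n - 1))
```

Growing 10 TB a month from a 10 TB start against 100 TB of raw capacity, the 80% line (80 TB) lands four months past the last sample.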
Select the Tokyo / Gold-Fast block on the matrix. Notice that the graph at the right changes to reflect the storage consumption trending and forecasting for that tier at that datacenter.
Reset the Storage Capacity Trend chart by clicking the Reset Selection button next to the matrix chart. The chart on the right will show the trending and forecasting for the entire enterprise.
Scroll down the dashboard to view the list of reports on the right side. Each of the dashboards has a list of related reports on the lower right-hand side. You can select from any number of different reports to get the detailed information that you need.
The dashboard also contains some dial graphics showing you storage consumption and capacity in your enterprise and in each datacenter.
Continuing down the left side of the dashboard, these charts show you business-level storage consumption by business entities. Here we can drill down to see usage by tenant, line of business, business unit, and project.
Right-click in this graphic and you can drill down to view storage usage by line of business; drill again to view it by business unit and by project.
As you can see, you get really detailed information on consumption by your business entities, from tenant, LOB, business unit, and project to application, very quickly.
6.3 TIER DASHBOARD
Let's take a look at the Tiering Dashboard that we opened a few minutes ago, by selecting it from the tabs at the bottom of your Windows screen.
Note: Your data may vary depending on the database used for this demo.
This dashboard gives us a different perspective on how storage is growing and how it is being used. As you see, it looks like the Gold tier has remained relatively stable over the past few months, while Gold-Fast storage, which is more expensive, has grown considerably over the past couple of months. This tells you how your tiering initiatives are progressing. Bronze, which has hardly grown at all, could be an indication that we're spending too much money on storage. You might want to review your storage usage using OnCommand Insight to see how the storage is being consumed and by whom.
Scroll down. Let's look a little closer. OnCommand Insight shows storage usage by business unit, application, and tier. This enables you to understand how storage is being used. You can also view it by datacenter, tier, and business entity.
As we did in the last report, you can right-click and drill down to look at consumption by tenant, line of business, business unit, project, and application. You can understand how your data is being consumed at multiple levels and from multiple aspects.
Select the "Return" icon at the top right of the Tiering Dashboard to return to the folder.
There is a new storage tier report located in the Storage and Storage Pool Datamart. Let's take a quick look at it.
Use the breadcrumbs to navigate back to the Capacity 6.3 folder.
Then select the Storage and Storage Pool Capacity Datamart and its Reports folder.
Next, select the Storage Capacity By Tier report to view the report below. This report shows your capacity by tier and how it trends over time. It also provides detail and summary sections at the bottom showing each array, its tiers, how much capacity is used, and the percentages (lots of information in a single report).
6.4 ACCOUNTABILITY AND COST AWARENESS
The standard data warehouse chargeback reports are more about accountability than full chargeback. We'll show you this now. We'll also show you how to create your own powerful "custom" chargeback/showback reports using Business Insight Advanced later in this lab.
Select Public Folders in the breadcrumbs at the top left of the data warehouse window.
Select the Chargeback Datamart.
In the Chargeback Datamart, select the Reports folder to access various reports that show capacity and accountability.
Select Capacity Accountability by Business Entity and Service Level Detail. Here you have the option to customize this report to your needs by selecting service levels, resource types, applications, and host and storage names. You also have the option of selecting the business entity, by using the dropdown to select any or all of the business entities and projects.
Select all in each category to give you a good representation of the in-depth reporting. Then click Finish.
The report provides a very detailed view of capacity utilization by business entity and application, the host it's running on, the storage array, the volume, and the actual provisioned and used storage. The report is grouped by business unit as well as application; this gives you a good representation of who's using what storage.
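That accountability data is also the raw material for the custom showback/chargeback reports covered later in this lab. As a minimal sketch of the arithmetic involved (the tier rates and record layout here are hypothetical, not product defaults):

```python
# Hypothetical per-GB monthly rates by tier (illustration only)
RATES = {"Gold": 3.00, "Silver": 1.50, "Bronze": 0.50}

def showback(rows):
    """Roll used capacity up to a monthly cost per business unit.
    rows: iterable of (business_unit, tier, used_gb) records."""
    costs = {}
    for unit, tier, used_gb in rows:
        costs[unit] = costs.get(unit, 0.0) + used_gb * RATES[tier]
    return costs
```

A unit with 100 GB of Gold and 200 GB of Bronze would be billed 400 under these made-up rates; the real report would pull the rows from the Chargeback Datamart and use your own rate card.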
Note the scroll bar for scrolling on page 1; you can also use the Page Up/Page Down links at the bottom to go to page 2, and so on.
Select the Return icon in the upper right to return to the folder of reports.
6.5 UNCHARGED STORAGE
You can also generate reports that help you understand what storage is NOT being accounted for.
Select "Capacity Accountability by Uncharged Capacity per Internal Volume." This provides you with a complete listing, by array and volume, of how much storage is not being charged or accounted for.
You get FULL accountability of which storage is being accounted for and which storage is NOT, across the entire enterprise, regardless of storage vendor.
6.6 IOPS VS CAPACITY REPORTING IN THE DATA WAREHOUSE
Let's look at performance versus capacity, and at orphaned storage by last access. This adds another dimension to how your storage is being used.
Open the Performance Datamart (hint: use the breadcrumbs to select Public Folders, then select the Performance Datamart).
Select the Internal Volume Daily Performance folder. This provides a really good pictorial view of how your storage is being used.
Select Reports and select Allocated used internal volume Count by IOPS Ranges. This provides a capacity-versus-IOPS report, which is very interesting.
Select the Last Year time period.
Select all storage models and tiers and click Finish.
Selecting all arrays and all tiers gives you a full view of how your storage is being used (or not being used…).
Look at the results. The report shows you all the storage that has (or has not) been accessed over the past year.
As you can see from the first bar, there are over 7,300 volumes that have not been accessed in the past year. In terms of size, over 3.4 PB has had zero access in the past year. (Note: this is actual customer data, but the names have been sanitized.)
You can see how impactful this is: over 3.4 PB of storage has had zero use for a year. This information enables you to make business decisions about that storage and better understand how it's being used, so you can reclaim and repurpose some of it. (Talk about ROI!)
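To see what this report's bucketing amounts to, here is a minimal Python sketch: group volumes by their IOPS over the period and total the capacity that saw zero access. All data below is invented for illustration; the real numbers come from the Performance Datamart.

```python
# Bucket volumes by total IOPS over the reporting period to find idle capacity.
# The volume list is illustrative only, not real Datamart output.
volumes = [
    ("vol_a", 500, 0),      # (name, capacity in GB, total IOPS last year)
    ("vol_b", 1200, 0),
    ("vol_c", 250, 15000),
    ("vol_d", 800, 250),
]

buckets = {"0 (never accessed)": [], "1-999": [], "1000+": []}
for name, capacity_gb, iops in volumes:
    if iops == 0:
        buckets["0 (never accessed)"].append((name, capacity_gb))
    elif iops < 1000:
        buckets["1-999"].append((name, capacity_gb))
    else:
        buckets["1000+"].append((name, capacity_gb))

# Reclamation candidates: capacity that saw zero IOPS all year.
idle_gb = sum(cap for _, cap in buckets["0 (never accessed)"])
print(idle_gb)  # 1700
```

In the sanitized customer data above, this same kind of roll-up is what turns 7,300+ idle volumes into petabytes of reclaimable capacity.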
6.7 DIGGING INTO THE DETAILS
These charts are really nice, but you need detail to work effectively on identification and recovery. OK, let's go look at the underlying details.
Go back to the Volume Daily Performance 6.3 folder and drill down to Reports (hint: it's in the Performance Datamart).
Select the Array Performance report. This gives you a complete breakdown of the performance for all storage, from the arrays all the way down to the volumes.
Select one year and set the IOPS parameter you want to filter on (I usually start at the default).
This report starts with the Orphan Summary.
Page down to view the storage array summary.
As you can see, this is pretty high-level. It shows the total raw and allocated capacity in each storage device versus the total IOPS and the max IOPS actually used over the past year. This tells a very compelling story, but it's still high level.
Page down a few pages to reach the bottom of this section. You will see a glossary of terms explaining the column headings.
Now continue to page down to the host tables. These show you the hostname, the raw and allocated capacity by host, and the IOPS accessed over the past year. This is more detail than the storage tables above.
Page down past the host tables and you get the orphaned-volumes perspective. Here is a great deal of detail that you can use. These are all the volumes that have not been accessed in a full year. For each, the report shows you the array name, volume capacities, and hostname, as well as the applications and tiers.
Page down to the "Volume by IOPS" tables (they may be several pages down). These show you the storage array, volume, capacity, host, application, tier, and the max and total IOPS. All in all, it's a pretty well-rounded report that shows you actual usage (or lack thereof) so you can go reclaim the storage that is not used.
6.8 VM CAPACITY REPORTING
There are several different reports in the VM Capacity Datamart.
Navigate to the VM Capacity 6.3 Datamart.
As you can see, there are several reports built in here already.
Select VM Capacity 6.3 and then navigate into the Reports folder.
Select VM Capacity Summary.
Select all so we see the VM capacity across the entire enterprise (spanning multiple vCenters).
The results show all the VMs, their capacity, the datastore, the actual capacity, the VM names, the provisioned storage, and the commit ratio of each VM across your entire environment. (NOTE: I paged down to the bottom so you can see the total storage and commitment across your whole enterprise, plus a glossary of terms.)
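For context, the commit ratio in this report reflects thin provisioning: how much capacity has been promised to VMs versus what the datastore actually has. A minimal sketch with invented figures (this is the common definition, not code from the product):

```python
# Commit ratio = capacity provisioned to VMs / actual datastore capacity.
# A ratio above 1.0 means the datastore is overcommitted (thin provisioned).
# All figures below are illustrative.
datastore_capacity_gb = 2000
vm_provisioned_gb = [500, 800, 1200]  # per-VM provisioned capacity

commit_ratio = sum(vm_provisioned_gb) / datastore_capacity_gb
print(commit_ratio)  # 1.25 -> 2500 GB promised against 2000 GB real
```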
Select the "return" button in the upper right corner of the report (it looks like a left-turn arrow).
Next, select the Inactive VMs report to show VMs that have not been accessed in a defined period of time (default: 60 days).
Set this time threshold and click Finish.
This is an excellent report showing which VMs are powered off, how long they have been powered off, and how much capacity each one is holding that nobody else can use. It gives you all the details, including the datacenter, VM OS, ESX host, cluster, and VMDK, and how long it's been powered off. Armed with this information, you can go recover these VMs and reclaim the storage.
7 CREATE AD-HOC REPORT
Let's show you how easy it is to create custom reports in the data warehouse.
7.1 HOW TO CREATE A CUSTOM SHOWBACK/CHARGEBACK REPORT USING BUSINESS INSIGHT ADVANCED
Below is a great example of the custom chargeback or showback report that you will create. It shows usage by business entity and application, including the variable cost of each VM based on configuration, fixed overhead, and storage usage.
STEPS TO CREATE THIS REPORT
Watch a video on how to create this report. Note: you need a user name and password for this community; to obtain them, click the Become a Member link.
The OnCommand Insight Reporting Portal is accessed through http://<reporting-server>:8080/reporting.
Enter your user name and password credentials.
From the Welcome page, select My home.
From the Launch menu (at the top right corner of the OnCommand Insight Reporting portal), select Business Insight Advanced.
From the list of packages that appears, click the Capacity <version> folder and then click VM Capacity <version>.
Create a new report by selecting New from the dropdown in the upper left corner, or Create New if you are on the Business Insight Advanced landing page.
From the pre-defined report layouts in the New pop-up, choose List and click OK.
In the lower right pane, select the Source tab and expand the Advanced Data Mart from the VM Capacity package.
From the Advanced Data Mart, expand Business Entity Hierarchy and Business Entity, then drag Tenant onto the report work area.
Collapse the Advanced Data Mart and expand the Simple Data Mart.
From the Simple Data Mart, drag Application onto the report work area to the right of the Tenant column. (TIP: make sure you drop it on the blinking gray BAR to the right of the previous column, or you will get an error.)
Now we are going to drag multiple columns to the palette to save time building the report.
We will be reporting on the total number of processors (cores) and the memory configured for each VM, so let's grab the following elements from the VM Dimension under the Advanced Data Mart.
From the Advanced Data Mart, expand VM Dimension.
Select the next columns IN THE FOLLOWING ORDER:
From Advanced Data Mart > VM Dimension, hold the Control key and select the following columns (in order):
o VM Name
o Processors
o Memory
Click and drag VM Name onto the report work area to the right of the Application column. NOTE: all the columns should follow in the order you selected them, similar to the screenshot below (your data will differ, but the columns will be the same).
Now let's bring capacity information onto the report.
From the Simple Data Mart, hold the Control key and select the following columns (in order):
o Tier
o Tier Cost
o Provisioned Capacity (GB)
Click and drag the Tier column onto the report work area to the right of the Application column. NOTE: all the columns should follow in the order you selected them, similar to the screenshot below (your data will differ, but the columns will be the same).
To create a summary of cost per GB, hold the Control key and select Tier Cost and Provisioned Capacity (GB).
Then right-click the Provisioned Capacity column, select Calculate, and select the multiplication calculation.
Business Insight Advanced has created a new column for you, completed the calculations, and put it in the report.
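The Calculate step simply multiplies the two selected columns row by row. The same arithmetic as a quick Python sketch (rows invented for illustration):

```python
# Each row's storage cost = tier cost ($ per GB) * provisioned capacity (GB).
# Sample rows are made up, not real report data.
rows = [
    {"vm": "vm01", "tier_cost": 0.50, "provisioned_gb": 100},
    {"vm": "vm02", "tier_cost": 1.25, "provisioned_gb": 40},
]
for row in rows:
    row["storage_cost"] = row["tier_cost"] * row["provisioned_gb"]

print([r["storage_cost"] for r in rows])  # [50.0, 50.0]
```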
Next, let's format and re-title the column.
Right-click the new column header and select Show Properties.
In the lower right corner, scroll down to the bottom of the Properties box and select the ellipsis on the Data Item Name box. Change the name to Storage Cost and click OK.
Note the column heading is now Storage Cost.
Now select one of the numeric values in that column and select the Data Format ellipsis from the Properties box in the lower right corner.
From the Data Format dialog box, select Currency from the Format Type dropdown.
As you can see from the Properties dialog box, there are lots of options for formatting the currency numbers in this column. The default is USD, so let's just click OK to accept it. You will see the column reformat to USD.
Here is our current report. Let's filter out storage that is NOT being charged.
Select any BLANK cell in the Tier Cost column and click the filter icon in the top toolbar.
Select Exclude Null.
Notice that all the rows whose tiers had NO cost associated with them are removed, leaving you with only the storage that has charges associated with it. (TIP: in another report you can reverse the logic and show only storage that is NOT being charged.)
You can format the Tier Cost column as USD currency as well, if you want.
OK, that was easy, but it's not complete. Let's add other cost factors into your chargeback report: the cost of VM service levels by configuration, and fixed overhead costs used by each application.
ADDING VARIABLE COSTS PER VM TO YOUR CHARGEBACK REPORT
Let's say the customer wants to charge per VM based on the number of CPUs and the memory it's configured with. To do that, we first need to create a VM Service Level based on the number of CPUs and the memory configured for each VM, and then allocate a cost per Service Level.
To create a VM Service Level, we are going to drop in a small conditional expression that builds the Service Levels per VM. This is an easy example of the flexibility of Business Insight Advanced in creating reports. (DON'T panic: you can skip the conditional expression and just put a fixed cost on each VM if you want. See the overhead example later on. But humor me here in this lab.)
Select the Tier column to mark where we want to insert the new columns.
Select the Toolbox tab in the lower right corner and double-click the Query Calculation icon.
In the Create Calculation dialog box, name the column VM Service Level, select Other Expression, and click OK.
In the Data Item Expression dialog box, copy and paste the following VM Service Level conditional expression into the Expression Definition box and select OK. (Note: if you are remoted into the OnCommand Insight server, you may have to create a text document on the OnCommand server desktop to paste this into before pasting it into the Expression box.)
Below is an example of the conditional expression that gives you the if-else logic for VM Service Level:
IF([Processors] = 2 AND [Memory] < 2049)
THEN ('Bronze')
ELSE (IF([Processors] = 2 AND [Memory] < 4097)
THEN ('Bronze_Platinum')
ELSE IF([Processors] = 4 AND [Memory] < 8193)
THEN ('Silver')
ELSE IF([Processors] = 4 AND [Memory] > 8193)
THEN ('Silver_Platinum')
ELSE IF([Processors] = 6 AND [Memory] > 8191)
THEN ('Gold')
ELSE IF([Processors] = 8 AND [Memory] > 16383)
THEN ('Gold_Platinum')
ELSE ('tbd'))
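For reference, here is the same bucketing logic expressed as a Python sketch (my translation, not part of the lab). It makes one quirk of the expression easy to see: the boundaries leave gaps, for example 4 CPUs with exactly 8193 MB of memory falls through to "tbd".

```python
def vm_service_level(processors: int, memory_mb: int) -> str:
    """Python mirror of the lab's VM Service Level conditional expression."""
    if processors == 2 and memory_mb < 2049:
        return "Bronze"
    elif processors == 2 and memory_mb < 4097:
        return "Bronze_Platinum"
    elif processors == 4 and memory_mb < 8193:
        return "Silver"
    elif processors == 4 and memory_mb > 8193:
        return "Silver_Platinum"
    elif processors == 6 and memory_mb > 8191:
        return "Gold"
    elif processors == 8 and memory_mb > 16383:
        return "Gold_Platinum"
    return "tbd"  # anything the expression doesn't match

print(vm_service_level(2, 1024))   # Bronze
print(vm_service_level(4, 16384))  # Silver_Platinum
print(vm_service_level(4, 8193))   # tbd (a gap in the original boundaries)
```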
Business Insight Advanced will validate the conditional expression (nice to know if you got it right), create the column called VM Service Level, and populate it based on the query. (If you get an error, your conditional expression probably has a syntax or other error.)
You will see a new column called VM Service Level with the various Service Levels for each VM, based on the number of CPUs and the memory each has. (At this point there may be duplicates in the list, but not to worry; we are not finished formatting or grouping the report.)
Next, let's add a column that calculates the cost per VM based on the Service Levels we just established.
Select the Toolbox tab in the lower right corner and double-click the Query Calculation icon.
In the Create Calculation dialog box, name the column Cost Per VM, select Other Expression, and click OK.
In the Data Item Expression dialog box, paste the conditional expression for Cost of VM (below) into the Expression Definition box and select OK.
Example of the conditional expression for Cost per VM:
IF([VM Service Level] = 'Bronze') THEN (10) ELSE (IF([VM Service Level] = 'Bronze_Platinum') THEN (15) ELSE IF([VM Service Level] = 'Silver') THEN (20) ELSE IF([VM Service Level] = 'Silver_Platinum') THEN (25) ELSE IF([VM Service Level] = 'Gold') THEN (40) ELSE IF([VM Service Level] = 'Gold_Platinum') THEN (55) ELSE (30))
You will see a new column called Cost Per VM with variable costs for each VM based on its Service Level.
Next, format the data in the Cost Per VM column as USD currency, as you did above.
ADDING FIXED OVERHEAD COSTS TO YOUR CHARGEBACK REPORT
Let's say the customer has determined that the total cost of overhead (including items like heat/AC, floor space, power, rent, operations personnel, helpdesk, and so on) is $24 per VM. Let's create a column called Cost of Overhead and apply this fixed cost. (Note: you can do this for any fixed cost as well, rather than use SQL.)
Select the Toolbox tab in the lower right corner and double-click the Query Calculation icon, as above.
In the Create Calculation dialog box, name the column Cost of Overhead, select Other Expression, and click OK.
In the Data Item Expression dialog box, enter a cost of 24 in the Expression Definition box and select OK.
You will see a new column called Cost of Overhead with 24 for each VM. (Note: at this point there may be duplicates in the list, but not to worry; we are not finished formatting or grouping the report.)
Next, format the data in the Cost of Overhead column as USD currency, as you did above. Then drag the column header and drop it to the right of the Storage Cost column, as shown below.
Subtotaling, naming, and saving the report
Now that we have the cost per VM, the overhead, and the cost of storage usage by Tenant, Application, and VM, let's sum the total costs and finish formatting the report by Tenant and Application.
Hold the Control key down and select a numeric cell in each of the Cost per VM, Storage Cost, and Cost of Overhead columns. Right-click one of the numeric cells, select Calculate, and choose the add function for the three columns.
This creates a new column called "Cost per VM + Storage Cost + Cost of Overhead" and calculates each row.
Now format the column as USD currency and retitle it "Total Cost of Services".
Name the report "Total Storage, VM, and Overhead Cost by Tenant and Application Chargeback (Showback)" by double-clicking the title area.
Now save it to the Customer Report folder using the same name
FORMATTING AND GROUPING THE REPORT BY APPLICATION AND TENANT
We are not done yet. Now we need to format the report by grouping, subtotaling, and totaling by Tenant and Application.
Hold the Control key down and select the Cost per VM, Provisioned Capacity, Storage Cost, Cost of Overhead, and Total Cost of Services columns.
Select the Total icon from the Summary dropdown.
If you page down to the bottom of the report, you will see total columns. We'll clean up the summary rows in a minute.
Let's group the report by Tenant and Application.
Hold the Control key down and select the Tenant and Application columns.
Select the Grouping icon from the top toolbar.
CLEANING UP THE REPORT AND RUNNING IT
To clean up the report, right-click and delete the summary ROWS (not columns).
Then go to the bottom of the report, hold the Control key, select both summary rows, right-click, and delete them. (Leave the TOTAL rows.)
Save the report
Now let's run the report to see how it looks.
Select the Run icon from the toolbar and run the report as HTML. (Note the other formats you can run it in, if you want.)
The report will look like this in its final format. I've paged down in the report below to show you the subtotals; you can page to the bottom to see the totals by company and the total of all resources charged.
These reports are extremely flexible. Notice the drill-down link in the Tenant column (pictured above in the red circle). If you click the LINK, you will drill down from Tenant to Line of Business, then to Business Unit, and so on. If you right-click the link, you can drill up as well.
You can now schedule this report to run and be distributed in various formats, like any other OnCommand Insight Data Warehouse report.
Remember: now that you have created this report, every time you run it, it will provide the latest usage information. You can automate it by scheduling it to run and email recipients. Lots of flexibility…
7.2 OTHER OPTIONS FOR AD-HOC REPORTS USING QUERY STUDIO
You can also create simple ad-hoc reports using Query Studio. A very simple example is shown here.
Log on to the data warehouse as Admin/admin123 (you must be logged on as Admin to use Query Studio).
From Public Folders, select the Chargeback Datamart.
Select the Launch menu in the upper right corner of the view and select Query Studio.
The Datamart is split into a "simple Datamart" and an "advanced Datamart." The simple DM contains the elements most users use for reports; the advanced DM contains all the facts and dimensions for all the elements. At this point we'll create this report using the simple DM to show you how easy it is.
Expand the Simple DM and do the following:
Click and drag Business Unit to the palette.
Click and drag the Application element to the palette. You will see the applications line up with their proper Business Units.
Click and drag Tier to the palette to organize the storage usage by tier.
Click and drag "Provisioned raw by GB." (You can select megabytes or terabytes as well as gigabytes; I've selected GB because this is from a volume and application perspective.)
To calculate cost, we need to add the "Tier cost" to the report.
Click and drag the "Tier cost" element over and place it between the Provisioned Raw and Tier columns.
To filter out any storage without a tier cost associated, right-click the heading of the Tier Cost column and select Filter (see below for reference):
o Select "Show only the following" (default)
o Select "Missing values" to expand it
o Select "Leave out missing values"
o Select OK
See the results below.
Now let's calculate the total cost of usage by GB per application.
Hold the Control key and highlight the "Provisioned Capacity" and Tier Cost columns until they show yellow.
Select the green Calculation icon at the top of the edit icons, or right-click the columns and select "Calculate."
In the calculation window, select multiplication, title the new column "Cost for Storage," and click Insert. It creates a new column and completes the calculation.
To format the column, right-click the new column and select Format Data.
Select currency, the number of decimal places (usually 0), and the 1000s separator, then click OK. See how the column is formatted now.
Double-click the title on the report and re-title it "Chargeback by Application and BU."
You don't really need the Tier Cost column now, so you can delete it by right-clicking the column and selecting Delete.
This is a good raw report, but now let's make it more useful.
To group storage cost by Business Unit and Application:
Select the Business Unit column (it turns yellow) and select the Group By icon on the top line.
You will see the report reformat itself into cost by application by business unit.
Click the "Save As" icon and save the report to the public folders.
Further Editing
You can go back and further edit the report, like this:
Let's filter out all the N/As in the BU and Application columns. You have to do this one column at a time.
Right-click the BU column and select Filter.
In the filter dialog window, select "Do not show the following (NOT)" from the "Condition" dropdown.
Select N/A and click OK.
Do the same for the Application column.
Then save the report again.
As you can see, you now have a better-quality report.
To exit Query Studio, click the "Return" icon at the top right corner of the screen.
8 SCHEDULING REPORTS FOR DISTRIBUTION
OK, now that the report is saved, let's schedule it for running and distribution. You can schedule all the built-in reports in OnCommand Insight.
Navigate to the chargeback report we just created (you should be looking at where you saved it…).
Select the Schedule icon on the right-hand side, where you can set the properties.
As you can see on the right, you can schedule start and finish dates.
You can also send this report just once by clicking Disable.
Set the schedule options for weekly, daily, monthly, and so on. Schedule this report to run and send it to yourself at 3 p.m. every Tuesday until Feb 1, 2012. As you can see, you can schedule biweekly, several times a week, or several times a day, or you can set it up by month, by year, and even by trigger. Lots of options.
There are a lot of options for the report format. The default is HTML, but we can override that by choosing PDF, Excel, XML, CSV, and so on.
For delivery, we can email the report, save it, or print it to a specific printer. You can send the report via e-mail to users, distribution lists, and so on. We can include a link to the report
or attach it directly to the email as well. (NOTE: recipients must be able to log into the OnCommand DWH to access the link.)
When you are done, click OK, and the schedule is set.
9 ENDING COMMENTS AND FEEDBACK
I hope this lab was of value to you. Your feedback is important to the quality of this lab document. Please provide feedback to Dave Collins at davec@netapp.com.
NetApp provides no representations or warranties regarding the accuracy, reliability, or serviceability of any information or recommendations provided in this publication, or with respect to any results that may be obtained by the use of the information or observance of any recommendations provided herein. The information in this document is distributed AS IS, and the use of this information or the implementation of any recommendations or techniques herein is a customer's responsibility and depends on the customer's ability to evaluate and integrate them into the customer's operational environment. This document and the information contained herein may be used solely in connection with the NetApp products discussed in this document.
© 2012 NetApp, Inc. All rights reserved. No portions of this document may be reproduced without prior written consent of NetApp, Inc. Specifications are subject to change without notice. NetApp, the NetApp logo, and Go further, faster, are trademarks or registered trademarks of NetApp, Inc. in the United States and/or other countries. All other brands or products are trademarks or registered trademarks of their respective holders and should be treated as such. TR-XXX-XX
There are a couple of ways you can analyze the violations. You can select the Violation micro view from the bottom icons, or you can simply right-click the red path for kc_ordev7 in the main view and select "Analyze Violation." This opens the root-cause analysis screen below.
o Expand the violation and change information in the lower pane. What is the cause of this violation?
The violation tells you that the number of ports changed from 2 to 1, which went against the redundancy policy and caused the violation. If you look down at the last changes that occurred, you can see that masking was removed, which denied access from that host through its HBA
to the FA port on the array and all the way to the volume. To fix the violation, the administrator needs to reverse those changes, and the violation automatically resolves itself.
Close the Analyze Violation screen
VIRTUAL MACHINES AND DATA STORE
OnCommand Insight gives you a complete inventory of the VM environment, discovered through a data source configured to talk with Virtual Center. Details include all internal configurations and elements of the VMs, the ESX hosts, and the technologies, as well as all the information to correlate the path from a VM to its storage and details about how that storage is configured down to the disk, including performance (if you have the OnCommand Insight Perform license installed). We'll discuss performance later in this demo. Select Datastore from the Inventory menu.
Scroll down and select DS-30 in the main view.
Display the Topology and Virtual Machine micro views to show which VMs are using datastore DS-30.
Toggle through the micro view icons below to show the details of the VMs, VMDK capacities, the storage, the backend storage in the arrays, and resources. You can see full end-to-end visibility from the host, through a virtualized storage environment, to the backend storage.
Note: you can also select Hosts and Virtual Machines from the Inventory menu and cycle through the micro views, noting the end-to-end visibility.
Don't forget to use the micro view icons.
3 ASSURANCE
3.1 APPLYING POLICIES TO MONITOR CONFIGURATION AND PERFORMANCE
Now that we've gathered the inventory and pulled all this information into the database, let's start to apply policies so we can monitor and get alerts on violations. We'll talk about setting global policies, changes, and the new violations browser. We'll show how we can analyze performance, and talk about some port balance violations and disk utilization violations. Initially we set global policies within OnCommand Insight so it can monitor the environment and alert us when something falls outside those policies. There are several policies available.
Select Policy from the top menu bar.
Select Global Policies.
What thresholds can you set from here?
Select Violation Severity from the left menu of the Global Policy window.
What severities can you set for each threshold?
Select Violation Notification. What are the possible violation notification options?
3.2 FIBRE CHANNEL POLICY SETTINGS
We set Fibre Channel policies to determine and keep track of path redundancy. We can set options such as no SPOF (single point of failure) or fully redundant. We can set the minimum number of ports on the host and the storage, and the maximum number of switch hops. We can set exceptions for different volume types that would not necessarily require redundancy, like BCVs, R1, and R2. We can also set policy exceptions for smaller volumes that wouldn't have redundancy, like EMC gatekeepers.
From the Policy menu on the menu bar, select Fibre Channel Policy.
What type of redundancy can you set from here?
What is the default number of ports?
Volume Type Exceptions: what volume exemptions can you select?
You can set redundancy policies on physical storage behind a virtualizer (Backend Path).
3.3 VIOLATIONS BROWSER
Let's take a look at the Violations Browser. The Violations Browser allows us to see the impact of violations on your business elements in one place and helps us manage the violations against all of those global policies you saw above.
From the Assurance menu on the left, select Violations Browser. (Note: at this point you might want to increase your viewing real estate by closing the navigation pane on the left. To close it, go to Tools > OnCommand Insight Settings and uncheck the Navigation Pane box. You can use the same process to turn it back on later.)
Back in the Violations Browser, expand the All Violations explorer to reveal the violation categories. This shows violations like datastore latency, disk utilization, volume and internal volume IOPS and response times, port balance violations, and so on. These should look familiar from the global policies we just reviewed a few minutes ago.
As you can see, the violations all pile up here: over 12,000 of them. (NOTE: don't let this scare you. Usually most violations are caused by events that create multiple violations per event; you fix one event and a bunch of these go away.) We can see detail on each of these violations by performing the following.
Select the Show Violations Impact icon to view all the violations in context by business entity, application, host, virtual machine, datacenter, and so on.
Expand the Impacted Business Entity explorer and drill down to Earth Thermal Tracking.
Expand and select Disk Utilization.
Sort the Description column descending. Now you can see Element, Description, Severity, and Violation Type.
Select the top violation element, called Disk DISK-14 of Storage Virtualizer.
In the Impact Details micro view, toggle the Hosts, Virtual Machines, Applications and Business Entities, and Storage icons to view the details of the violations' impact. Here we see the impact on one application called City Limits, owned by one business entity (Green Corp > Alternate Energy > Geothermal > Earth Thermal Tracking), on one host. However, 10 virtual machines are affected by this violation from one array.
The chart in the Violation Event micro view shows the history and trending of the utilization on this one disk over time. From here we can analyze the performance details, as we'll see later in this demo. REVIEW:
What are the categories that show the impact of violations?
What business entity is impacted by these violations?
What is the utilization of this disk?
Which hosts are being affected by this violation?
Which VMs are being affected by the violation?
We'll do some troubleshooting using these violations and Analyze Performance later.
3.4 PORT BALANCE VIOLATIONS
Let's take a look at port balance violations. These are violations showing imbalance in SAN traffic from hosts, arrays, and switches. They are not performance-related violations.
Using either the navigation pane or the dropdown menu, open Assurance.
Select Port Balance Violations
Group by Type then Device
Expand Hosts and sort the Device column ascending
Select device Host nj_exch002 Note that this host has a balance index of 81 That means the difference in distribution of traffic (or the load) between the HBArsquos on this host Any index over 50 indicates significantly unbalanced ports on a device
Select the Switch Port Performance microview Note that over 88 of the traffic distribution is going across one HBA in only 11 of the traffic going across the other A failure on the heavily used HBA could choke that application This could indicate that port balancing software is not configured on this server not configured at all or not installed
Now collapse the Hosts and expand the Storage devices
Select various storage devices and view the traffic distribution in the switch port performance microview to understand the balance across the storage ports
These port balance violations provide valuable data on how your environment is configure and optimized It allows you to quickly determine where you need to optimize your configurations based on actual usage These are balance violations within each device not necessarily traffic related performance violations Wersquoll look at performance a bit later
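The balance index itself is computed internally by OnCommand Insight, but the underlying idea, scoring how far per-port traffic shares deviate from a perfectly even split, can be sketched in a few lines. This is only an illustration of that idea, not OCI's actual formula; the port names and traffic figures below are hypothetical.

```python
def balance_index(port_traffic):
    """Rough imbalance score: 0 = perfectly even, 100 = all traffic on one port.

    Illustrative metric only, not OnCommand Insight's exact formula.
    """
    total = sum(port_traffic.values())
    if total == 0 or len(port_traffic) < 2:
        return 0.0
    even_share = 100.0 / len(port_traffic)
    shares = [100.0 * t / total for t in port_traffic.values()]
    # Sum of deviations from an even split, scaled so one-port-takes-all -> 100.
    deviation = sum(abs(s - even_share) for s in shares) / 2
    max_deviation = 100.0 - even_share
    return 100.0 * deviation / max_deviation

# Hypothetical host resembling nj_exch002: roughly 88/11 split across two HBAs.
print(round(balance_index({"hba0": 88.0, "hba1": 11.0}), 1))  # well over 50
print(round(balance_index({"hba0": 50.0, "hba1": 50.0}), 1))  # -> 0.0
```

With an 88/11 split, a metric like this lands well above the 50 mark the lab calls significantly unbalanced, while an even split scores 0.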
3.5 DISK UTILIZATION VIOLATIONS
We looked at Disk Utilization Violations in the Violations Browser a few minutes ago because we were alerted to a violation. But if you didn't come in through the alert, you can go directly to Disk Utilization Violations here and troubleshoot your issues much as we did in the Violations Browser. The difference is that the Violations Browser breaks down the violations by how they impact your business entities, applications, data centers, etc., so you can troubleshoot by business priorities, while Disk Utilization Violations lets you easily see your most critical utilization issues and troubleshoot from the disk utilization violation back to the hosts. You can also add columns to show applications and business entities if you want.
Let's take a look at how you can use Disk Utilization Violations to quickly identify and drill down to where your issue is.
Select Disk Utilization Violations from the Assurance menu.
Sort the Utilization column descending to bring your heaviest utilization to the top.
Here you see the utilization of each disk that exceeded the Disk Utilization threshold we set earlier in our Global Policy. For each violation you see the disk, the array, the hosts that access this disk, the date and time the violation occurred, and the percentage of utilization, as well as IOPS and throughput.
Select the disk with the highest utilization.
Now select the Volume Usage of Disk microview to get details on volume usage and performance.
Sort the Disk IOPS column descending and select the top Volume Usage of Disk entry. Here I see the volume with the highest usage, along with the disk throughput and percentage info by volume and host.
(Figure callouts: Exceeded Threshold; Host with highest IOPS; Switch Traffic OK.)
Select the Switch Port Performance microview. I see that my load appears to be balanced (Distribution column) across the storage ports, so that's most likely not a SAN or network configuration issue.
Since this disk did cause a utilization violation, I can identify possible host candidates causing the high utilization on the disk, OR I can see that the disk may have too many volumes carved from it and that I may need to spread that load out across more disks.
4 PERFORMANCE
OnCommand Insight provides performance information end to end. This is different from the violations we discussed above in that it provides pure performance information for volumes, internal volumes (FlexVol volumes), storage pools (aggregates), and disks. Performance also covers VMs, switches, ESX, Hyper-V, VMDKs, and datastores. From here we can troubleshoot congestion, contention, and bottlenecks; identify heavily used storage pools, volumes, disks, and SAN ports; spot possible candidates for physical-to-virtual host virtualization; and optimize your storage and tiering.
From the Navigation Pane or the dropdown menu, expand the Performance menu.
Here we see that OnCommand Insight collects and shows you storage performance, switch performance, datastore performance, VM performance, and even application performance as it relates to storage performance.
4.1 STORAGE PERFORMANCE
From the Performance menu, select Storage Performance.
Sort the Top Volume IOPS column descending (if not already done).
Select the array called Sym-0000500743… in the main view (it should be near the top).
Use the scroll bars to see more performance info in all windows.
Use the horizontal slide bar in the main view to see the volume response time and IOPS columns, as well as the disk utilization and IOPS columns (far right). Notice there is no internal volume performance information, because the EMC Symmetrix does not contain any. We'll look at a NetApp array shortly to see internal volume (FlexVol) performance.
Now, in the main view, notice the column called Volume Partial R/W. This indicates there are volumes on that array that are misaligned (we'll see more detail later).
Select the microviews at the bottom to show details of disk performance and volume performance. Which microviews did you open? (Hint: view below.) Notice this provides detailed throughput, IOPS, and response times at the volume and disk level.
Close the Disk Performance microview.
Select the column customize icon in the header of the Volume Performance microview.
Use the vertical scroll bar to view all the columns that can be added to or removed from this report.
Select the Partial R/W and Storage columns and click OK. This adds columns to the Volume Performance report showing you each volume on each array that is misaligned. (Note: you can get a complete list of all your misaligned volumes by selecting all the arrays in the main view above.) Additionally, you can group the volumes by storage to make it easier to view all the misaligned volumes across your entire enterprise by array. (See figure below.)
Now select the Symmetrix-FAST array and toggle the Chart microview on. Here you see OnCommand Insight showing EMC FAST auto-tiering. You can also see NetApp hybrid aggregates by selecting the corresponding array. We can also chart this performance over time.
(Figure callouts: Partial Read/Write indicates volume misalignment; notice the FAST-T volume performance.)
OnCommand Insight provides complete end-to-end performance views through virtualized storage. Let's take a look.
Select the storage array called Virtualizer from the main view. (Note: this is a V-Series machine, but OCI provides the same visibility through other virtualizers as well.)
Toggle on the Virtual Machine Performance microview.
Toggle the Backend Volume Performance and Datastore Performance microviews on.
Use the slide bars at the bottom of the microviews to see more of the performance columns in each view. (Whoops… there is a red mark in the Latency column. We will analyze this later.)
I know this is a bit busy, but I wanted to demonstrate that you can have deep performance visibility from the VM, through the datastore, to the frontend virtualizer array, and through to the backend volumes. You can also drill performance down to the disks on that backend array, and you can select the Switch Port Performance microview to visualize the performance on the SAN. So you can see very deep performance information from end to end. We will "analyze" this performance from end to end a bit later.
4.2 SAN PERFORMANCE (SWITCH PORT PERFORMANCE)
Switch performance is the actual performance on the SAN at the switch.
Select Switch Port Performance from the Performance menu. OnCommand Insight knows whether the switches are connected to arrays or hosts, so it shows you the performance in the context of the host or array rather than from the switch perspective.
Using the dropdown at the top of the table, group the main view by "Connected Device Type then Name".
Using the dropdown next to it, set the timeframe to "Last Week" and hit the refresh icon to the right.
Sort the Distribution column descending (arrow pointing down).
Expand Hosts.
Expand hosts ny_ora1 and exchange_ny1.
If you look at the Value and Distribution columns, you can see how the HBAs are balanced on these hosts. On host ny_ora1 you see three HBAs that are balanced very well. But looking at host exchange_ny1, you see that one of your two HBAs has over 95% of the traffic load on it while the other has less than 5%. So you can see an imbalance of the load across your HBAs. Perhaps the multipath software is not configured correctly, doesn't work, or is installed but not turned on. However, also note that one HBA is 4 Gb and the other is 2 Gb; the admins may have purposely configured this host's traffic to compensate for the slower HBA…
Select the Port Performance Distribution and Port Performance microviews to view this analysis over time.
Select exchange_ny1 from the main view above. View the performance and distribution of both HBAs. If you select one or the other, the performance and distribution charts change to show you the details of what you've selected. This behavior is the same throughout the OnCommand Insight GUI.
4.3 CANDIDATES FOR HOST VIRTUALIZATION BASED ON ACTUAL PERFORMANCE
This view of performance from the host SAN perspective shows you which are the busiest servers and which are candidates for virtualization.
Toggle off the two performance charts.
Collapse the expanded columns using the Collapse All Groups icon on top of the main view.
Expand Hosts again. Notice your busiest servers are at the top of the list.
Use the vertical slide bar to go to the bottom of the host list to see your least busy hosts. As you see here, there are many hosts near the bottom that have hardly any traffic. Note: if you have a virtualization project going on, you can very quickly isolate which physical hosts don't have much traffic to their applications and conduct your due diligence on those applications for possible relocation to a VM environment.
You can also use the same information here to choose which ESX hosts are good candidates to receive those applications, based on how much traffic they are generating on the SAN.
4.4 STORAGE ARRAY PERFORMANCE BASED ON SAN TRAFFIC
We use the same logic and methods to optimize the traffic across the storage ports of the arrays.
Collapse the Hosts section and expand the Storage section.
You can see the busiest arrays at the top.
Expand storage array XP 1024 to see the traffic flow through the storage ports. In this case, over 80% of the traffic is going across two of the six ports on the storage array. Not very well balanced. You can rebalance this traffic, OR, using this information, you can select a lesser-used storage port to provision your NEXT Tier 1 application to. This helps you intelligently provision and optimize your environment using real traffic analysis.
4.5 STORAGE TIERING AND ANALYSIS
Just as you did with the hosts, you can see the storage arrays that are NOT so busy.
Scroll to the bottom of the storage array list.
There are several expensive Tier 1 Symmetrix and other arrays at the bottom of this list that have very little traffic accessing them. These arrays may have lots of data on them, but nobody's using it. Armed with this information, you could take a look at the application data on these expensive Tier 1 arrays and move the applications to less expensive Tier 2 or Tier 3 arrays, OR archive it. Then you can decommission or repurpose these expensive arrays. (LOTS of ROI potential here.)
4.6 SWITCH ISL TRAFFIC VISIBILITY AND OPTIMIZATION
OnCommand Insight shows you only the ISLs (Inter-Switch Links) under the Switches category.
Collapse the Storage category in the main view and expand the Switch category.
Expand Switch 78 and hcis300.
As we saw with hosts and arrays, we can see exactly how well balanced the traffic is across the ISLs. Switch hcis300 is well balanced, but on Switch 78 we see that 90% of the traffic is going across one switch link and only 9% across the other. If this is a trunk, it is severely out of balance.
We also see which are the busiest and least busy switches. This allows us to balance out (optimize) our environment as well as weed out the least busy switches.
4.7 VIRTUAL MACHINE AND DATA STORE PERFORMANCE: TROUBLESHOOTING END-TO-END PERFORMANCE ISSUES USING "ANALYZE PERFORMANCE"
Let's put all this performance information to good use.
USE CASE: I may have gotten a call from a user complaining that the application on VM-70 is running slow, or I may have received an alert from a threshold being breached. Let's troubleshoot the problem.
Select Virtual Machine Performance.
Select Custom from the "Timeframe" dropdown menu next to the grouping menu. Enter the dates January 1, 2012 through now.
Then hit the green recycle (refresh) button next to the dropdown.
Sort the VM Disk Top Latency column descending to get the longest latency at the top.
Here we see that, in fact, VM-70 does not appear to have any performance issues, but we do see very high CPU, memory, and datastore latency on VM-60 and VM-61.
Look at column 2. The common factor between VM-70 (the user complaint) and VM-60 is DS-30.
Open the Datastore Performance microview to validate the high latency.
Right-click VM-60 and select Analyze Performance.
This opens an analysis of everything associated with VM-60 and DS-30.
See the tabs across the top of the window. Each of these tabs provides in-depth visibility into performance within its category.
Selecting the Disk tab, I see that although I have a few high "top" utilization values, overall utilization and IOPS are relatively low, so I can rule out a hot disk issue.
Select the Volumes tab and the Internal Volumes tab. I see there are some relatively high top response times but still very low IOPS, which tells me other factors are affecting the response time and the slowness of the application on VM-70.
Select Backend Volumes. We see the storage is virtualized, and we can see the performance on the backend volumes here. I see some possibly higher IOPS, but still no glaring issues in performance.
To make sure I don't have a SAN problem, I select the Switch Performance tab. It shows an imbalance between the 2 Gb HBAs on ESX1 (where VM-60 and VM-70 reside) and a potential optimization or outage issue, but no gridlock.
Select the Hosts tab. This tab shows me that host ESX1 is the same host that holds VM-60 and VM-70. VM-60 appears to be causing very high CPU and memory usage, which is causing contention with time sharing during disk access, thus creating high disk latency. But the disk IOPS are still very low.
We can deduce that VM-60 is probably not sized right for the application that is driving it hard. This is probably what's causing the disk latency issue, so the chances of an actual disk issue are slim.
4.8 VM PERFORMANCE
VM Performance helps you troubleshoot the same scenarios. Here you can understand what's going on in the whole environment.
Select Virtual Machine Performance.
Sort the Top Disk Latency column in descending order so the largest latency rises to the top. In this case, VM-61 is chewing up a lot of memory and a lot of CPU time but using low disk IOPS. The VM appears to be causing the latency issues.
Select VM-61. You can open a microview and see the VMDK performance as well.
Add the Chart microview.
You can also break it out by volume performance and datastore performance, giving you a more holistic picture of the environment and helping you troubleshoot to resolution.
The takeaway is that you can troubleshoot performance issues from many different angles and go in many different directions to quickly narrow down the problem.
4.9 APPLICATION AND HOST PERFORMANCE
You can add your applications and hosts to any of these performance views to help you understand how your performance is affecting your applications. That is what matters to the business customer. You can drill down and understand where the performance issue is, with visibility from the application all the way to the disks.
Scroll down to ESX1.
Use the horizontal slide bars in the main view and microviews to see the performance info.
OnCommand Insight shows you performance from the host perspective all the way back to the storage, but remember that it does not have agents on the hosts, so it cannot show you the details of performance on the host itself.
Review questions:
What is the value of Analyze Performance?
In which areas can we view performance metrics under Analyze Performance?
5 PLANNING TOOLS
5.1 TASK AND ACTION PLANNING AND VALIDATION
OnCommand Insight has two planning tools to help you plan, validate, and monitor changes in your environment. One is a change management tool, and the other is a migration tool for switches only.
The change management (or what-if) tool helps you create tasks, and actions within those tasks, using a wizard. It helps you logically lay out the changes that you need to make, test and validate those changes before you make them, and monitor the progress of changes as you make them. This significantly reduces your risk when making changes, because you can pretest them before you make any actual changes in your environment.
NOTE: Remember, OnCommand Insight is a READ-ONLY tool, so it does not perform any active tasks. Use it in the planning, validation, and execution monitoring of your change management.
Select the Planning menu.
Select Plans to access the tool.
Select the task ID oadmin 01082007 – Replace HBA Clearcase1.
Notice the Actions list for the task. These actions are generated by you to help you logically and accurately list out the tasks.
To add more actions, simply right-click in the action area and select "Add Action".
In the New Action window, scroll down and select the action you want to perform. You can add a description and other parameters, then select OK.
Then you can pre-validate the actions to ensure you know the results of each action BEFORE you actually perform the task. To do this, right-click the task and select Validate Task.
As you see below, OnCommand Insight validates each action against the current configuration in your environment and shows what has been completed correctly (GREEN CHECKMARK), what is not completed (BLANK BOX), and what was not completed correctly (RED X).
When you build the action list, OnCommand Insight automatically compares your planned changes to your existing environment and anticipates any future violations that could occur if you made these changes without correcting the planned actions, OR violations that already exist in your environment.
Once you complete your list of action items, you can right-click and validate the actions as many times as you want until they are completed. OnCommand Insight validates every one of these actions. It shows you whether the actions are complete, done incorrectly, or not completed at all. It gives you a preview of potential issues before you make the changes, thus lowering your risk.
5.2 SWITCH MIGRATION TOOL
The migration tool provides you with instantaneous visibility into all of the environment and business entities that will be affected by a migration to new or updated switches. Say you want to just update the firmware on a switch. What if it goes down in the middle of the upgrade? What does that affect in your environment? Knowing this ahead of time can reduce your risk by giving you the complete picture of who and what will be affected by the interruption.
The migration tool lets you tell OnCommand Insight which switches you want to upgrade or replace. Because OnCommand Insight knows all the hosts, storage arrays, volumes, business units, and applications that are affected by this change, it can provide you with the current violations as well as the future violations that will occur when the switches are pulled out. This enables you to validate the total impact of the changes you want to make BEFORE you make them, so you can reduce your risk by fixing issues before they occur.
NOTE: Remember, OnCommand Insight is a READ-ONLY tool, so it does not perform any migration tasks. Use it in the planning and execution monitoring of your migration.
Under the Planning menu, select Migrations. This shows you the migration tasks already created and the existing impact of the proposed changes on your business entities.
To add a new task, right-click in the task area and select Add Task.
Complete the task details above and click Next to select the switch(es) to migrate.
Select the switches to be updated or replaced and click Finish.
Select the new task in the main screen and use the microviews to see the affected paths, impact, and quality assurance views.
Using this information, you can speed up switch migrations, because it cuts the due diligence time and lowers your risk: you know the impacts before you take any actions.
6 DATA WAREHOUSE
6.1 INTRODUCTION AND OVERVIEW
Let's introduce you to the data warehouse. We'll talk about the datamarts and navigation, then we'll go into the reports, and we'll finish by showing you how to create ad hoc reports using Query Studio.
The data warehouse is made up of several datamarts. Datamarts are sets of data that relate to each other.
Open a browser and go to http://localhost:8080/reporting.
Log on using admin/admin123.
If you receive this page, uncheck "Show this page…" and select My Home.
Data warehouse (DWH) home page: Public Folders.
The data warehouse has several built-in datamarts. Above you see the three primary datamarts, called the Chargeback Datamart, the Inventory Datamart, and the Storage Efficiency Datamart. Additionally, there are two folders which contain other datamarts for capacity and performance.
Select the Capacity 63 folder.
As you can see, there are other capacity-related datamarts, including the Internal Volume, Volume, Storage and Storage Pool, and VM Capacity datamarts. Datamarts provide you with easy-to-use data elements related to those specific categories, making it easier for you to use the existing reports and, more importantly, helping you create your own custom reports using the drag-and-drop approach we'll show later in this lab.
Select the Storage Capacity Datamart.
(Figure callouts: datamarts and folders; Public Folders.)
There are four folders located within EVERY datamart. Most built-in reports are in the Reports folder. Any custom reports you create MUST be saved in the Customer Reports or Customer Dashboards folders in order to preserve them during upgrades.
Select Dashboards. (Notice the BREADCRUMBS that help you navigate.)
Which dashboards are located in the folder?
6.2 PLAN - CAPACITY FORECAST DASHBOARD
The data warehouse has over 200 built-in dashboards and reports. Let's take a look at a few.
The Capacity Forecast dashboard provides a history of how storage has been used, as well as trends and forecasts into the future. It shows this by data center and by tier.
Select the Capacity Dashboard. This may take a bit of time to paint, so be patient.
The Capacity Forecast dashboard provides you with trending and forecasting of your capacity across your entire environment. NOTE: your data may vary from the picture depending on the demo database you are using and the date (because it's a trending chart).
While we are at it, let's also stage the Tiering Dashboard in a new window by holding the Shift key and selecting the Tiering Dashboard, so we can discuss it as well in a few minutes.
(Figure callouts: Folders; Dashboards; Reports; Navigation; Breadcrumbs.)
When it first opens, you see in the upper left the Capacity Consumption Forecast report by datacenter and tier. The initial view shows how much storage is left in each datacenter, by tier, before it reaches 80% of capacity (adjustable by the user). The graph on the right depicts the usage trending and forecasting over time. The "Reset Selection" button resets the graphic to show storage trending across the entire enterprise.
Select the Tokyo/Gold-Fast block on the matrix. Notice that the graph at the right changes to reflect the storage consumption trending and forecasting for that tier at that datacenter.
Reset the Storage Capacity Trend chart by clicking Reset Selection next to the matrix chart. The chart on the right will then show the trending and forecasting for the entire enterprise.
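Conceptually, the forecast behind this chart is a trend line fitted to historical usage and extrapolated until it crosses the capacity threshold (80% by default). The sketch below illustrates that idea with a simple least-squares fit and invented monthly samples; OnCommand Insight's actual forecasting model is not documented in this lab.

```python
def months_until_threshold(used_tb, capacity_tb, threshold=0.80):
    """Fit a least-squares line to monthly usage samples and extrapolate
    to when usage crosses threshold * capacity.

    Illustrative only; OnCommand Insight's real model may differ.
    """
    n = len(used_tb)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(used_tb) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, used_tb)) \
        / sum((x - mean_x) ** 2 for x in xs)
    intercept = mean_y - slope * mean_x
    if slope <= 0:
        return None  # flat or shrinking usage never reaches the threshold
    target = threshold * capacity_tb
    # Months past the most recent sample until the fitted line hits the target.
    months = (target - intercept) / slope - (n - 1)
    return max(months, 0.0)

# Hypothetical tier: 100 TB capacity, usage growing ~5 TB per month.
print(months_until_threshold([50, 55, 60, 65, 70], 100))  # -> 2.0
```

At 70 TB used and growing 5 TB/month, the 80 TB (80%) mark is two months out, which is exactly the kind of "time left" number the dashboard matrix summarizes per datacenter and tier.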
Scroll down the dashboard to view the list of reports on the right side. Each of the dashboards has a list of related reports on the lower right-hand side. You can select from any number of different reports to get the detailed information that you need.
The dashboard also contains some dial graphics showing you storage consumption and capacity in your enterprise and in each datacenter.
Continuing down the left side of the dashboard, these charts show you business-level storage consumption by business entity. Here we can drill down to see usage by tenant, line of business, business unit, and project.
Right-click in this graphic and you can drill down to view storage usage by line of business; drill again to see it by business unit and by project.
As you can see, you very quickly get detailed information on consumption by your business entities: tenant, LOB, business unit, project, and application.
6.3 TIER DASHBOARD
Let's take a look at the Tier Dashboard that we opened a few minutes ago by selecting it from the tabs at the bottom of your Windows screen.
Note: your data may vary depending on the database used for this demo.
This dashboard gives us a different perspective on how storage is growing and how it is being used. As you see, it looks like the Gold tier has remained relatively stable over the past few months, while Gold-Fast storage, which is more expensive, has grown considerably over the past couple of months. This tells you how your tiering initiatives are progressing. Bronze, which has hardly grown at all, could be an indication that we're spending too much money on storage. You might want to review your storage usage with OnCommand Insight to see how the storage is being consumed, and by whom.
Scroll down. Let's look a little closer. OnCommand Insight shows storage usage by business unit, application, and tier. This enables you to understand how storage is being used. You can also view it by data center, tier, and business entity.
As we did in the last report, you can right-click and drill down to look at consumption by tenant, line of business, business unit, project, and application. You can understand how your data is being consumed at multiple levels and from multiple aspects.
Select the "Return" icon at the top right of the Tier Dashboard to return to the folder.
There is a new Storage Tier report located in the Storage and Storage Pool Datamart. Let's take a quick look at it.
Use the breadcrumbs to navigate back to the Capacity 63 folder.
Then select the Storage and Storage Pool Capacity Datamart and its Reports folder.
Next, select the Storage Capacity By Tier report to view the report below. This report shows your capacity by tier and how it trends over time. It also provides detail and summary sections at the bottom showing each array, its tiers, how much capacity is used, and the percentage (lots of information on a single report).
6.4 ACCOUNTABILITY AND COST AWARENESS
The standard data warehouse chargeback reports are more about accountability than full chargeback. We'll show you this now. We'll also show you how to create your own powerful "custom" chargeback/showback reports using Business Insight Advanced later in this lab.
Select Public Folders in the breadcrumbs at the top left of the data warehouse window.
Select the Chargeback Datamart.
In the Chargeback Datamart, select the Reports folder to access various reports that show capacity and accountability.
Select Capacity Accountability by Business Entity and Service Level Detail. Here you have the option to customize this report to your needs by selecting service levels, resource types, applications, and host and storage names. You also have the option of selecting the business entity by using the dropdown to select any or all of the business entities and projects.
Select All in each category to get a good representation of the in-depth reporting, then click Finish.
The report provides a very detailed view of capacity utilization by business entity and application, including the host each application is running on, the storage array, the volume, and the actual provisioned and used storage. The report is grouped by business unit as well as application; this gives you a good representation of who's using what storage.
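Under the hood, a showback number like this is little more than provisioned capacity multiplied by a per-tier rate and rolled up by business entity. The sketch below is a hypothetical illustration of that arithmetic: the tier rates, entity names, and rows are all invented, and the real report is driven by the Chargeback Datamart, not by code like this.

```python
from collections import defaultdict

# Hypothetical per-tier monthly rates, in $ per GB.
RATES = {"Gold-Fast": 0.90, "Gold": 0.60, "Bronze": 0.20}

# Invented rows shaped like what a chargeback datamart might expose:
# (business_entity, application, tier, provisioned_gb)
rows = [
    ("Green Corp.Alternate Energy", "City Limits", "Gold", 500),
    ("Green Corp.Alternate Energy", "City Limits", "Gold-Fast", 200),
    ("Green Corp.Utilities", "Billing", "Bronze", 2000),
]

def showback(rows):
    """Roll provisioned capacity up to a monthly cost per business entity."""
    costs = defaultdict(float)
    for entity, _app, tier, gb in rows:
        costs[entity] += gb * RATES[tier]
    return dict(costs)

for entity, cost in sorted(showback(rows).items()):
    print(f"{entity}: ${cost:,.2f}")
```

Grouping the same rows by application or tier instead of entity gives the other cuts the report offers; only the group-by key changes.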
Note the scroll bar for scrolling on page 1; you can also use the Page Up/Page Down links at the bottom to go to page 2, and so on.
Select the Return icon in the upper right to return to the folder of reports.
6.5 UNCHARGED STORAGE
You can also generate reports that help you understand what storage is NOT being accounted for.
Select "Capacity Accountability by Uncharged Capacity per Internal Volume". This provides you with a complete listing, by array and volume, of how much storage is not being charged or accounted for.
You get FULL accountability of which storage is being accounted for and which storage is NOT, across the entire enterprise, regardless of storage vendor.
6.6 IOPS VS CAPACITY REPORTING IN THE DATA WAREHOUSE
Let's look at performance versus capacity, and at orphaned storage by last access. This adds another dimension to how your storage is being used.
Open the Performance Datamart. (Hint: use the breadcrumbs to select Public Folders, then select the Performance Datamart.)
Select the Internal Volume Daily Performance folder. This provides a really good pictorial view of how your storage is being used.
Select Reports, then select Allocated Used Internal Volume Count by IOPS Ranges. This provides a capacity-versus-IOPS report, which is very interesting.
Select the Last Year time period.
Select all storage models and tiers, and click Finish.
Selecting all arrays and all tiers gives you a full view of how your storage is being used (or not being used…).
Looking at the results, remember this is storage accessed over the past year. The resulting report shows you all the storage that has (or has not) been accessed over the past year.
As you see from the first bar, there are over 7,300 volumes that have not been accessed in the past year. If we look at it in terms of size, over 3.4 PB has had zero access in the past year. (Note: this is actual customer data, but the names have been sanitized.)
You can see how impactful this is. There is over 3.4 PB of storage that has had zero use for a year. This information enables you to start making business decisions about that storage and to better understand how it's being used, so you can reclaim and repurpose some of it. (Talk about ROI!)
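The report behind these numbers is essentially a histogram: each volume falls into an IOPS range bucket, and the report counts volumes and sums capacity per bucket, with the zero-IOPS bucket flagging reclamation candidates. Here is a small sketch of that grouping; the range boundaries and volume records are invented for illustration.

```python
from bisect import bisect_right

# Hypothetical IOPS range boundaries: 0, 1-99, 100-999, 1000+.
BOUNDARIES = [1, 100, 1000]
LABELS = ["0 IOPS", "1-99", "100-999", "1000+"]

def bucket_volumes(volumes):
    """Group (capacity_gb, max_iops) volume records into IOPS ranges,
    returning per-range volume counts and total capacity."""
    summary = {label: {"count": 0, "capacity_gb": 0} for label in LABELS}
    for capacity_gb, max_iops in volumes:
        # bisect_right picks the index of the first boundary above max_iops.
        label = LABELS[bisect_right(BOUNDARIES, max_iops)]
        summary[label]["count"] += 1
        summary[label]["capacity_gb"] += capacity_gb
    return summary

# Invented sample: two untouched volumes, one light, one busy.
vols = [(500, 0), (1200, 0), (300, 40), (800, 2500)]
report = bucket_volumes(vols)
print(report["0 IOPS"])  # the reclamation candidates
```

Everything that lands in the zero bucket is the "never accessed" storage the first bar of the report calls out.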
6.7 DIGGING INTO THE DETAILS
These charts are really nice, but you need the details to effectively work on identification and recovery. OK, let's go look at the underlying details.
Go back to the Volume Daily Performance 63 folder and drill down to Reports. (Hint: it's in the Performance Datamart.)
Select the Array Performance report. This gives you a complete breakdown of the performance for all storage, from the arrays all the way down to the volumes.
Select one year and set the IOPS parameter you want to filter on (I usually start with the default).
This report starts with the Orphan Summary.
Page down to view the storage array summary.
As you see, this is pretty high level. It shows the total amount of raw and allocated capacity in each storage device versus the total IOPS and the max IOPS actually used over the past year. This tells a very compelling story, but it's still high level.
Page down a few pages to reach the bottom of this section. You see a glossary of terms explaining the column headings.
Now continue to page down to the host tables. These show the hostname, the raw and allocated capacity by host, and the IOPS accessed over the past year. This is more detail than the storage tables above.
Page down past the host tables and you reach the orphaned-volumes perspective. Here is a great deal of detail you can use: these are all the volumes that have not been accessed in a full year. It shows you the array name, the volume capacities, and the hostname, as well as the applications and tiers, for everything that has not been accessed in the last year.
Page down to the Volume by IOPS tables (they may be several pages down). These show you the storage array, volume capacity, host, application, tier, and the max and total IOPS. So it's a pretty well-rounded report of actual usage (or lack thereof), letting you go reclaim the storage that is not used.
6.8 VM CAPACITY REPORTING
There are several different reports in the VM Capacity Datamart.
Navigate to the VM Capacity 6.3 Datamart.
As you see, we have several reports built in here already.
Select VM Capacity 6.3 and then navigate into the Reports folder.
Select VM Capacity Summary.
Select all so we see the VM capacity across the entire enterprise (spanning multiple vCenters).
The results show all the VMs, their capacity, the datastore, the actual capacity, the VM names, the provisioned storage, and the commit ratio of each VM across your entire environment. NOTE: I paged down to the bottom so you can see the total storage and commitment across your whole enterprise, plus a glossary of terms.
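The commit ratio in this report is essentially an over-commitment measure for thin-provisioned storage. A rough sketch of the idea follows; the formula and the sample numbers are assumptions for illustration, since Insight computes its own value from discovered data:

```python
def commit_ratio(provisioned_gb, datastore_capacity_gb):
    """Over-commitment of a datastore: total capacity provisioned to its
    VMs divided by the datastore's actual capacity. A value above 1.0
    means thin provisioning has promised more than the store holds.
    (Formula assumed for illustration only.)"""
    return sum(provisioned_gb) / datastore_capacity_gb

# Three VMs provisioned 400 GB each on a 1000 GB datastore
print(commit_ratio([400, 400, 400], 1000))  # 1.2
```

A ratio well above 1.0 is not necessarily a problem, but it tells you which datastores would run out of real capacity first if the VMs consumed what they were promised.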
Select the "return" button in the upper right corner of the report (it looks like a left-turn arrow).
Next, select the Inactive VMs report to show VMs that have not been accessed in a defined period of time (default 60 days).
Set this time threshold and click Finish.
This is an excellent report showing which VMs are powered off, how long they have been powered off, and how much capacity each one is holding that nobody else can use. It gives you all the details, including the datacenter, VM OS, ESX host, cluster, and VMDK, and how long each VM has been powered off. Armed with this information, you can go recover these VMs and reclaim the storage.
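The selection logic behind the Inactive VMs report can be sketched roughly like this. The field layout and the sample fleet are hypothetical, not Insight's schema; the point is simply "filter on days powered off versus the threshold":

```python
from datetime import date

def inactive_vms(vms, threshold_days=60, today=date(2012, 8, 1)):
    """Return (name, days_off, provisioned_gb) for VMs powered off longer
    than the threshold. `vms` maps name -> (power_off_date or None,
    provisioned_gb); a None date means the VM is still powered on."""
    return [
        (name, (today - off).days, gb)
        for name, (off, gb) in vms.items()
        if off is not None and (today - off).days > threshold_days
    ]

fleet = {
    "vm-dev-01": (date(2012, 1, 15), 120),  # long powered off, holding 120 GB
    "vm-web-02": (date(2012, 7, 20), 80),   # off only ~12 days, below threshold
    "vm-db-03":  (None, 500),               # still powered on
}
print(inactive_vms(fleet))  # [('vm-dev-01', 199, 120)]
```

Only the long-dormant VM survives the filter; the capacity column is what tells you how much storage a recovery would free.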
7 CREATE AD-HOC REPORT
Let's show you how easy it is to create custom reports in the data warehouse.
7.1 HOW TO CREATE A CUSTOM SHOWBACK/CHARGEBACK REPORT USING BUSINESS INSIGHT ADVANCED
Below is a great example of the custom chargeback (or showback) report that you will create. It shows usage by business entity and application, including the variable cost of each VM based on its configuration, fixed overhead, and storage usage.
STEPS TO CREATE THIS REPORT
Watch a video on how to create this report. Note: you need a user name and password for this community. To obtain them, click the Become a Member link.
The OnCommand Insight Reporting Portal is accessed through http://<reporting-server>:8080/reporting
Enter your user name and password credentials.
From the Welcome page, select My home.
From the Launch menu (at the top right corner of the OnCommand Insight Reporting portal), select Business Insight Advanced.
From the list of packages that appears, click the Capacity <version> folder and then click VM Capacity <version>.
Create a new report by selecting New from the dropdown in the upper left corner, or Create New if you are on the Business Insight Advanced landing page.
From the pre-defined report layouts in the New pop-up, choose List and click OK.
In the lower right pane, select the Source tab and expand the Advanced Data Mart from the VM Capacity package.
From the Advanced Data Mart, expand Business Entity Hierarchy and Business Entity, then drag Tenant onto the report work area.
Collapse the Advanced Data Mart and expand the Simple Data Mart.
From the Simple Data Mart, drag Application onto the report work area to the right of the Tenant column. (TIP: Make sure you place it on the blinking gray BAR to the right of the previous column, or you will get an error.)
Now we are going to drag multiple columns to the palette at once, to save time building the report.
We will be reporting on the number of processors (cores) and the memory configured for each VM, so let's grab the following elements from the VM Dimension under the Advanced Data Mart.
From the Advanced Data Mart, expand VM Dimension.
Select the next columns IN THE FOLLOWING ORDER:
From Advanced Data Mart > VM Dimension, hold the Control key and select the following columns (in order):
o VM Name
o Processors
o Memory
Click and drag VM Name and place it on the report work area to the right of the Application column. NOTE: All the columns should follow in the order you selected them, similar to the screenshot below (your data will differ, but the columns will be the same).
Now let's bring capacity information onto the report.
From the Simple Data Mart, hold the Control key and select the following columns (in order):
o Tier
o Tier Cost
o Provisioned Capacity (GB)
Click and drag the Tier column and place it on the report work area to the right of the Application column. NOTE: All the columns should follow in the order you selected them, similar to the screenshot below (your data will differ, but the columns will be the same).
To create a summary of cost per GB, hold the Control key and select the Tier Cost and Provisioned Capacity (GB) columns.
Then right-click the Provisioned Capacity column, select Calculate, and select the multiplication calculation.
Business Insight Advanced creates a new column for you, completes the calculation, and puts it in the report.
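The calculation that Business Insight Advanced just built is a simple per-row multiplication. In plain code it amounts to the following (the $5/GB tier rate is purely illustrative):

```python
def storage_cost(tier_cost_per_gb, provisioned_gb):
    """Storage Cost column: Tier Cost ($/GB) multiplied by
    Provisioned Capacity (GB) for one row of the report."""
    return tier_cost_per_gb * provisioned_gb

# A hypothetical $5/GB tier holding a 200 GB provisioned volume
print(f"${storage_cost(5.00, 200):,.2f}")  # $1,000.00
```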
Next, let's format and re-title the new column.
Right-click the new column header and select Show Properties.
In the lower right corner, scroll down to the bottom of the properties box and select the ellipsis on the Data Item Name box. Change the name to Storage Cost and click OK.
Note that the column heading is now Storage Cost.
Now select one of the numeric values in that column and select the Data Format ellipsis from the properties box in the lower right corner.
From the Data Format dialog box, select Currency from the Format Type dropdown.
As you see from the Properties dialog box, there are lots of options for formatting the currency numbers in this column. The default is USD, so let's just click OK to accept the default. You will see the column reformat to USD.
Here is our current report. Let's filter out storage that is NOT being charged.
Select any BLANK cell in the Tier Cost column and click the filter icon in the top toolbar.
Select Exclude Null.
Here is our current report. Notice that all the rows that had NO cost associated with their tiers are gone, leaving you with only the storage that has charges associated with it. (TIP: In another report you can reverse the logic and show only the storage that is NOT being charged.)
You can format the Tier Cost column as USD currency as well, if you want.
OK, that was easy, but it's not complete. Let's add other cost factors into your chargeback report: the cost of VM service levels by configuration, and the fixed overhead costs used by each application.
ADDING VARIABLE COSTS PER VM TO YOUR CHARGEBACK REPORT
Let's say the customer wants to charge per VM based on the number of CPUs and the amount of memory it's configured with. To do that, we first need to create a VM service level based on the number of CPUs and the memory configured for each VM, and then allocate a cost per service level.
To create a VM service level, we are going to drop in a small conditional expression that builds the service levels per VM. This is an easy example of the flexibility of Business Insight Advanced in creating reports. (DON'T panic: you can skip the conditional expression and just put a fixed cost on each VM if you want; see the overhead example later on. But humor me here in this lab.)
Select the Tier column to mark where we want to insert the new columns.
Select the Toolbox tab at the lower right corner and double-click the Query Calculation icon.
In the Create Calculation dialog box, name the column VM Service Level, select Other Expression, and click OK.
In the Data Item Expression dialog box, copy and paste the following VM Service Level conditional expression into the Expression Definition box and select OK. (Note: if you are remoted into the OnCommand Insight server, you may have to create a text document on the OnCommand server desktop to paste this into before pasting it into the Expression box.)
Below is an example of the conditional expression that gives you the if-else logic for VM Service Level:
IF ([Processors] = 2 AND [Memory] < 2049)
THEN ('Bronze')
ELSE (IF ([Processors] = 2 AND [Memory] < 4097)
THEN ('Bronze_Platinum')
ELSE IF ([Processors] = 4 AND [Memory] < 8193)
THEN ('Silver')
ELSE IF ([Processors] = 4 AND [Memory] > 8193)
THEN ('Silver_Platinum')
ELSE IF ([Processors] = 6 AND [Memory] > 8191)
THEN ('Gold')
ELSE IF ([Processors] = 8 AND [Memory] > 16383)
THEN ('Gold_Platinum')
ELSE ('tbd'))
Business Insight Advanced will validate the conditional expression (nice to know whether you got it right), create the column called VM Service Level, and populate it based on the query. (If you get an error, your conditional expression probably has a syntax or other error.)
You will see a new column added called VM Service Level, with the various service levels for each VM based on the number of CPUs and the amount of memory each has. (At this point there may be duplicates in the list, but not to worry; we are not finished formatting or grouping the report.)
Next, let's add a column that calculates the cost per VM based on the service levels we just established.
Select the Toolbox tab at the lower right corner and double-click the Query Calculation icon.
In the Create Calculation dialog box, name the column Cost Per VM, select Other Expression, and click OK.
In the Data Item Expression dialog box, paste the conditional expression for Cost per VM (below) into the Expression Definition box and select OK.
Example of the conditional expression for Cost per VM:
IF ([VM Service Level] = 'Bronze') THEN (10)
ELSE (IF ([VM Service Level] = 'Bronze_Platinum') THEN (15)
ELSE IF ([VM Service Level] = 'Silver') THEN (20)
ELSE IF ([VM Service Level] = 'Silver_Platinum') THEN (25)
ELSE IF ([VM Service Level] = 'Gold') THEN (40)
ELSE IF ([VM Service Level] = 'Gold_Platinum') THEN (55)
ELSE (30))
You will see a new column added called Cost Per VM, with a variable cost for each VM based on its service level.
Next, format the data in the Cost Per VM column as USD currency, as you did above.
ADDING FIXED OVERHEAD COSTS TO YOUR CHARGEBACK REPORT
Let's say the customer has determined that the total overhead cost (including items like heat/AC, floor space, power, rent, operations personnel, helpdesk, etc.) is $24 per VM. Let's create a column called Cost of Overhead and apply this fixed cost. (Note: you can do this for any fixed cost, rather than using SQL, as well.)
Select the Toolbox tab at the lower right corner and double-click the Query Calculation icon.
In the Create Calculation dialog box, name the column Cost of Overhead, select Other Expression, and click OK.
In the Data Item Expression dialog box, enter a cost of 24 in the Expression Definition box and select OK.
You will see a new column added called Cost of Overhead, with 24 for each VM. (Note: at this point there may be duplicates in the list, but not to worry; we are not finished formatting or grouping the report.)
Next, format the data in the Cost of Overhead column as USD currency, as you did above. Then drag the column header and drop it to the right of the Storage Cost column, as shown below.
Subtotaling naming and saving the report
Now that we have a cost per VM, the overhead, and the cost of storage usage by tenant, application, and VM, let's sum the total costs and finish formatting the report by tenant and application.
Hold the Control key down and select a numeric cell in each of the Cost Per VM, Storage Cost, and Cost of Overhead columns. Right-click one of the numeric cells, select Calculate, and choose the add function for the three columns.
This creates a new column called "Cost Per VM + Storage Cost + Cost of Overhead" and calculates it for each row.
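The add function just built does a per-row sum of the three cost columns. Sketched in code, with illustrative values rather than real rates:

```python
def total_cost_of_services(rows):
    """Per-row sum of the three cost columns, mirroring the Calculate
    'add' action. Each row: (cost_per_vm, storage_cost, cost_of_overhead).
    The sample values below are illustrative, not real rates."""
    return [vm + storage + overhead for vm, storage, overhead in rows]

# e.g. a Silver VM with $1,000 of storage, and a Gold VM with $250 of storage
rows = [(20.0, 1000.0, 24.0), (40.0, 250.0, 24.0)]
print(total_cost_of_services(rows))  # [1044.0, 314.0]
```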
Now format the column as USD currency and retitle it "Total Cost of Services".
Name the report "Total Storage, VM and Overhead Cost by Tenant and Application Chargeback (Showback)" by double-clicking the title area.
Now save it to the Customer Report folder using the same name
FORMATTING AND GROUPING THE REPORT BY APPLICATION AND TENANT
We are not done yet. Now we need to format the report by grouping, subtotaling, and totaling by tenant and application.
Hold the Control key down and select the Cost Per VM, Provisioned Capacity, Storage Cost, Cost of Overhead, and Total Cost of Services columns.
Select Total from the Summary dropdown icon.
If you page down to the bottom of the report, you will see the totals. We'll clean up the summary rows in a minute.
Let's group the report by tenant and application.
Hold the Control key down and select the Tenant and Application columns.
Select the Grouping icon from the top toolbar.
CLEANING UP THE REPORT AND RUNNING IT
To clean up the report, right-click and delete the summary ROWS (not columns).
Then go to the bottom of the report, hold the Control key, select both summary rows, right-click, and delete them. (Leave the TOTAL rows.)
Save the report
Now let's run the report to see how it looks.
Select the Run icon from the toolbar and run the report as HTML. (Note the other formats you can run it in, if you want.)
The report will look like this in its final format. I've paged down in the report below to show you the subtotals; you can page to the bottom to see the totals by company and the total of all resources charged.
These reports are extremely flexible. Notice the drill-down link in the Tenant column (pictured above in the red circle). If you click the link, you drill down from Tenant to Line of Business, then to Business Unit, and so on. If you right-click the link, you can drill up as well.
You can now schedule this report to run and be distributed in various formats, like any other OnCommand Insight Data Warehouse report.
Remember, now that you have created this report, every run provides the latest usage information. You can automate it by scheduling it to run and email itself to recipients. Lots of flexibility.
7.2 OTHER OPTIONS FOR AD-HOC REPORTS USING QUERY STUDIO
You can also create simple ad-hoc reports using Query Studio. A very simple example is shown here.
Log onto the data warehouse using Admin/admin123 (you must be logged on as Admin to use Query Studio).
From Public Folders, select the Chargeback Datamart.
Select the Launch menu in the upper right corner of the view and select Query Studio.
The Datamart is split into a "simple Datamart" and an "advanced Datamart". The simple DM contains the elements most users need for reports; the advanced DM contains all the facts and dimensions for all the elements. For now, we'll create this report using the simple DM, to show you how easy it is.
Expand the Simple DM and do the following:
Click and drag Business Unit to the palette.
Click and drag the Application element to the palette. You see the applications line up with their proper business units.
Click and drag Tier over to the palette to organize the storage usage by tier.
Click and drag "Provisioned Raw by GB". (You can select megabytes or terabytes as well as gigabytes; I've selected GB because this is from a volume and application perspective.)
To calculate cost, we need to add the "Tier Cost" element to the report.
Click and drag the "Tier Cost" element over and place it between the Provisioned Raw and Tier columns.
To filter out any storage without an associated tier cost, right-click the heading of the Tier Cost column and select Filter (see below for reference):
o Select "Show only the following" (default)
o Select "Missing values" to expand it
o Select "Leave out missing values"
o Select OK
See the results below.
Now let's calculate the total cost of usage by GB per application.
Hold the Control key and highlight the "Provisioned Capacity" and Tier Cost columns until they show yellow.
Select the green Calculation icon at the top of the edit icons, or right-click the columns and select "Calculate".
In the calculation window, select multiplication, title the new column "Cost for Storage", and click Insert. It creates the new column and completes the calculation.
To format the column, right-click the new column and select Format Data.
Select currency, the number of decimal places (usually 0), and the 1000s separator, then click OK. See how the column is formatted now.
Double-click the "Title" on the report and re-title it "Chargeback by Application and BU".
Now, you don't really need the Tier Cost column, so you can delete it by right-clicking the column and selecting Delete.
This is a good raw report, but now let's make it more useful.
To group storage cost by business unit and application:
Select the Business Unit column (it turns yellow) and select the Group By icon on the top line.
You see the report reformat itself into cost by application by business unit.
Click the "Save As" icon and save the report to the public folders.
Further editing
You can go back and further edit the report. For example:
Let's filter out all the N/A entries in the BU and Application columns. You have to do this one column at a time.
Right-click the BU column and select Filter.
In the filter dialog window, select "Do not show the following (NOT)" from the "Condition" dropdown.
Select N/A and click OK.
Do the same for the Application column.
Then save the report again.
As you see, you now have a better-quality report.
To exit Query Studio, click the "Return" icon at the top right corner of the screen.
8 SCHEDULING REPORTS FOR DISTRIBUTION
OK, now that the report is saved, let's schedule it for running and distribution. You can schedule any of the built-in reports in OnCommand Insight the same way.
Go to the chargeback report we just created (you should be looking at wherever you saved it).
Select the schedule icon on the right-hand side, where you can set the properties.
As you see on the right, you can schedule the start and finish dates.
You can send this report just one time by clicking Disable.
Set the schedule options for weekly, daily, monthly, etc. Schedule this report to run and send it to yourself at 3 p.m. every Tuesday until Feb 1, 2012. As you can see, you can schedule biweekly, several times a week, or several times a day, or set it up by month, by year, and even by trigger. Lots of options.
There are a lot of options for the report format. The default is HTML, but we can override that by choosing from PDF, Excel, XML, CSV, etc.
For delivery, we can email the report, save it, or print it to a specific printer. You can send the report via e-mail to users, distribution lists, etc., and we can include a link to the report or attach it directly to the email. NOTE: Recipients must be able to log into the OnCommand DWH to access the link.
When you are done, click OK, and the schedule is set.
9 ENDING COMMENTS AND FEEDBACK
I hope this lab was of value to you. Your feedback is important to the quality of this lab document. Please provide feedback to Dave Collins at davec@netapp.com.
NetApp provides no representations or warranties regarding the accuracy, reliability, or serviceability of any information or recommendations provided in this publication, or with respect to any results that may be obtained by the use of the information or observance of any recommendations provided herein. The information in this document is distributed AS IS, and the use of this information or the implementation of any recommendations or techniques herein is a customer's responsibility and depends on the customer's ability to evaluate and integrate them into the customer's operational environment. This document and the information contained herein may be used solely in connection with the NetApp products discussed in this document.
© 2012 NetApp, Inc. All rights reserved. No portions of this document may be reproduced without prior written consent of NetApp, Inc. Specifications are subject to change without notice. NetApp, the NetApp logo, Go further, faster, xxx, and xxx are trademarks or registered trademarks of NetApp, Inc. in the United States and/or other countries. All other brands or products are trademarks or registered trademarks of their respective holders and should be treated as such. TR-XXX-XX
to the FA port on the array, and all the way to the volume. To fix the violation, the administrator needs to reverse those changes, and the violation automatically resolves itself.
Close the Analyze Violation screen.
VIRTUAL MACHINES AND DATA STORE
OnCommand Insight gives you a complete inventory of the VM environment, discovered through a data source configured to talk with Virtual Center. Details include all internal configurations and elements of the VMs and ESX hosts, the technologies in use, and all the information needed to correlate the path from a VM to its storage, plus details about how that storage is configured down to the disk, including performance (if you have the OnCommand Insight Perform license installed). We'll discuss performance later in this demo. Select Datastore from the Inventory menu.
Scroll down and select DS-30 in the main view.
Display the Topology and Virtual Machine micro views to show which VMs are using datastore DS-30.
Toggle through the micro view icons below to show the details of the VM and VMDK capacities, the storage, the backend storage in the arrays, and the resources. You can see full end-to-end visibility from the host, through a virtualized storage environment, to the backend storage.
Note: You can also select Hosts and Virtual Machines from the Inventory menu and cycle through the micro views, noting the end-to-end visibility.
Don't forget to use the micro view icons.
3 ASSURANCE
3.1 APPLYING POLICIES TO MONITOR CONFIGURATION AND PERFORMANCE
Now that we've gathered the inventory and pulled all this information into the database, let's start to apply policies so we can monitor the environment and get alerts on violations. We'll talk about setting global policies, changes, and the new violations browser. We'll show how we can analyze performance, and talk about port balance violations and disk utilization violations. Initially, we set global policies within OnCommand Insight so it can monitor the environment and alert us when something falls outside those policies. There are several policies available.
Select Policy from the top menu bar.
Select Global Policies.
What thresholds can you set from here?
Select Violation Severity from the left menu of the Global Policy window.
What severities can you set for each threshold?
Select Violation Notification. What are the possible violation notification options?
3.2 FIBRE CHANNEL POLICY SETTINGS
We set Fibre Channel policies to define and keep track of path redundancy. We can set options such as no SPF (single point of failure) or redundant. We can set the minimum number of ports on the host and on the storage, and the maximum number of switch hops. We can set exceptions for different volume types that would not necessarily require redundancy, like BCVs, R1s, and R2s. We can also set policy exceptions for smaller volumes that wouldn't have redundancy, like EMC gatekeepers.
From the Policy menu on the menu bar, select Fibre Channel Policy.
What type of redundancy can you set from here?
What is the default number of ports? What are the volume type exceptions?
What volume exemptions can you select?
You can also set redundancy policies on physical storage behind a virtualizer (Backend Path).
3.3 VIOLATIONS BROWSER
Let's take a look at the violations browser. The Violations Browser allows us to see the impact of violations on your business elements in one place, and helps us manage the violations of all of those global policies you saw above.
From the Assurance menu on the left, select Violations Browser. (Note: at this point you might want to increase your viewing real estate by closing the navigation pane on the left. To close it, go to Tools > OnCommand Insight Settings and uncheck the Navigation Pane box. You can use the same process to turn it back on later.)
Back in the Violations Browser, expand the All Violations explorer to reveal the violation categories. This shows violations like datastore latency, disk utilization, volume and internal volume IOPS and response times, port balance violations, etc. These should look familiar from the global policies we just reviewed a few minutes ago.
As you can see, all the violations pile up here: over 12,000 of them. (NOTE: Don't let this scare you. Most violations are caused by events that create multiple violations per event; fix one event and a bunch of these go away.) We can see detail on each of these violations by doing the following:
Select the Show Violations Impact icon to view all the violations in context by business entity, application, host, virtual machine, datacenter, etc.
Expand the Impacted Business Entity explorer and drill down to Earth Thermal Tracking.
Expand and select Disk Utilization.
Sort the Description column descending. Now you can see the element, description, severity, and violation type.
Select the top violation element, called Disk DISK-14 of Storage Virtualizer.
In the Impact Details micro view, toggle the Host, Virtual Machines, Applications and Business Entities, and Storage icons to view the details of the impact of the violations. Here we see the impact on one application called City Limits, owned by one business entity called Green Corp > Alternate Energy > Geothermal > Earth Thermal Tracking, on one host. However, 10 virtual machines are affected by this violation from one array.
The chart in the Violation Event micro view shows the history and trending of the utilization on this one disk over time. From here we can analyze the performance details, as we'll see later in this demo. REVIEW:
What are the categories that show the impact of violations?
Which business entity is impacted by these violations?
What is the utilization of this disk?
Which hosts are being affected by this violation?
Which VMs are being affected by the violation?
We'll do some troubleshooting using these violations and Analyze Performance later.
3.4 PORT BALANCE VIOLATIONS
Let's take a look at port balance violations. These violations show an imbalance in SAN traffic across hosts, arrays, and switches. They are not performance-related violations.
Using either the Navigation Pane or the dropdown menu, open Assurance.
Select Port Balance Violations.
Group by Type, then Device.
Expand Hosts and sort the Device column ascending.
Select the device Host nj_exch002. Note that this host has a balance index of 81, which represents the difference in the distribution of traffic (the load) between the HBAs on this host. Any index over 50 indicates significantly unbalanced ports on a device.
Select the Switch Port Performance micro view. Note that over 88% of the traffic distribution is going across one HBA, and only 11% across the other. A failure of the heavily used HBA could choke that application. This could indicate that the port balancing software on this server is misconfigured, not configured at all, or not installed.
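This guide doesn't spell out how the balance index is computed, but one plausible reading is the spread between the busiest and least-busy port's share of a device's total traffic; a sketch under that assumption (the product's exact formula may differ):

```python
def balance_index(port_traffic):
    """Spread between the busiest and least-busy port's share of a
    device's total traffic, in percentage points. This is one plausible
    reading of the balance index, assumed for illustration; values over
    50 would flag a significantly unbalanced device."""
    total = sum(port_traffic)
    shares = [100.0 * t / total for t in port_traffic]
    return max(shares) - min(shares)

# Two HBAs carrying 88% and 12% of the device's traffic
print(balance_index([88, 12]))  # 76.0
```

A perfectly balanced pair of ports scores 0; the more lopsided the load, the closer the index gets to 100.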
Now collapse Hosts and expand the Storage devices.
Select various storage devices and view the traffic distribution in the Switch Port Performance micro view to understand the balance across the storage ports.
These port balance violations provide valuable data on how your environment is configured and optimized. They let you quickly determine where you need to optimize your configurations based on actual usage. These are balance violations within each device, not necessarily traffic-related performance violations. We'll look at performance a bit later.
3.5 DISK UTILIZATION VIOLATIONS
We looked at disk utilization violations in the Violations Browser a few minutes ago because we were alerted to a violation. But even if you didn't see the alert, you can go directly to Disk Utilization Violations here and troubleshoot your issues in much the same way as we did in the Violations Browser. The difference is that the Violations Browser breaks down the violations by how they impact your business entities, applications, datacenters, etc., so you can troubleshoot by business priorities, while Disk Utilization Violations lets you easily see your most critical utilization issues and troubleshoot from the disk utilization violation back to the hosts. You can also add columns to show applications and business entities if you want.
Let's take a look at how you can use Disk Utilization Violations to quickly identify and drill down to where your issue is.
Select Disk Utilization Violations from the Assurance menu.
Sort the Utilization column descending to bring your heaviest utilization to the top.
Here you see the utilization of each disk that exceeded the disk utilization threshold we set earlier in our global policy. For each violation you see the disk, the array, the hosts that access the disk, the date and time the violation occurred, the percentage of utilization, and the IOPS and throughput.
Select the disk with the highest utilization.
Now select the Volume Usage of Disk micro view to get details on volume usage and performance.
Sort the Disk IOPS column descending and select the top volume usage of the disk. Here I see the volume with the highest usage, along with the disk throughput and percentage info by volume and host.
Select the Switch Port Performance microview. I see that my load appears to be balanced (Distribution column) across the storage ports, so that's most likely not a SAN or network configuration issue.
Since this disk did cause a utilization violation, I can identify host candidates that may be causing the high utilization on the disk, OR I can see that the disk may have too many volumes carved from it, in which case I may need to spread that load across more disks.
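The thresholding and ranking logic behind this view can be sketched in a few lines. This is a hypothetical illustration, not the OCI API; the field names, values, and threshold below are invented:

```python
# Invented data: rank disks that breached a utilization threshold,
# the way the Disk Utilization Violations view sorts them.
THRESHOLD = 75.0  # percent, as set in the global policy

disks = [
    {"disk": "d1", "utilization": 91.2, "iops": 1400},
    {"disk": "d2", "utilization": 62.0, "iops": 300},
    {"disk": "d3", "utilization": 83.5, "iops": 900},
]

violations = sorted(
    (d for d in disks if d["utilization"] > THRESHOLD),
    key=lambda d: d["utilization"],
    reverse=True,  # heaviest utilization first, as in the sorted column
)
print([d["disk"] for d in violations])  # ['d1', 'd3']
```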
4 PERFORMANCE
OnCommand Insight provides performance information end to end. This is different from the violations we discussed above in that it provides pure performance information for Volumes, Internal Volumes (FlexVols), Storage Pools (Aggregates), and Disks. Performance also covers VMs, Switches, ESX, Hyper-V, VMDKs, and Datastores. From here we can troubleshoot congestion, contention, and bottlenecks; identify heavily used storage pools, volumes, disks, and SAN ports; find candidates for physical-to-virtual host virtualization; and optimize your storage and tiering.
From the Navigation Pane or the dropdown menu expand the Performance menu
Here we see that OnCommand Insight collects and shows you storage performance, switch performance, datastore performance, VM performance, and even application performance as it relates to storage performance.
4.1 STORAGE PERFORMANCE
From the Performance menu select Storage Performance
Sort Top Volume IOPS column descending (if not already done)
Select the array called Sym-0000500743… in the main view (it should be near the top)
Use Scroll Bars to see more performance info in all windows
Use the horizontal slide bar in the main view to see the volume response times and IOPS, as well as the disk utilization and IOPS columns (far right). Notice there is no Internal Volume performance information, because the EMC Symmetrix does not contain any. We'll look at a NetApp array shortly to see Internal Volume (FlexVol) performance.
Now, in the main view, notice the column called Volume Partial R/W. This indicates there are volumes on that array that are misaligned (we'll see more detail later).
Select the microviews at the bottom to show details of disk performance and volume performance. Which microviews did you open? (Hint: view below.) Notice they provide detailed throughput, IOPS, and response times at the volume and disk level.
Close the Disk Performance microview
Select the column customize icon in the header of the Volume Performance
microview
Use the vertical scroll bar to view all the columns that can be added or removed from
this report
Select the Partial R/W and Storage columns and click OK. This adds columns to the Volume Performance report showing you each volume on each array that is misaligned. (Note: you can get a complete list of all your misaligned volumes by selecting all the arrays in the main view above.) Additionally, you can group the volumes by storage to make it easier to view all the misaligned volumes across your entire enterprise by array. (See figure below.)
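The Partial R/W idea can be illustrated with a small sketch: flag a volume as possibly misaligned when partial reads/writes make up a noticeable share of its total I/O, which is what a non-zero Partial R/W column suggests. Field names, data, and the ratio are invented for illustration:

```python
# Hypothetical sketch, not the OCI data model: flag volumes whose
# partial reads/writes are a noticeable fraction of total I/O.
def misaligned(volumes, min_ratio=0.05):
    flagged = []
    for v in volumes:
        total = v["reads"] + v["writes"]
        if total and v["partial_rw"] / total >= min_ratio:
            flagged.append(v["name"])
    return flagged

vols = [
    {"name": "vol_a", "reads": 800, "writes": 200, "partial_rw": 120},
    {"name": "vol_b", "reads": 500, "writes": 500, "partial_rw": 0},
]
print(misaligned(vols))  # ['vol_a']
```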
Now select the Symmetrix-FAST array and toggle the Chart microview on. Here you see OnCommand Insight showing EMC FAST auto-tiering. You can also see NetApp Hybrid Aggregates by selecting the corresponding array. We can also chart this performance over time.
(Figure callouts: Partial Read/Write indicates volume misalignment; notice the FAST-T volume performance.)
OnCommand Insight provides complete end-to-end performance views through virtualized storage. Let's take a look.
Select the storage array called Virtualizer from the main view. (Note: this is a V-Series machine, but OCI provides the same visibility through other virtualizers as well.)
Toggle on the Virtual Machine Performance microview
Toggle the Backend Volume Performance and Datastore Performance microview on
Use the slide bars at the bottom of the microviews to see more of the performance columns in each view. (Whoops… there is a red mark in the Latency column. We will analyze this later.)
I know this is a bit busy, but I wanted to show you that you can have deep performance visibility from the VM, through the datastore, to the frontend virtualizer array, and through to the backend volumes. You can also drill performance down to the disks on that backend array, and you can select the Switch Port Performance microview to visualize the performance on the SAN. So you can see very deep performance information from end to end. We will "analyze" these performance views end to end a bit later.
4.2 SAN PERFORMANCE (SWITCH PORT PERFORMANCE)
Switch performance is the actual performance on the SAN, measured at the switch.
Select Switch Port Performance from the Performance menu. OnCommand Insight knows whether the switches are connected to arrays or hosts, so it shows you the performance in the context of the host or array rather than from the switch's perspective.
Using the dropdown at the top of the table, group the main view by "Connected Device Type then Name".
Using the dropdown next to it, set the timeframe to "Last Week" and hit the refresh icon to the right.
Sort the Distribution column Descending (arrow pointing down)
Expand Hosts
Expand hosts ny_ora1 and exchange_ny1
If you look at the Value and Distribution columns, you can see how HBAs are balanced on these hosts. On host ny_ora1 you see three HBAs that are balanced very well. But looking at host exchange_ny1, you see that one of your two HBAs has over 95% of the traffic load on it while the other has less than 5%, so you can see an imbalance of the load across your HBAs. Perhaps the multipath software is not configured correctly, doesn't work, or is installed but not turned on. However, also note that one HBA is 4Gb and the other is 2Gb; the admins may have purposely configured this host's traffic to compensate for the slower HBA…
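The Value and Distribution columns boil down to each HBA's share of the host's total traffic. A minimal sketch, with made-up numbers mirroring exchange_ny1 (the HBA names and the imbalance rule of thumb are invented):

```python
# Invented data: compute each HBA's percentage share of host traffic,
# like the Distribution column, and flag a gross imbalance.
def distribution(traffic_by_hba):
    total = sum(traffic_by_hba.values())
    return {hba: 100.0 * t / total for hba, t in traffic_by_hba.items()}

shares = distribution({"hba0 (4Gb)": 9500, "hba1 (2Gb)": 400})
imbalanced = max(shares.values()) - min(shares.values()) > 80  # crude rule of thumb
print(shares, imbalanced)
```

A spread this wide is worth investigating, but as the text notes, it may be deliberate when the HBAs run at different speeds.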
Select the Port Performance Distribution and Port Performance microviews to view
this analysis over time
Select exchange_ny1 from the main view above. View the performance and distribution of both HBAs. If you select one or the other, the performance and distribution charts change to show you the details of what you've selected. This behavior is the same throughout the OnCommand Insight GUI.
4.3 CANDIDATES FOR HOST VIRTUALIZATION BASED ON ACTUAL PERFORMANCE
This view of performance from the host SAN perspective shows you which are the busiest servers and which are candidates for virtualization.
Toggle off the two performance charts.
Collapse the expanded groups using the Collapse All Groups icon at the top of the main view.
Expand Hosts again. Notice your busiest servers are at the top of the list.
Use the vertical slide bar to go to the bottom of the host list to see your least busy hosts. As you see here, there are many hosts near the bottom that have hardly any traffic. Note: if you have a virtualization project going on, you can very quickly isolate which physical hosts don't have much traffic to their applications and conduct your due diligence on those applications for possible relocation to a VM environment.
You can also use the same information here to choose which ESX hosts are good candidates to receive those applications, based on how much traffic they are generating on the SAN.
4.4 STORAGE ARRAY PERFORMANCE BASED ON SAN TRAFFIC
We use the same logic and methods to optimize the traffic across the storage ports of the arrays.
Collapse the Hosts section and expand the Storage section
You can see the busiest arrays at the top
Expand storage array XP 1024 to see the traffic flow through the storage ports. In this case over 80% of the traffic is going across two of the six ports on the storage array. Not very well balanced. You can rebalance this traffic, OR, using this information, you can select a lesser-used storage port to provision your NEXT Tier 1 application to. This helps you intelligently provision and optimize your environment using real traffic analysis.
4.5 STORAGE TIERING AND ANALYSIS
Just as you did with the hosts, you can see the storage arrays that are NOT so busy.
Scroll to the bottom of the Storage Array list
There are several expensive Tier 1 Symmetrix and other arrays at the bottom of this list that have very little traffic accessing them. These arrays may have lots of data on them, but nobody's using it. Armed with this information, you could take a look at the application data on these expensive Tier 1 arrays and move the applications to less expensive Tier 2 or Tier 3 arrays, OR archive the data. Then you can decommission or repurpose these expensive arrays. (LOTS of ROI potential here.)
4.6 SWITCH ISL TRAFFIC VISIBILITY AND OPTIMIZATION
OnCommand Insight shows only the ISLs (Inter-Switch Links) under the Switches category.
Collapse the Storage category in the main view and expand the Switch category
Expand Switch 78 and hcis300
As we saw with hosts and arrays, we can see exactly how well balanced the traffic is across the ISLs. Switch hcis300 is well balanced, but on Switch 78 we see that 90% of the traffic is going across one switch link and only 9% across the other. If this is a trunk, it is severely out of balance.
We also see which are the busiest and least busy switches. This allows us to balance out (optimize) our environment, as well as weed out the least busy switches.
4.7 VIRTUAL MACHINE AND DATA STORE PERFORMANCE: TROUBLESHOOTING END-TO-END PERFORMANCE ISSUES USING "ANALYZE PERFORMANCE"
Let's put all this performance information to good use.
USE CASE: I may have gotten a call from a user complaining that the application on VM-70 is running slow, or I may have received an alert from a threshold being breached. Let's troubleshoot the problem.
Select Virtual Machine Performance
Select Custom from the "Timeframe" dropdown menu next to the grouping menu. Enter the dates January 1, 2012 through now.
Then hit the green refresh button next to the dropdown.
Sort the VM Disk Top Latency column descending to get the longest latency at the top
Here we see that VM-70 does not, in fact, appear to have any performance issues, but we do see very high CPU, memory, and datastore latency on VM-60 and VM-61.
Look at column 2: the common factor between VM-70 (the user complaint) and VM-60 is DS-30.
Open the Datastore Performance microview to validate the high latency time
Right-click on VM-60 and select Analyze Performance.
This opens an analysis of everything associated with VM-60 and DS-30.
See the tabs across the top of the window Each of these tabs provides in-depth visibility into performance within each category
Selecting the Disk tab, I see that although there are a few high "top" utilization values, overall utilization and IOPS are relatively low, so I can rule out a hot-disk issue.
Select the Volumes and Internal Volumes tabs. I see there are some relatively high top response times but still very low IOPS, which tells me other factors are affecting response time and the slowness of the application on VM-70.
Select Backend Volumes. We see the storage is virtualized, and we can see the performance of the backend volumes here. I see some possibly higher IOPS but still no glaring performance issues.
To make sure I don't have a SAN problem, I select the Switch Performance tab. It shows an imbalance between the 2Gb HBAs on ESX1 (where VM-60 and VM-70 live) and a potential optimization or outage issue, but no gridlock.
Select the Hosts tab. This tab shows me that host ESX1 is the same host that holds VM-60 and VM-70. VM-60 appears to be causing very high CPU and memory usage, which is causing contention with time sharing during disk access, thus creating high disk latency. But the disk IOPS are still very low.
We can deduce that VM-60 is probably not sized right for the application that is driving it so hard. This is probably what's causing the disk latency issue, so the chances of an actual disk problem are slim.
4.8 VM PERFORMANCE
VM Performance helps you troubleshoot the same scenarios. Here you can understand what's going on in the whole environment.
Select Virtual Machine Performance
Sort the Top Disk Latency column in descending order so the largest latency rises to the top. In this case VM-61 is chewing up a lot of memory and a lot of CPU time but generating low disk IOPS. The VM itself appears to be causing the latency issues.
Select VM-61. You can open a microview and see the VMDK performance as well.
Add chart microview
You can also break it out by volume performance and datastore performance, giving you a more holistic picture of the environment and helping you troubleshoot to resolution.
The takeaway is you can troubleshoot performance issues from many different angles and go in many different directions to quickly narrow down the problem
4.9 APPLICATION AND HOST PERFORMANCE
You can add your applications and hosts to any of these performance views to help you understand how performance is affecting your applications. That is what matters to the business customer. You can drill down and understand where the performance issue is, with visibility from the application all the way to the disks.
Scroll down to ESX1
Use the horizontal slide bars in the main and microviews to see performance info
OnCommand Insight shows you performance from the host perspective all the way back to the storage, but remember it does not have agents on the hosts, so it cannot show you the details of performance on the host itself.
Review questions:
What is the value of Analyze Performance?
In which areas can we view performance metrics under Analyze Performance?
5 PLANNING TOOLS
5.1 TASK AND ACTION PLANNING AND VALIDATION
OnCommand Insight has two planning tools to help you plan, validate, and monitor changes in your environment. One is a change-management tool and the other is a migration tool for switches only.
The change-management tool (or "what-if" tool) helps you create tasks, and actions within those tasks, using a wizard. It helps you logically lay out the changes you need to make, test and validate those changes before you make them, and monitor the progress of changes as you make them. This significantly reduces your risk, because you can pretest changes before you make any actual changes in your environment.
NOTE: Remember, OnCommand Insight is a READ-ONLY tool, so it does not perform any active tasks. Use it in the planning, validation, and execution monitoring of your change management.
Select Planning Menu
Select Plans to access the tool
Select the task ID oadmin 01082007 – Replace HBA Clearcase1
Notice the Actions list for the task. These are created by you to help you logically and accurately list out the steps.
To add more actions, simply right-click in the action area and select "Add Action".
In the new action window, scroll down and select the action you want to perform. You can add a description and other parameters, then select OK.
Then you can pre-validate the actions to ensure you know the results of each action BEFORE you actually perform the task. To do this, right-click the task and select Validate Task.
As you see below, OnCommand Insight validates each action against the current configuration in your environment to show what has been completed correctly (GREEN CHECKMARK), what is not completed (BLANK BOX), and what has been completed incorrectly (RED X).
When you build the action list, OnCommand Insight automatically compares your planned changes to your existing environment and anticipates any future violations that could occur if you made these changes without correcting the planned actions, as well as violations that already exist in your environment.
Once you complete your list of action items, you can right-click and validate the actions as many times as you want until they are complete. OnCommand Insight validates every one of these actions and shows you whether each is complete, done incorrectly, or not completed at all. It gives you a preview of potential issues before you make the changes, thus lowering your risk.
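Conceptually, pre-validation compares each planned action's expected end state to the current (read-only) configuration. A hypothetical sketch of that three-state comparison (not OCI's implementation; item names and values are invented):

```python
# Hypothetical sketch: mark each planned action as done, not done,
# or done incorrectly, against the observed configuration.
def validate(actions, current):
    results = {}
    for a in actions:
        actual = current.get(a["item"])
        if actual is None:
            results[a["name"]] = "not completed"          # blank box
        elif actual == a["expected"]:
            results[a["name"]] = "completed"              # green checkmark
        else:
            results[a["name"]] = "completed incorrectly"  # red X
    return results

actions = [
    {"name": "replace HBA", "item": "clearcase1.hba0", "expected": "4Gb"},
    {"name": "rezone host", "item": "zone.clearcase1", "expected": "zoneA"},
]
current = {"clearcase1.hba0": "2Gb"}  # HBA not yet swapped, zone not created
print(validate(actions, current))
```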
5.2 SWITCH MIGRATION TOOL
The migration tool provides you with instantaneous visibility into all of the environment and business entities that will be affected by a migration to new or updated switches. Say you want to just update the firmware on a switch. What if it goes down in the middle of the upgrade? What does that affect in your environment? Knowing this ahead of time reduces your risk by giving you the complete picture of who and what will be affected by the interruption.
The migration tool lets you tell OnCommand Insight which switches you want to upgrade or replace. Because OnCommand Insight knows all the hosts, storage arrays, volumes, business units, and applications that are affected by this change, it can show you the current violations as well as the future violations that will occur when the switches are pulled out. This enables you to validate the total impact of the changes you want to make BEFORE you make them, so you can reduce your risk by fixing issues before they occur.
NOTE: Remember, OnCommand Insight is a READ-ONLY tool, so it does not perform any migration tasks. Use it in the planning and execution monitoring of your migration.
Under the Planning menu, select Migrations. This shows you the migration tasks already created and the existing impact of proposed changes on your business entities.
To add a new task, right-click in the task area and select Add Task.
Complete the task details above and click Next to select the switch(es) to migrate.
Select the switches to be updated or replaced and click Finish.
Select the new task in the main screen and use the microviews to see the affected paths, impact, and quality assurance views.
Using this information you can speed up switch migrations, because it cuts the due-diligence time and lowers your risk: you know the impacts before you take any actions.
6 DATA WAREHOUSE
6.1 INTRODUCTION AND OVERVIEW
Let's introduce you to the data warehouse. We'll talk about the datamarts and navigation, then go into the reports, and finish by showing you how to create ad-hoc reports using Query Studio.
The data warehouse is made up of several datamarts. Datamarts are sets of data that relate to each other.
Open a browser and go to http://localhost:8080/reporting
Log on using admin / admin123
If you receive the welcome page, uncheck "Show this page…" and select My Home.
Data warehouse (DWH) Home Page: Public Folders
The data warehouse has several built-in datamarts. Above you see the three primary ones: the Chargeback Datamart, the Inventory Datamart, and the Storage Efficiency Datamart. Additionally, there are two folders which contain further datamarts for Capacity and Performance.
Select the Capacity 63 folder
As you can see, there are other capacity-related datamarts here, including the Internal Volume, Volume, Storage and Storage Pool, and VM Capacity datamarts. Datamarts provide you with easy-to-use data elements related to those specific categories, making it easier for you to use the existing reports but, more importantly, helping you create your own custom reports using the drag-and-drop technology we'll show later in this lab.
Select the Storage Capacity Datamart
There are four folders located within EVERY datamart. Most built-in reports are in the Reports folder. Any custom reports you create MUST be saved in the Customer Reports or Customer Dashboards folders so that they survive upgrades.
Select Dashboards. (Notice the BREADCRUMBS to help you navigate.)
Which dashboards are located in the folder?
6.2 PLAN - CAPACITY FORECAST DASHBOARD
The data warehouse has over 200 built-in dashboards and reports. Let's take a look at a few.
The Capacity Forecast Dashboard provides a history of how storage has been used, as well as trends and forecasts into the future, by data center and by tier.
Select the Capacity Forecast Dashboard. This may take a bit of time to paint, so be patient.
The Capacity Forecast Dashboard provides you with trending and forecasting of your capacity across your entire environment. NOTE: your data may vary from the picture depending on the demo DB you are using and the date (because it's a trending chart).
While we are at it, let's also stage the Tier Dashboard in a new window by holding the Shift key and selecting the Tier Dashboard, so we can discuss it as well in a few minutes.
When it first opens, you see in the upper left the Capacity Consumption Forecast report by data center and tier. The initial view shows how much storage is left in each data center, by tier, before it reaches 80% (adjustable by the user) of capacity. The graph on the right depicts the usage trending and forecasting over time. The "Reset Selection" button resets the graphic to show storage trending across the entire enterprise.
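The forecast is essentially a trend line projected forward to the capacity threshold. A sketch of that idea, assuming a simple least-squares fit over daily samples (the data and the fitting choice are invented; OCI's actual forecasting method is not documented here):

```python
# Sketch: fit a line to used-capacity history and project when usage
# crosses a percentage of total capacity. Data is invented.
def days_until_threshold(history_tb, total_tb, threshold=0.80):
    n = len(history_tb)
    xs = range(n)
    mean_x, mean_y = sum(xs) / n, sum(history_tb) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, history_tb)) / sum(
        (x - mean_x) ** 2 for x in xs
    )
    if slope <= 0:
        return None  # flat or shrinking usage: threshold never reached
    return (threshold * total_tb - history_tb[-1]) / slope  # days past last sample

# Growing 1 TB/day against 100 TB total: 70 TB now, 80 TB in ~10 days
print(days_until_threshold([60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70], 100))
```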
Select the Tokyo Gold-Fast block on the matrix. Notice the graph at the right changes to reflect the storage consumption trending and forecasting for that tier at that data center.
Reset the Storage Capacity Trend chart by clicking Reset Selection next to the matrix chart. The chart on the right again shows the trending and forecasting for the entire enterprise.
Scroll down the dashboard to view the list of reports on the right side. Each of the dashboards has a list of related reports on the lower right-hand side. You can select from any number of different reports to get the detailed information that you need.
The dashboard also contains some dial graphics showing you storage consumption and capacity in your enterprise and each datacenter
Continuing down the left side of the dashboard, these charts show you business-level storage consumption by business entities. Here we can drill down to see usage by tenant, line of business, business unit, and project.
Right-click in this graphic and you can drill down to view storage usage by line of business, drill again to business unit, and again by project.
As you can see, you get really detailed information on consumption by your business entities, from tenant, LOB, business unit, and project down to application, very quickly.
6.3 TIER DASHBOARD
Let's take a look at the Tier Dashboard that we opened a few minutes ago by selecting it from the tabs at the bottom of your Windows screen.
Note Your data may vary depending on the database used for this demo
This dashboard gives us a different perspective on how storage is growing and how it is being used. As you see, it looks like the Gold tier has remained relatively stable over the past few months, while Gold-Fast storage, which is more expensive, has grown considerably over the past couple of months. This tells you how your tiering initiatives are progressing. Bronze, which has hardly grown at all, could be an indication that we're spending too much money on storage. You might want to review your storage usage using OnCommand Insight to see how the storage is being consumed, and by whom.
Scroll down. Let's look a little closer. OnCommand Insight shows storage usage by business units, applications, and tier. This helps you understand how storage is being used. You can also view it by data center, tier, and business entity.
As we did in the last report, you can right-click to drill down and look at consumption by tenant, line of business, business unit, project, and application, so you can understand how your data is being consumed at multiple levels and from multiple aspects.
Select the "Return" icon at the top right of the Tier Dashboard to return to the folder.
There is a new Storage Tier report located in the Storage and Storage Pool Datamart. Let's take a quick look at it.
Use the breadcrumbs to navigate back to the Capacity 63 folder.
Then select the Storage and Storage Pool Capacity Datamart and its Reports folder.
Next, select the Storage Capacity By Tier report to view the report below. This report shows your capacity by tier and how it trends over time. It also provides detail and summary sections at the bottom showing each array, its tiers, how much capacity is used, and the percentage (lots of information in a single report).
6.4 ACCOUNTABILITY AND COST AWARENESS
The standard data warehouse chargeback reports are more about accountability than actual chargeback. We'll show you this now. We'll also show you how to create your own powerful "custom" chargeback/showback reports using Business Insight Advanced later in this lab.
Select Public Folders in the breadcrumbs at the top left of the data warehouse window.
Select the Chargeback Datamart.
In the Chargeback Datamart, select the Reports folder to access various reports that show capacity and accountability.
Select Capacity Accountability by Business Entity and Service Level Detail. Here you have the option to customize this report to your needs by selecting service levels, resource types, applications, and host and storage names. You also have the option of selecting the business entity, using the dropdown to select any or all of the business entities and projects.
Select all in each category to get a good representation of the in-depth reporting. Then click Finish.
The report provides a very detailed view of capacity utilization by business entity and application, including the host it's running on, the storage array, the volume, and the actual provisioned and used storage. The report is grouped by business unit as well as application, which gives you a good picture of who's using what storage.
Note the scroll bar for scrolling on page 1; you can also use the Page Up/Page Down links at the bottom to go to page 2, etc.
Select the Return Icon in the upper right to return to the folder of reports
6.5 UNCHARGED STORAGE
You can also generate reports that help you understand what storage is NOT being accounted for.
Select "Capacity Accountability by Uncharged Capacity per Internal Volume". This provides you with a complete listing, by array and volume, of how much storage is not being charged or accounted for.
You get FULL accountability of which storage is being accounted for, and which is NOT, across the entire enterprise, regardless of storage vendor.
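The uncharged-capacity idea reduces to summing capacity that has no business entity attached. A sketch with an illustrative, invented schema (not the actual datamart fields):

```python
# Invented data: uncharged capacity is whatever isn't attributed
# to a business entity.
volumes = [
    {"array": "symm1", "capacity_gb": 500, "business_entity": "Finance"},
    {"array": "symm1", "capacity_gb": 300, "business_entity": None},
    {"array": "fas1",  "capacity_gb": 200, "business_entity": None},
]
uncharged = sum(v["capacity_gb"] for v in volumes if v["business_entity"] is None)
print(uncharged)  # 500
```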
6.6 IOPS VS CAPACITY REPORTING IN THE DATA WAREHOUSE
Let's look at performance versus capacity, and at orphaned storage by last access. This adds another dimension to how your storage is being used.
Open the Performance Datamart (hint use breadcrumbs to select Public Folders and
then select the Performance Datamart)
Select the Internal Volume Daily Performance folder. This provides a really good pictorial view of how your storage is being used.
Select Reports, then select Allocated used internal volume Count by IOPS Ranges. This produces a capacity-versus-IOPS report, which is very interesting.
Select the Last Year time period.
Select all storage models and tiers and click Finish. Selecting all arrays and all tiers gives you a full view of how your storage is being used (or not being used…).
Looking at the results, remember this is storage accessed over the past year. The resulting report shows you all the storage that has (or has not) been accessed over the past year.
As you see from the first bar, there are over 7,300 volumes that have not been accessed in the past year. In terms of size, over 3.4 PB has had zero access in the past year. (Note: this is actual customer data, but the names have been sanitized.)
You can see how impactful this is: over 3.4 PB of storage has had zero use for a year. This information enables you to make business decisions about that storage and better understand how it's being used, so you can reclaim and repurpose some of it. (Talk about ROI!)
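The report's bucketing can be sketched as grouping volumes by IOPS range and totaling count and capacity per bucket. The ranges and data below are invented for illustration:

```python
# Invented data: bucket volumes by yearly IOPS and total capacity
# per bucket, like the "count by IOPS ranges" report.
def bucket(volumes, edges=(0, 1, 100, 1000)):
    out = {e: {"count": 0, "gb": 0} for e in edges}
    for v in volumes:
        # place in the highest bucket whose lower edge the IOPS reaches
        edge = max(e for e in edges if v["iops"] >= e)
        out[edge]["count"] += 1
        out[edge]["gb"] += v["capacity_gb"]
    return out

vols = [{"iops": 0, "capacity_gb": 800},   # untouched for a year
        {"iops": 0, "capacity_gb": 1200},
        {"iops": 450, "capacity_gb": 300}]
print(bucket(vols))  # zero-IOPS bucket holds 2 volumes, 2000 GB
```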
6.7 DIGGING INTO THE DETAILS
These charts are really nice, but you need the details to effectively work on identification and recovery. OK, let's go look at the underlying details.
Go back to the Volume Daily Performance 63 folder and drill down to Reports. (Hint: it's in the Performance Datamart.)
Select the Array Performance report. This gives you a complete breakdown of the performance for all storage, from the arrays all the way down to the volumes.
Select one year and set the IOPS parameter you want to filter on (I usually start with the default).
This report starts with the Orphan Summary.
Page down to view the storage array summary.
As you see, this is pretty high-level. It shows the total amount of raw and allocated capacity in each storage device versus the total IOPS and the max IOPS actually used over the past year. This tells a very compelling story, but it's still high level.
Page down a few pages to reach the bottom of this section. You see a glossary of terms explaining the column headings.
Now continue to page down to the Host tables. These show you the hostname, the raw and allocated capacity by host, and the IOPS accessed over the past year. This is more detail than the Storage tables above.
Page down past the host tables and you reach the orphaned-volumes perspective. Here is a great deal of detail that you can use. These are all the volumes that have not been accessed in a full year. It shows you the array name, volume capacities, and hostname, as well as the applications and tiers, for everything that has not been accessed in the last year.
Page down to the "Volume by IOPS" tables (they may be several pages down). These show you the storage array, volume capacity, host, application, tier, and the max and total IOPS. So it's a pretty well-rounded report that shows you actual usage (or lack thereof), so you can go reclaim the storage that is not used.
6.8 VM CAPACITY REPORTING
There are several different reports in the VM Capacity Datamart.
Navigate to the VM Capacity 63 Datamart
As you see we have several reports built-in here already
Select VM Capacity 63 and then navigate into the Reports folder
Select VM Capacity Summary
Select all so we see the VM capacity across the entire enterprise (spanning multiple vCenters).
The results show all the VMs: their capacity, the datastore, the VM names, the provisioned storage, and the commit ratio of each VM across your entire environment. NOTE: I paged down to the bottom so you can see the total storage and commitment across your whole enterprise, plus a glossary of terms.
Select the "Return" button in the upper right corner of the report (it looks like a left-turn arrow).
Next, select the Inactive VMs report to show the VMs that have not been accessed in a defined period of time (default 60 days).
Set this time threshold and click Finish
This is an excellent report showing you which VMs are powered off and how long they have been powered off, as well as how much capacity each one is holding that nobody else can use. It gives you all the details, including the data center, VM, OS, ESX host, cluster, VMDK, and how long it's been powered off. Armed with this information, you can go recover these VMs and reclaim the storage.
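The Inactive VMs logic can be sketched as filtering on the powered-off date and totaling the capacity those VMs still hold. Data, VM names, and field names are invented:

```python
# Invented data: list VMs powered off longer than a threshold and
# total the capacity they still hold, like the Inactive VMs report.
from datetime import date, timedelta

def inactive(vms, today, days=60):
    cutoff = today - timedelta(days=days)
    hits = [v for v in vms
            if v["powered_off_since"] and v["powered_off_since"] <= cutoff]
    return [v["name"] for v in hits], sum(v["capacity_gb"] for v in hits)

vms = [
    {"name": "vm-old", "powered_off_since": date(2012, 1, 1), "capacity_gb": 120},
    {"name": "vm-live", "powered_off_since": None, "capacity_gb": 80},
]
print(inactive(vms, today=date(2012, 8, 1)))  # (['vm-old'], 120)
```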
7 CREATE AD-HOC REPORT
Let's show you how easy it is to create custom reports in the data warehouse.
7.1 HOW TO CREATE A CUSTOM SHOWBACK/CHARGEBACK REPORT USING BUSINESS INSIGHT ADVANCED
Below is a great example of the custom chargeback or showback report that you will create. It shows usage by business entity and application, including the variable cost of each VM based on configuration, fixed overhead, and storage usage.
STEPS TO CREATE THIS REPORT
Watch a video on how to create this report. Note: You need a user name and password for this community; to obtain them, click the Become a Member link.
The OnCommand Insight Reporting Portal is accessed through http://<reporting-server>:8080/reporting.
Enter your user name and password credentials.
From the Welcome page, select My home.
From the Launch menu (at the top right corner of the OnCommand Insight Reporting portal), select Business Insight Advanced.
From the list of all packages that appears, click the Capacity <version> folder, and then click VM Capacity <version>.
Create a new report by selecting New from the dropdown in the upper left corner, or Create New if you are on the Business Insight Advanced landing page.
From the predefined report layouts in the New pop-up, choose List and click OK.
In the lower right pane, select the Source tab and expand Advanced Data Mart from the VM Capacity package.
From the Advanced Data Mart, expand Business Entity Hierarchy and Business Entity, and drag Tenant onto the report work area.
Collapse Advanced Data Mart and expand Simple Data Mart.
From Simple Data Mart, drag Application onto the report work area, to the right of the Tenant column. (TIP: Make sure you place it on the blinking gray BAR to the right of the previous column, or it will give you an error.)
Now we are going to drag multiple columns to the palette to save time building the report.
We will be reporting on the total number of processors (cores) and the memory that is configured for each VM. So let's grab the following elements from the VM Dimension under the Advanced Data Mart.
From Advanced Data Mart, expand VM Dimension.
Select the next columns IN THE FOLLOWING ORDER:
From Advanced Data Mart > VM Dimension, hold the Control key and select the following columns (in order):
o VM Name
o Processors
o Memory
Click and drag VM Name onto the report work area, to the right of the Application column. NOTE: All the columns should follow in the order you selected them, similar to the screenshot below (your data will differ, but the columns will be the same).
Now let's bring capacity information onto the report.
From Simple Data Mart, hold the Control key and select the following columns (in order):
o Tier
o Tier Cost
o Provisioned Capacity (GB)
Click and drag the Tier column onto the report work area, to the right of the Application column. NOTE: All the columns should follow in the order you selected them, similar to the screenshot below (your data will differ, but the columns will be the same).
To create a summary of cost per GB, hold the Control key and select Tier Cost and Provisioned Capacity (GB).
Then right click the Provisioned Capacity column, select Calculate, and select the multiplication calculation.
Business Insight Advanced has created a new column for you, completed the calculations, and put it in the report.
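The calculated column is simply Tier Cost multiplied by Provisioned Capacity (GB), row by row. As a quick sanity check of the arithmetic (the sample rows and values below are made up for illustration):

```python
# Storage Cost = Tier Cost (per GB) x Provisioned Capacity (GB), per row.
rows = [
    {"vm": "vm01", "tier_cost": 2.50, "provisioned_gb": 100},  # hypothetical values
    {"vm": "vm02", "tier_cost": 1.00, "provisioned_gb": 250},
]
for row in rows:
    row["storage_cost"] = row["tier_cost"] * row["provisioned_gb"]

print([(r["vm"], r["storage_cost"]) for r in rows])  # [('vm01', 250.0), ('vm02', 250.0)]
```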
Next, let's format and re-title the column name.
Right click on the new column header and select Show Properties.
In the lower right corner, scroll down to the bottom of the properties box and select the ellipsis on the Data Item Name box. Change the name to Storage Cost and click OK.
Note that the column heading is now Storage Cost.
Now select one of the numeric values in that column and select the Data Format ellipsis from the properties box in the lower right corner.
From the Data Format dialog box, select Currency from the Format Type dropdown.
As you see from the Properties dialog box, there are lots of options you can set to format the currency numbers in this column. The default is USD, so let's just click OK to accept the default. You will see the column reformat to USD.
Here is our current report. Let's filter out storage that is NOT being charged.
Select any BLANK cell in the Tier Cost column and click the filter icon in the top toolbar.
Select Exclude Null.
Here is our current report. Notice all the cells that had NO cost associated with those tiers are deleted, leaving you with only the storage that has charges associated with it. (TIP: In another report, you can reverse the logic and show only the storage that is NOT being charged.)
You can also format the Tier Cost column as USD currency if you want.
OK, that was easy, but not complete. Let's add other cost factors into your chargeback report: the cost of VM service levels by configuration, and the fixed overhead costs used by each application.
ADDING VARIABLE COSTS PER VM TO YOUR CHARGEBACK REPORT
Let's say the customer wants to charge per VM based on the number of CPUs and the amount of memory it's configured with. To do that, we first need to create a VM Service Level comprised of the number of CPUs and the memory configured for each VM, and then allocate a cost per service level.
To create a VM Service Level, we are going to drop in a small conditional expression to build the service levels per VM. This is an easy example of the flexibility of Business Insight Advanced in creating reports. (DON'T panic: you can skip the conditional expression and just put a fixed cost on each VM if you want; see the overhead example later on. But humor me here in this lab.)
Select the Tier column to mark where we want to insert the new columns.
Select the Toolbox tab at the lower right corner and double click the Query Calculation icon.
In the Create Calculation dialog box, name the column VM Service Level, select Other Expression, and click OK.
In the Data Item Expression dialog box, copy and paste the following VM Service Level conditional expression into the Expression Definition box and select OK. (Note: If you are remoted into the OnCommand Insight server, you may have to create a text document on the OnCommand server desktop to paste this into prior to pasting it into the Expression box.)
Below is an example of the conditional expression that gives you the if-else condition for VM Service Level:
IF([Processors] = 2 AND [Memory] < 2049)
THEN ('Bronze')
ELSE (IF([Processors] = 2 AND [Memory] < 4097)
THEN ('Bronze_Platinum')
ELSE IF([Processors] = 4 AND [Memory] < 8193)
THEN ('Silver')
ELSE IF([Processors] = 4 AND [Memory] > 8193)
THEN ('Silver_Platinum')
ELSE IF([Processors] = 6 AND [Memory] > 8191)
THEN ('Gold')
ELSE IF([Processors] = 8 AND [Memory] > 16383)
THEN ('Gold_Platinum')
ELSE ('tbd'))
Business Insight Advanced will validate the conditional expression (nice to know if you got it right), create the column called VM Service Level, and populate it based on the query. (If you get an error, your conditional expression probably has a syntax or other error.)
You will see a new column added called VM Service Level, with the various service levels for each VM based on the number of CPUs and the memory each has. (At this point there may be duplicates in the list, but not to worry; we are not finished formatting or grouping the report.)
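If it helps to read the branching logic outside of Cognos syntax, here is a minimal Python sketch of the same service-level classification; the function name is mine, the thresholds mirror the expression above (memory in MB), and values falling in the gaps between branches come out as "tbd" just as the ELSE clause does:

```python
# Hypothetical Python rendering of the VM Service Level conditional expression.
def vm_service_level(processors, memory_mb):
    if processors == 2 and memory_mb < 2049:
        return "Bronze"
    elif processors == 2 and memory_mb < 4097:
        return "Bronze_Platinum"
    elif processors == 4 and memory_mb < 8193:
        return "Silver"
    elif processors == 4 and memory_mb > 8193:
        return "Silver_Platinum"
    elif processors == 6 and memory_mb > 8191:
        return "Gold"
    elif processors == 8 and memory_mb > 16383:
        return "Gold_Platinum"
    return "tbd"  # the expression's catch-all ELSE branch

print(vm_service_level(2, 1024))   # Bronze
print(vm_service_level(4, 16384))  # Silver_Platinum
```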
Next, let's add a column that calculates the cost per VM based on the service levels we just established.
Select the Toolbox tab at the lower right corner and double click the Query Calculation icon.
In the Create Calculation dialog box, name the column Cost Per VM, select Other Expression, and click OK.
In the Data Item Expression dialog box, cut and paste the conditional expression for Cost per VM (below) into the Expression Definition box and select OK.
Example of the conditional expression for Cost per VM:
IF([VM Service Level] = 'Bronze') THEN (10) ELSE (IF([VM Service Level] = 'Bronze_Platinum') THEN (15) ELSE IF([VM Service Level] = 'Silver') THEN (20) ELSE IF([VM Service Level] = 'Silver_Platinum') THEN (25) ELSE IF([VM Service Level] = 'Gold') THEN (40) ELSE IF([VM Service Level] = 'Gold_Platinum') THEN (55) ELSE (30))
You will see a new column added called Cost Per VM, with variable costs for each VM based on its service level.
Next, format the data in the Cost Per VM column as USD currency, as you did above.
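The cost expression is really just a lookup table with a default. A hypothetical dictionary version of the same mapping (same per-level costs, with 30 as the catch-all ELSE value):

```python
# Hypothetical lookup-table version of the Cost per VM conditional expression.
VM_COST_BY_LEVEL = {
    "Bronze": 10,
    "Bronze_Platinum": 15,
    "Silver": 20,
    "Silver_Platinum": 25,
    "Gold": 40,
    "Gold_Platinum": 55,
}

def cost_per_vm(service_level):
    # 30 is the default cost for any unrecognized level (the ELSE branch).
    return VM_COST_BY_LEVEL.get(service_level, 30)

print(cost_per_vm("Silver"))  # 20
print(cost_per_vm("tbd"))     # 30
```

Seen this way, changing the customer's pricing later means editing one table rather than rewriting the nested IF/ELSE chain.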
ADDING FIXED OVERHEAD COSTS TO YOUR CHARGEBACK REPORT
Let's say the customer has determined that the total cost for overhead (including items like heat/AC, floor space, power, rent, operations personnel, helpdesk, and so on) is $24 per VM. Let's create a column called Cost of Overhead and apply this fixed cost. (Note: You can do this for any fixed cost, rather than use SQL, as well.)
Select the Toolbox tab at the lower right corner and double click the Query Calculation icon, as above.
In the Create Calculation dialog box, name the column Cost of Overhead, select Other Expression, and click OK.
In the Data Item Expression dialog box, enter a cost of 24 in the Expression Definition box and select OK.
You will see a new column added called Cost of Overhead, with 24 for each VM. (Note: At this point there may be duplicates in the list, but not to worry; we are not finished formatting or grouping the report.)
Next, format the data in the Cost of Overhead column as USD currency, as you did above. Then drag the column header and drop it to the right of the Storage Cost column, as shown below.
Subtotaling, naming, and saving the report
Now that we have a cost per VM, overhead, and the cost of storage usage by tenant, application, and VM, let's sum the total costs and finish formatting the report by tenant and application.
Hold the Control key down and select a numeric cell in each of the Cost Per VM, Storage Cost, and Cost of Overhead columns. Right click one of the numeric cells, select Calculate, and choose the add function for the three columns.
This will create a new column called "Cost Per VM + Storage Cost + Cost of Overhead" and calculate each row.
Now format the column as USD currency and retitle the column "Total Cost of Services."
Name the report "Total Storage, VM, and Overhead Cost by Tenant and Application Chargeback (Showback)" by double clicking the title area.
Now save it to the Customer Report folder using the same name.
FORMATTING AND GROUPING THE REPORT BY APPLICATION AND TENANT
We are not done yet. Now we need to format the report by grouping, subtotaling, and totaling by tenant and application.
Hold the Control key down and select the Cost Per VM, Provisioned Capacity, Storage Cost, Cost of Overhead, and Total Cost of Services columns.
Select the Total icon from the Summary dropdown icon.
If you page down to the bottom of the report, you will see total columns. We'll clean up the summary rows in a minute.
Let's group the report by Tenant and Application:
Hold the Control key down and select the Tenant and Application columns.
Select the Grouping icon from the top toolbar.
CLEANING UP THE REPORT AND RUNNING IT
To clean up the report, right click and delete the summary ROWS (not columns).
Then go to the bottom of the report, hold the Control key, select both summary rows, and right click and delete them. (Leave the TOTAL rows.)
Save the report.
Now let's run the report to see how it looks.
Select the Run icon from the toolbar and run the report as HTML. (Note the other formats you can run it in if you want.)
The report will show like this in its final format. I've paged down in the report below to show you the subtotals, and you can page to the bottom and see the totals by company and the total of all resources charged.
These reports are extremely flexible to do what you need. Notice the drill-down link in the Tenant column (pictured above in the red circle). If you click the LINK, you will drill down from Tenant to Line of Business, then to Business Unit, and so on. If you right click on the link, you can drill up as well.
You can now schedule this report to run and be distributed in various formats, like any other OnCommand Insight Data Warehouse report.
Remember, now that you have created this report, every time you run it, it will provide the latest usage information. You can automate this report by scheduling it to run and be emailed to recipients. Lots of flexibility.
7.2 OTHER OPTIONS FOR AD-HOC REPORTS USING QUERY STUDIO
You can also create simple ad-hoc reports by using Query Studio. A very simple example is shown here.
Log onto the data warehouse using Admin/admin123 (you must be logged on as Admin to use Query Studio).
From Public Folders, select the Chargeback Datamart.
Select the Launch menu in the upper right corner of the view, and select Query Studio.
The Datamart is split into a "simple Datamart" and an "advanced Datamart." The simple DM contains the elements that most users use for reports; the advanced DM contains all the facts and dimensions for all the elements. At this point we'll create this report using the simple DM to show you how easy it is.
Expand the Simple DM and do the following:
Click and drag Business Unit to the palette.
Click and drag the Application element to the palette. You see the applications line up with their proper business units.
Click and drag Tier over to the palette to organize the storage usage by tier.
Click and drag "Provisioned Raw by GB." (You can select megabytes or terabytes as well as gigabytes; I've selected GB because this is from a volume and application perspective.)
To calculate cost, we need to add the "Tier Cost" to the report.
Click and drag the "Tier Cost" element over, and place it between the Provisioned Raw and Tier columns.
To filter out any storage without a tier cost associated, right click the heading of the Tier Cost column and select Filter (see below for reference):
o Select "Show only the following" (the default).
o Select "Missing values" to expand it.
o Select "Leave out missing values."
o Select OK.
See the results below.
Now let's calculate the total cost of usage by GB per application.
Hold the Control key and highlight the "Provisioned Capacity" and Tier Cost columns until they show yellow.
Select the green Calculation icon at the top of the edit icons above, or right click on the columns and select "Calculate."
In the calculation window, select multiplication, title the new column "Cost for Storage," and click Insert. It creates a new column and completes the calculation.
To format the column, right click on the new column and select Format Data.
Select currency, the number of decimal places (usually 0), and the 1000s separator, and click OK. See how the column is formatted now.
Double click the title on the report and re-title it "Chargeback by Application and BU."
Now you don't really need the Tier Cost column, so you can delete it by right clicking on the column and selecting Delete.
This is a good raw report, but now let's make it more useful.
To group storage cost by business unit and application:
Select the Business Unit column (it turns yellow) and select the Group By icon on the top line.
You see the report reformat itself into cost by application by business unit.
Click the "Save As" icon and save the report to the public folders.
Further editing
You can go back and further edit the report like this:
Let's filter out all the N/A entries in the BU and Application columns. You have to do this one column at a time.
Right click the BU column and select Filter.
In the filter dialog window, select "Do not show the following (NOT)" from the "Condition" dropdown.
Select N/A and click OK.
Do the same for the Application column.
Then save the report again.
As you see, you now have a better-quality report.
To exit Query Studio, click the "Return" icon at the top right corner of the screen.
8 SCHEDULING REPORTS FOR DISTRIBUTION
OK, now that the report is saved, let's schedule it for running and distribution. You can schedule all the built-in reports in OnCommand Insight.
Go to the chargeback report we just created (you should be looking at where you saved it).
Select the schedule icon on the right-hand side, where you can set the properties.
As you see on the right, you can schedule the start and finish dates.
You can send this report just one time by clicking Disable.
Set the schedule options for weekly, daily, monthly, and so on. Schedule this report to run and send it to yourself at 3 p.m. every Tuesday until Feb 1, 2012. As you can see, you can schedule it biweekly, several times a week, or several times a day, or you can set it up by month, by year, and even by trigger. As you see, lots of options.
There are a lot of options for the report format. The default format is HTML, but we can override that default by clicking and choosing from PDF, Excel, XML, CSV, and so on.
For delivery, we can email it, save it, or print the report to a specific printer. You can send the report via email to users, distribution lists, and so on. We can include a link to the report
or attach it directly to the email as well. NOTE: Recipients must be able to log into the OnCommand DWH to access the link.
When you are done, click OK, and the schedule is set.
9 ENDING COMMENTS AND FEEDBACK
I hope this lab was of value to you. Your feedback is important to the quality of this lab document. Please provide feedback to Dave Collins at davec@netapp.com.
NetApp provides no representations or warranties regarding the accuracy, reliability, or serviceability of any information or recommendations provided in this publication, or with respect to any results that may be obtained by the use of the information or observance of any recommendations provided herein. The information in this document is distributed AS IS, and the use of this information or the implementation of any recommendations or techniques herein is a customer's responsibility and depends on the customer's ability to evaluate and integrate them into the customer's operational environment. This document and the information contained herein may be used solely in connection with the NetApp products discussed in this document.
© 2012 NetApp, Inc. All rights reserved. No portions of this document may be reproduced without prior written consent of NetApp, Inc. Specifications are subject to change without notice. NetApp, the NetApp logo, Go further, faster, xxx, and xxx are trademarks or registered trademarks of NetApp, Inc. in the United States and/or other countries. All other brands or products are trademarks or registered trademarks of their respective holders and should be treated as such. TR-XXX-XX
3 ASSURANCE
3.1 APPLYING POLICIES TO MONITOR CONFIGURATION AND PERFORMANCE
Now that we've gathered the inventory and pulled all this information into the database, let's start to apply policies so we can monitor and get alerts on violations. We'll talk about setting global policies and changes, and about the new Violations Browser. We'll show how we can analyze performance, and talk about some port balance violations and disk utilization violations. Initially, we set global policies within OnCommand Insight so it can monitor the environment and alert us when something falls outside those policies. There are several policies available.
Select Policy from the top menu bar.
Select Global Policies.
What thresholds can you set from here?
Select Violation Severity from the left menu of the global policy window.
What severities can you set for each threshold?
Select Violation Notification. What are the possible violation notification options?
3.2 FIBRE CHANNEL POLICY SETTINGS
We set Fibre Channel policies to determine and keep track of path redundancy. We can set options such as no SPOF (single point of failure) or redundant. We can set the minimum number of ports on the host and the storage, and the maximum number of switch hops. We can set exceptions for different volume types that would not necessarily require redundancy, like BCVs, R1, and R2. We can also set policy exceptions for smaller volumes that wouldn't have redundancy, like EMC gatekeepers.
From the Policy menu on the menu bar, select Fibre Channel Policy.
What type of redundancy can you set from here?
What is the default number of ports?
Volume Type Exceptions: what volume exemptions can you select?
You can set redundancy policies on physical storage behind a virtualizer (Backend Path).
3.3 VIOLATIONS BROWSER
Let's take a look at the Violations Browser. The Violations Browser allows us to see the impact of the violations on your business elements in one place, and helps us manage the violations against all of those global policies you saw above.
From the Assurance menu on the left, select Violations Browser. (Note: At this point you might want to increase your viewing real estate by closing the navigation pane on the left. To close the navigation pane, go to Tools > OnCommand Insight Settings and uncheck the Navigation Pane box. You can use the same process to turn it back on later.)
Back in the Violations Browser, expand the All Violations explorer to reveal the violation categories. This shows violations like datastore latency, disk utilization, volume and internal volume IOPS and response times, port balance violations, and so on. These should look familiar to you from the global policies that we just reviewed a few minutes ago.
As you can see, you can look at all the violations piled up here: over 12,000 violations. (NOTE: Don't let this scare you. Usually most violations are caused by events that create multiple violations per event; you fix one event, and a bunch of these go away.) We can see detail on each of these violations by performing the following:
Select the Show Violations Impact icon to view all the violations in context by business entity, application, host, virtual machine, datacenter, and so on.
Expand the Impacted Business Entity explorer and drill down to Earth Thermal Tracking.
Expand and select Disk Utilization.
Sort the Description column descending. Now you can see the element, description, severity, and violation type.
Select the top violation element, called Disk DISK-14 of Storage Virtualizer.
In the Impact Details microview, toggle the Host, Virtual Machines, Applications and Business Entities, and Storage icons to view the details of the impact of the violations. Here we see the impact on one application called City Limits, owned by one business entity called Green Corp > Alternate Energy > Geothermal > Earth Thermal Tracking, on one host. However, 10 virtual machines are affected by this violation from one array.
The chart in the Violation Event microview shows the history and trending of the utilization on this one disk over time. From here we can analyze the performance details, as we'll see later in this demo. REVIEW:
What are the categories that show the impact of violations?
What business entity is impacted by these violations?
What is the utilization of this disk?
Which hosts are being affected by this violation?
Which VMs are being affected by the violation?
We'll do some troubleshooting using these violations and Analyze Performance later.
3.4 PORT BALANCE VIOLATIONS
Let's take a look at port balance violations. These are violations showing imbalance in SAN traffic from hosts, arrays, and switches. These are not performance-related violations.
Using either the navigation pane or the dropdown menu, open Assurance.
Select Port Balance Violations.
Group by Type, then Device.
Expand Hosts and sort the Device column ascending.
Select the device Host nj_exch002. Note that this host has a balance index of 81, which measures the difference in the distribution of traffic (the load) between the HBAs on this host. Any index over 50 indicates significantly unbalanced ports on a device.
Select the Switch Port Performance microview. Note that over 88% of the traffic distribution is going across one HBA and only 11% of the traffic is going across the other. A failure of the heavily used HBA could choke that application. This could indicate that port balancing software is not configured correctly on this server, not configured at all, or not installed.
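OnCommand Insight computes the balance index for you. As a rough mental model only (this formula is my illustration, not the product's documented algorithm), you can picture it as the gap between the busiest and least-busy port's share of the device's total traffic:

```python
# Illustrative only (NOT OnCommand Insight's actual algorithm): treat the
# balance index as the spread between the busiest and least-busy port's
# percentage share of total traffic on the device.
def balance_index(port_traffic):
    total = sum(port_traffic)
    shares = [100.0 * t / total for t in port_traffic]  # percent per port
    return max(shares) - min(shares)

print(balance_index([50, 50]))  # 0.0  -> perfectly balanced
print(balance_index([90, 10]))  # 80.0 -> heavily skewed, well over the 50 mark
```

With this picture in mind, an index of 81 on a two-HBA host immediately tells you nearly all the traffic is riding on one port.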
Now collapse Hosts and expand the storage devices.
Select various storage devices and view the traffic distribution in the Switch Port Performance microview to understand the balance across the storage ports.
These port balance violations provide valuable data on how your environment is configured and optimized. They allow you to quickly determine where you need to optimize your configurations based on actual usage. These are balance violations within each device, not necessarily traffic-related performance violations; we'll look at performance a bit later.
3.5 DISK UTILIZATION VIOLATIONS
We looked at disk utilization violations in the Violations Browser a few minutes ago because we were alerted to a violation. But if you didn't look at the error, you can go directly to Disk Utilization Violations here and troubleshoot your issues, similar to how we did in the Violations Browser. The difference is that the Violations Browser breaks down the violations by how they impact your business entities, applications, data centers, and so on, so you can troubleshoot by business priorities, while Disk Utilization Violations lets you easily see your most critical utilization issues and troubleshoot from the disk utilization violation back to the hosts. You can also add columns to show applications and business entities if you want.
Let's take a look at how you can use Disk Utilization Violations to quickly identify and drill down to where your issue is.
Select Disk Utilization Violations from the Assurance menu.
Sort the Utilization column descending to bring your heaviest utilization to the top.
Here you see the utilization of each disk that exceeded the disk utilization threshold we set earlier in our global policy. For each violation you see the disk, the array, the hosts that access this disk, the date and time the violation occurred, and the percentage of utilization, as well as IOPS and throughput.
Select the disk with the highest utilization.
Now select the Volume Usage of Disk microview to get details on volume usage and performance.
Sort the Disk IOPS column descending and select the top volume usage of the disk. Here I see the volume with the highest usage, along with the disk throughput and percentage info by volume and host.
(Screenshot callouts: Exceeded Threshold; Host with highest IOPS; Switch Traffic OK.)
Select the Switch Port Performance microview. I see that my load appears to be balanced (Distribution column) across the storage ports, so that is most likely not a SAN or network configuration issue.
Since this disk did cause a utilization violation, I can identify possible host candidates that could be causing the high utilization on the disk, OR I can see that the disk may have too many volumes carved from it, and I may need to spread that load out across more disks.
4 PERFORMANCE
OnCommand Insight provides performance information end to end. This is different from the violations we discussed above in that it provides pure performance information for volumes, internal volumes (FlexVol volumes), storage pools (aggregates), and disks. Performance also covers VMs, switches, ESX, Hyper-V, VMDKs, and datastores. From here we can troubleshoot congestion, contention, and bottlenecks; identify heavily used storage pools, volumes, disks, and SAN ports; find possible candidates for physical-to-virtual host virtualization; and optimize your storage and tiering.
From the navigation pane or the dropdown menu, expand the Performance menu.
Here we see that OnCommand Insight collects and shows you storage performance, switch performance, datastore performance, VM performance, and even application performance as it relates to storage performance.
4.1 STORAGE PERFORMANCE
From the Performance menu, select Storage Performance.
Sort the Top Volume IOPS column descending (if not already done).
Select the array called Sym-0000500743… in the main view (it should be near the top).
Use the scroll bars to see more performance info in all windows.
Use the horizontal slide bar in the main view to see the volume response times and IOPS, as well as the disk utilization and IOPS columns (far right). Notice there is no internal volume performance information, because the EMC Symmetrix does not contain any. We'll look at a NetApp array shortly to see internal volume (FlexVol) performance.
Now, in the main view, notice the column called Volume Partial R/W. This indicates there are volumes on that array that are misaligned (we'll see more detail later).
Select the microviews at the bottom to show details of disk performance and volume performance. Which microviews did you open? (Hint: view below.) Notice this provides detailed throughput, IOPS, and response times at the volume and disk level.
Close the Disk Performance microview.
Select the column customize icon in the header of the Volume Performance microview.
Use the vertical scroll bar to view all the columns that can be added to or removed from this report.
Select the Partial R/W and Storage columns and click OK. This adds columns to the Volume Performance report, showing you each volume on each array that is misaligned. (Note: You can get a complete list of all your misaligned volumes by selecting all the arrays in the main view above.) Additionally, you can group the volumes by storage to make it easier to view all the misaligned volumes across your entire enterprise by array. (See the figure below.)
Now select the Symmetrix-FAST array and toggle the Chart microview on. Here you see OnCommand Insight showing EMC FAST auto-tiering. You can also see NetApp hybrid aggregates by selecting a NetApp array. We can also chart this performance over time.
(Screenshot callouts: Partial Read/Write indicates volume misalignment; notice FAST-T volume performance.)
OnCommand Insight provides complete end-to-end performance views through virtualized storage. Let's take a look.
Select the storage array called Virtualizer from the main view. (Note: This is a V-Series machine, but OCI provides the same visibility through other virtualizers as well.)
Toggle on the Virtual Machine Performance microview.
Toggle on the Backend Volume Performance and Datastore Performance microviews.
Use the slide bars at the bottom of the microviews to see more of the performance columns in each view. (Whoops… there is a red mark in the Latency column. We will analyze this later.)
I know this is a bit busy, but I wanted to demonstrate that you can have deep performance visibility from the VM, through the datastore, to the frontend virtualizer array, and through the backend volumes. You can also drill performance down to the disks on that backend array, and you can select the Switch Port Performance microview to visualize the performance on the SAN. So you can see very deep performance information from end to end. We will "analyze" these performance paths end to end a bit later.
4.2 SAN PERFORMANCE (SWITCH PORT PERFORMANCE)
Switch performance is the actual performance on the SAN at the switch.
Select Switch Port Performance from the Performance menu. OnCommand Insight knows whether the switches are connected to arrays or hosts, so it shows you the performance in the context of the host or array instead of from the switch perspective.
Using the dropdown at the top of the table, group the main view by "Connected Device Type then Name"
Using the dropdown next to it, set the timeframe to "Last Week" and hit the refresh icon to the right
Sort the Distribution column Descending (arrow pointing down)
Expand Hosts
Expand hosts ny_ora1 and exchange_ny1
If you look at the Value and Distribution columns, you can see how HBAs are balanced on these hosts. On host ny_ora1 you see three HBAs that are balanced very well. But looking at host exchange_ny1, you see that one of your 2 HBAs has over 95% of the traffic load on it while the other one has less than 5% of the traffic. So you can see an imbalance of the load across your HBAs. Perhaps the multipath software is not configured correctly, doesn't work, or is installed but not turned on. However, also look at the fact that one HBA is 4Gb and the other is 2Gb. The admins may have purposely configured this host traffic to compensate for the slower HBA…
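The distribution check described above is just each port's share of the host's total traffic. A minimal sketch, with invented port names and traffic numbers (not taken from the lab database):

```python
# Sketch of the HBA load-distribution check; the 50-point gap used to call
# a host "imbalanced" is a crude rule of thumb, not an OCI setting.
def distribution(ports):
    """Map each port to its percentage share of the host's total traffic."""
    total = sum(ports.values())
    return {p: round(100 * t / total, 1) for p, t in ports.items()}

exchange_ny1 = {"hba0 (4Gb)": 955.0, "hba1 (2Gb)": 45.0}  # MB/s, assumed
shares = distribution(exchange_ny1)
imbalanced = max(shares.values()) - min(shares.values()) > 50

print(shares, imbalanced)  # {'hba0 (4Gb)': 95.5, 'hba1 (2Gb)': 4.5} True
```

As the text notes, a flagged host still needs human judgment: a deliberate 4Gb/2Gb split can look identical to a broken multipath configuration.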
Select the Port Performance Distribution and Port Performance microviews to view
this analysis over time
Select exchange_ny1 from the main menu above. View the performance and distribution of both HBAs. If you select one or the other, the performance and distribution charts change to show you the details of what you've selected. This function is the same throughout the OnCommand Insight GUI.
4.3 CANDIDATES FOR HOST VIRTUALIZATION BASED ON ACTUAL PERFORMANCE
This performance from the host SAN perspective shows you which are the busiest servers and which are candidates for virtualization
Toggle off the 2 performance charts
Collapse the expanded columns using the Collapse All Groups ICON on top of the
main view
Expand Hosts again. Notice your busiest servers are at the top of the list.
Use the vertical slide bar to go to the bottom of the host list to see your least busy hosts. As you see here, there are many hosts near the bottom that have hardly any traffic. Note: If you have a virtualization project going on, you can very quickly isolate which physical hosts don't have much traffic to the applications and conduct your due diligence on those applications for possible relocation to a VM environment.
You can also use the same information here to choose which ESX hosts are good candidates to move those applications to, based on how much traffic they are generating on the SAN
4.4 STORAGE ARRAY PERFORMANCE BASED ON SAN TRAFFIC
We use the same logic and methods to optimize the traffic across the storage ports of the arrays
Collapse the Hosts section and expand the Storage section
You can see the busiest arrays at the top
Expand storage array XP 1024 to see the traffic flow through the storage ports. In this case over 80% of the traffic is going across two of the six ports on the storage array. Not very well balanced. You can rebalance this traffic, OR, using this information, you can select a lesser-used storage port to provision your NEXT Tier 1 application to. This helps you intelligently provision and optimize your environment using real traffic analysis
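The "provision to the quietest port" decision above is a simple minimum over observed port traffic. A sketch with invented port names and throughput figures (the real numbers come from the OCI view, not from this code):

```python
# Illustrative only: choose the least-utilized storage port for the next
# Tier 1 provisioning task. Port names and MB/s values are assumed.
port_traffic_mbps = {
    "CL1-A": 410.0, "CL1-B": 395.0, "CL2-A": 55.0,
    "CL2-B": 48.0, "CL3-A": 30.0, "CL3-B": 12.0,
}

def least_used_port(traffic):
    """Return the port carrying the least traffic."""
    return min(traffic, key=traffic.get)

print(least_used_port(port_traffic_mbps))  # CL3-B
```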
4.5 STORAGE TIERING AND ANALYSIS
Just as you did with the hosts, you can see the storage arrays that are NOT so busy
Scroll to the bottom of the Storage Array list
There are several expensive tier 1 Symmetrix and other arrays at the bottom of this list that have very little traffic accessing them. These arrays may have lots of data on them, but nobody's using it. Armed with this information, you could take a look at the application data on these expensive tier 1 arrays and move the applications to less expensive tier 2 or tier 3 arrays, OR archive the data. Then you can decommission or repurpose these expensive arrays (LOTS of ROI potential here)
4.6 SWITCH ISL TRAFFIC VISIBILITY AND OPTIMIZATION
OnCommand Insight shows you only the ISLs (Inter-Switch Links) under the Switches category
Collapse the Storage category in the main view and expand the Switch category
Expand Switch 78 and hcis300
As we saw with hosts and arrays, we can see exactly how well balanced the traffic is across the ISLs. Switch hcis300 is well balanced, but on Switch 78 we see that 90% of the traffic is going across one switch link and only 9% across the other. If this is a trunk, it is severely out of balance
We also see which are the busiest and least busy switches This allows us to balance out (optimize) our environment as well as weed out the least busy switches
4.7 VIRTUAL MACHINE AND DATA STORE PERFORMANCE TROUBLESHOOTING END TO END PERFORMANCE ISSUES USING "ANALYZE PERFORMANCE"
Let's put all this performance information to good use.
USE CASE: I may have gotten a call from a user complaining that the application on VM-70 is running slow, or I may have received an alert from a threshold being breached. Let's troubleshoot the problem.
Select Virtual Machine Performance
Select Custom from the "Timeframe" dropdown menu next to the grouping menu. Enter the dates January 1, 2012 through now.
Then hit the green recycle (refresh) button next to the dropdown.
Sort the VM Disk Top Latency column descending to get the longest latency at the top
Here we see that in fact VM-70 does not appear to have any performance issues, but we do see very high CPU, Memory, and Datastore latency on VM-60 and VM-61.
Look at column 2. The common factor between VM-70 (the user complaint) and VM-60 is DS-30.
Open the Datastore Performance microview to validate the high latency time
Right click on VM-60 and select Analyze Performance
This opens an analysis of everything associated with VM-60 and DS-30
See the tabs across the top of the window. Each of these tabs provides in-depth visibility into performance within each category
Selecting the Disk tab, I see that although I have a few high "top" utilization values, overall utilization and IOPS are relatively low, so we can rule out a hot-disk issue
Select the Volumes tab and the Internal Volumes tab. I see there are some relatively high top response times but still very low IOPS, which tells me other factors are affecting response time and the slowness of the application on VM-70
Select Backend Volumes; the storage is virtualized, and we can see the performance on the backend volumes here. I see some possibly higher IOPS but still no glaring issues in performance
To make sure I don't have a SAN problem, I select the Switch Performance tab. It shows an imbalance between the 2Gb HBAs on ESX1 (where VM-60 and VM-70 live) and a potential optimization or outage issue, but no gridlock
Select the Hosts tab. This tab shows me that host ESX1 is the same host that holds VM-60 and VM-70. VM-60 appears to be causing very high CPU and memory usage, which is causing contention with time sharing during disk access, thus creating high disk latency. But the disk IOPS are still very low
Deduce that VM-60 is probably not sized right for the application that is driving it hard. This is probably what's causing the disk latency issue, so the chances of a disk issue are slim
4.8 VM PERFORMANCE
VM Performance helps you troubleshoot the same scenarios. Here you can understand what's going on in the whole environment
Select Virtual Machine Performance
Sort the Top Disk Latency column in descending order so the largest latency rises to the top. In this case VM-61 here is chewing up a lot of memory and a lot of CPU time but driving low disk IOPS. The VM appears to be causing the latency issues
Select VM-61. You can open a microview and see the VMDK performance as well
Add chart microview
You can also break it out by volume performance and datastore performance, giving you a more holistic picture of the environment and helping you troubleshoot to resolution
The takeaway is you can troubleshoot performance issues from many different angles and go in many different directions to quickly narrow down the problem
4.9 APPLICATION AND HOST PERFORMANCE
You can add your applications and hosts to any of these performance views to help you understand how your performance is affecting your applications. That is important to the business customer. You can drill down and understand where the performance issue is, with visibility from the application all the way to the disks
Scroll down to ESX1
Use the horizontal slide bars in the main and microviews to see performance info
OnCommand Insight shows you performance from the host perspective all the way back to the storage, but remember it does not have agents on the host, so it cannot show you the details of the performance on the host itself
Review questions:
What is the value of Analyze Performance?
What are the areas where we can view performance metrics under Analyze Performance?
5 PLANNING TOOLS
5.1 TASK AND ACTION PLANNING AND VALIDATION
OnCommand Insight has 2 planning tools to help you plan, validate, and monitor changes in your environment. One is a change management tool and the other is a migration tool for switches only
The change management tool (or What-If) helps you create tasks, and actions within those tasks, using a wizard. It helps you logically configure changes that you need to make, test and validate those changes before you make them, and monitor the progress of the changes as you make them. This significantly reduces your risk when making changes because you can pretest them before you make any actual changes in your environment
NOTE: Remember, OnCommand Insight is a READ-ONLY tool, so it does not perform any active tasks. Use it in the planning, validation, and execution monitoring of your change management
Select Planning Menu
Select Plans to access the tool
Select the task ID oadmin 01082007 – Replace HBA Clearcase1
Notice the Actions list for the task These are generated by you to help you logically and
accurately list out the tasks
To add more actions, simply right click in the action area and select "Add Action"
In the new action window, scroll down and select the action you want to perform. You can add a description and other parameters, then select OK
Then you can pre-validate the actions to ensure you know the results of each action BEFORE you actually perform the task. To do this, right click the task and select Validate Task
As you see below, OnCommand Insight checks each action against the current configuration in your environment to validate what has been completed correctly (GREEN CHECKMARK), what is not completed (BLANK BOX), and what is not completed correctly (RED X)
When you build the action list, OnCommand Insight automatically compares your planned changes to your existing environment and anticipates any future violations that could occur if you made these changes without correcting the planned actions, OR violations that already exist in your environment
Once you complete creating your list of action items, you can right click and validate the actions as many times as you want until they are completed. OnCommand Insight validates every one of these actions. It will show you whether the actions are completed correctly, done wrong, or not completed at all. It gives you a preview of potential issues before you make the changes, thus lowering your risk
5.2 SWITCH MIGRATION TOOL
The migration tool provides you with instantaneous visibility into all of the environment and business entities that will be affected by a migration to new or updated switches. Say you want to just update the firmware on a switch. What if… it goes down in the middle of the upgrade? What does it affect in your environment? Knowing this ahead of time can reduce your risk by giving you the complete picture of who and what will be affected by the interruption
The migration tool allows you to tell OnCommand Insight which switches you want to upgrade or replace. Because OnCommand Insight knows all the hosts, storage arrays, volumes, business units, and applications that are affected by this change, it can provide you with the current violations as well as future violations that will occur when the switches are pulled out. This enables you to validate the total impact of the changes you want to make BEFORE you make them, so you can reduce your risk by fixing issues before they occur
NOTE Remember OnCommand Insight is a READ ONLY tool so it does not perform any migration tasks Use it in the planning and execution monitoring of your migration
Under the Planning menu, select Migrations. This shows you the migration tasks already created and the existing impact of proposed changes on your business entities
To add a new task right click on the task area and select Add Task
Complete the task details above and click Next to select the switch(es) to migrate
Select the switches to be updated or replaced and click Finish
Select the new task in the main screen and use the microviews to provide you with
affected paths impact and quality assurance views
Using this information, you can speed up switch migrations because it cuts the due diligence time, and you lower your risk because you know the impacts before you take any actions
6 DATA WAREHOUSE
6.1 INTRODUCTION AND OVERVIEW
Let's introduce you to the data warehouse. We'll talk about the Datamarts and navigation, then we'll go into the reports, and we'll finish by showing you how to create ad-hoc reports using Query Studio
The data warehouse is made up of several Datamarts Datamarts are sets of data that relate to each other
Open a browser and go to http://localhost:8080/reporting
Log on using admin / admin123
If you receive this page, uncheck the "show this page…" option and select My Home
Data warehouse (DWH) Home Page Public Folders
The data warehouse has several built-in Datamarts. Above you see the 3 primary Datamarts, called the Chargeback Datamart, Inventory Datamart, and Storage Efficiency Datamart. Additionally, we have two folders which contain other Datamarts for Capacity and Performance
Select the Capacity 6.3 folder
As you can see, there are other Capacity-related Datamarts, including the Internal Volume, Volume, Storage and Storage Pool, and VM Capacity Datamarts. They provide you with easy-to-use data elements related to those specific categories, making it easier for you to use the existing reports and, more importantly, helping you create your own custom reports using the drag-and-drop technology we'll show later in this lab
Select the Storage Capacity Datamart
There are 4 folders located within EVERY Datamart. Most built-in reports are in the Reports folder. Any custom reports you create MUST be saved in the Customer Report or Customer Dashboard folders in order to preserve them during upgrades
Select Dashboards (notice the BREADCRUMBS to help you navigate)
Which dashboards are located in the folder
6.2 PLAN - CAPACITY FORECAST DASHBOARD
The data warehouse has over 200 built-in dashboards and reports. Let's take a look at a few
The capacity forecast dashboard provides a history of how storage has been used, as well as trends and forecasts out into the future. It shows by data center and by tier
Select the Capacity Dashboard. This may take a bit of time to paint, so be patient
The capacity forecast dashboard provides you with trending and forecasting of your capacity across your entire environment. NOTE: your data in the picture may vary depending on the demo db you are using and the date (because it's a trending chart).
While we are at it, let's also stage the tiering dashboard in a new window by holding the shift key and selecting the Tiering Dashboard, so we can discuss it as well in a few minutes.
When it first opens, you see in the upper left the Capacity Consumption Forecast report by datacenter and tier. The initial view shows how much storage is left in each datacenter, by tier, before it reaches 80% (adjustable by the user) of capacity. The graph on the right depicts the usage trending and forecasting over time. The "Reset Selection" button resets the graphic to show storage trending across the entire enterprise
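The kind of forecast this dashboard plots can be sketched as a linear trend fitted to daily used-capacity samples, projected forward to the capacity threshold. This is a hedged illustration of the idea, not the dashboard's actual algorithm; the sample data is invented.

```python
# Minimal sketch: least-squares linear trend over daily usage samples,
# projected to the day usage crosses a threshold fraction of raw capacity.
def days_until_threshold(samples_tb, capacity_tb, threshold=0.80):
    """Return days until the trend crosses threshold*capacity, or None if flat/shrinking."""
    n = len(samples_tb)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(samples_tb) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, samples_tb)) \
        / sum((x - mean_x) ** 2 for x in xs)
    if slope <= 0:
        return None  # usage is not growing; no crossing to forecast
    remaining = threshold * capacity_tb - samples_tb[-1]
    return max(0, round(remaining / slope))

usage = [100, 102, 104, 106, 108, 110, 112]  # TB used per day (invented)
print(days_until_threshold(usage, capacity_tb=200))  # 24
```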
Select the Tokyo / Gold-Fast block on the matrix. Notice the graph at the right changes to reflect the storage consumption trending and forecasting for that tier at that datacenter
Reset the Storage Capacity Trend chart by clicking Reset Selection next to the matrix chart. The chart on the right will show the trending and forecasting for the entire enterprise
Scroll down the dashboard to view the list of reports on the right side. Each of the dashboards has a list of related reports on the lower right-hand side. You can select from any number of different reports to provide the detailed information that you need
The dashboard also contains some dial graphics showing you storage consumption and capacity in your enterprise and each datacenter
Continuing down the left side of the dashboard, these charts show you business-level storage consumption by business entities. Here we can drill down to see usage by Tenant, Line of Business, Business Unit, and Project
Right click in this graphic and you can drill down to view storage usage by line of business; drill again to business unit and then project.
As you can see, you get really detailed information on consumption by your business entities, from Tenant to LOB, Business Unit, Project, and Application, in a very quick form.
6.3 TIER DASHBOARD
Let's take a look at the Tier Dashboard that we opened up a few minutes ago by selecting it from the tabs at the bottom of your Windows screen
Note: Your data may vary depending on the database used for this demo
This dashboard gives us a different perspective on how storage is growing and how it is being used. As you see, it looks like the gold tier has remained relatively stable over the past few months, while gold-fast storage, which is more expensive, has grown considerably over the past couple of months. This tells you how your tiering initiatives are progressing. Bronze, which has hardly grown at all, could be an indication that we're spending too much money on storage. You might want to review your storage usage using OnCommand Insight to see how the storage is being consumed, and by whom
Scroll down. Let's look a little closer. OnCommand Insight shows storage usage by business units, applications, and tier. This enables you to understand how storage is being used. You can also view it by data center, tier, and business entity
As we did in the last report, you can right click and drill down to look at consumption by tenant, line of business, business unit, project, and application. You can understand how your data is being consumed at multiple levels and from multiple aspects
Select the "Return" ICON at the top right of the Tier Dashboard to return to the folder
There is a new Storage Tier Report located in the Storage and Storage Pool Data Mart. Let's take a quick look at it
Use the BreadCrumbs to navigate back to the Capacity 6.3 folder
Then select Storage and Storage Pool Capacity DataMart and Reports folder
Next select the Storage Capacity By Tier Report to view the report below. This report shows your capacity by tier and how it trends over time. It also provides great detail and a summary report at the bottom showing each array, its tiers, and how much capacity is used, with percentages (lots of information on a single report)
6.4 ACCOUNTABILITY AND COST AWARENESS
The standard data warehouse chargeback reports are more about accountability than about actual chargeback. We'll show you this now. We'll also show you how to create your own powerful "custom" chargeback/showback reports using Business Insight Advanced later in this lab
Select Public Folders in the breadcrumbs at the top left of the data warehouse window.
Select the Chargeback Datamart.
In the Chargeback Datamart, select the Reports folder to access various reports that show capacity and accountability.
Select Capacity Accountability by Business Entity and Service Level Detail. Here you have the option to customize this report to your needs by selecting service levels, resource types, applications, and host and storage names. You also have the option of selecting the business entity by using the dropdown to select any or all of the business entities and projects
Select all in each category to give you a good representation of the in-depth reporting. Then click Finish
The report provides a very detailed version of the capacity utilization by business entity and application, the host it's running on, the storage array, the volume, and the actual provisioned and used storage. The report is grouped by business unit as well as application; this gives you a good representation of who's using what storage
Note the scroll bar for scrolling on page 1, and you can also use the Page Up/Page Down links at the bottom to go to page 2, etc…
Select the Return Icon in the upper right to return to the folder of reports
6.5 UNCHARGED STORAGE
You can also generate reports that help you understand what storage is NOT being accounted for
Select "Capacity Accountability by Uncharged Capacity per Internal Volume". This provides you with a complete listing, by array and volume, of how much storage is not being charged or accounted for
You get FULL accountability of which storage is being accounted for and which storage is NOT, across the entire enterprise, regardless of storage vendor
6.6 IOPS VS CAPACITY REPORTING IN THE DATA WAREHOUSE
Let's look at performance versus capacity, and orphaned storage by last access. This adds another dimension to how your storage is being used
Open the Performance Datamart (hint: use the breadcrumbs to select Public Folders and then select the Performance Datamart).
Select the Internal Volume Daily Performance folder. This provides a really good pictorial view of how your storage is being used.
Select Reports and select Allocated used internal volume Count by IOPS Ranges. This provides a capacity-versus-IOPS report, which is very interesting
Select the Last Year time period.
Select All Storage models and Tiers and click Finish.
Select all arrays and all tiers to give you a full view of how your storage is being used (or not being used…).
Looking at the results, remember this is storage accessed over the past year. The resulting report shows you all the storage that has (or has not) been accessed over the past year
As you see from the first bar, there are over 7,300 volumes that have not been accessed in the past year. If we look at it in terms of size, over 3.4 PB has had zero access in the past year. Note: this is actual customer data, but the names have been sanitized.
You can see how impactful this is. There is over 3.4 PB of storage that has had zero use for a year. This information enables you to start making business decisions on the storage and to better understand how it's being used, so you can reclaim and re-purpose some of that storage. (Talk about ROI!)
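The reclaim math behind this chart is straightforward: find the volumes with no I/O in the reporting window and sum their capacity. A hedged sketch with invented volume records (field names are not the Datamart's actual column names):

```python
# Illustration: total capacity held by volumes with zero I/O over the
# reporting window. All records below are invented for the example.
volumes = [
    {"name": "symm01:vol12", "capacity_gb": 500, "total_iops_year": 0},
    {"name": "symm01:vol13", "capacity_gb": 750, "total_iops_year": 0},
    {"name": "fas01:vol02", "capacity_gb": 300, "total_iops_year": 88_000},
]

idle = [v for v in volumes if v["total_iops_year"] == 0]
reclaimable_gb = sum(v["capacity_gb"] for v in idle)

print(len(idle), reclaimable_gb)  # 2 1250
```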
6.7 DIGGING INTO THE DETAILS
These charts are really nice, but you need the details to effectively work on identification and recovery. OK, let's go look at the underlying details
Go back to the Volume Daily Performance 6.3 folder and drill down to Reports (hint: it's in the Performance Datamart)
Select the Array Performance report. This gives you a complete breakdown of the performance for all storage, from the arrays all the way down to the volumes.
Select one year and set the IOPS parameter you want to filter on (I usually start at the default).
This report starts with the Orphan Summary
Page down to view the Storage Array summary
As you see, this is pretty high-level. It shows the total amount of raw and allocated capacity in each storage device versus the total IOPS and the max IOPS actually used over the past year. This tells a very compelling story, but it's still high level
Page down a few pages to reach the bottom of this section. You see a Glossary of terms explaining the column headings
Now continue to page down to the Host tables. These show you the hostname, the raw and allocated capacity by host, and the IOPS accessed over the past year. This is more detail than the Storage tables above
Page down past the host tables and you are looking at the orphaned volumes perspective. Here is a great deal of detail that you can use. These are all the volumes that have not been accessed in a full year. It shows you the array name, volume capacities, and hostname, as well as the applications and tiers
Page down to the "Volume by IOPS" tables (may be several pages down). These show you the storage array, volume, capacity, host, application, tier, and the max and total amount of IOPS. So we can say it's a pretty well-rounded report that shows you actual usage (or lack thereof) so you can go reclaim the storage that is not used
6.8 VM CAPACITY REPORTING
There are several different reports in the VM Capacity Datamart.
Navigate to the VM Capacity 6.3 Datamart.
As you see, we have several reports built in here already.
Select VM Capacity 6.3 and then navigate into the Reports folder.
Select VM Capacity Summary
Select all so we see the VM capacity across the entire enterprise (spanning multiple vCenters).
The results show all the VMs, their capacity, the datastore, the actual capacity, the VM names, the provisioned storage, and the commit ratio of each VM across your entire environment. NOTE: I paged down to the bottom so you can see the total storage and commitment across your whole enterprise, plus a glossary of terms.
Select the "return button" in the upper right corner of the report (looks like a left-turn arrow)
Next select the Inactive VMs report to show you VMs that have not been accessed in a defined period of time (default 60 days).
Set this time threshold and click Finish.
This is an excellent report showing you which VMs are powered off and how long they have been powered off, as well as how much capacity each one of them is holding that nobody else can use. It gives you all the details, including the datacenter, VM OS, ESX host, cluster, VMDK, and how long it's been powered off. Armed with this information, you can go recover these VMs and reclaim that storage
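The logic of the Inactive VMs report can be sketched as a simple threshold filter over power-off timestamps. This is an illustration only; the VM records, field names, and dates below are invented, not the Datamart's schema.

```python
# Sketch: VMs powered off longer than the threshold, with the capacity
# each one strands. All data is invented for the example.
from datetime import date

def inactive_vms(vms, today, threshold_days=60):
    """Return (name, days powered off, capacity_gb) for VMs past the threshold."""
    out = []
    for vm in vms:
        if vm["powered_off_since"] is None:
            continue  # VM is running; skip it
        off_days = (today - vm["powered_off_since"]).days
        if off_days > threshold_days:
            out.append((vm["name"], off_days, vm["capacity_gb"]))
    return out

vms = [
    {"name": "VM-12", "powered_off_since": date(2012, 5, 1), "capacity_gb": 40},
    {"name": "VM-70", "powered_off_since": None, "capacity_gb": 60},
]
print(inactive_vms(vms, today=date(2012, 8, 1)))  # [('VM-12', 92, 40)]
```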
7 CREATE AD-HOC REPORT
Let's show you how easy it is to create custom reports in the data warehouse
7.1 HOW TO CREATE A CUSTOM SHOWBACK/CHARGEBACK REPORT USING BUSINESS INSIGHT ADVANCED
Below is a great example of the custom chargeback or showback report that you will create. It shows usage by Business Entity and Application, including the variable cost of each VM based on configuration, fixed overhead, and storage usage
STEPS TO CREATE THIS REPORT
Watch a video on how to create this report. Note: You need a user name and password for this community. To obtain them, click the Become a Member link
The OnCommand Insight Reporting Portal is accessed through http://<reporting-server>:8080/reporting
Enter User name and Password credentials
From the Welcome page select My home
From the Launch menu (at the top right corner of the OnCommand Insight Reporting
portal) select Business Insight Advanced
From the list of all packages that appears, click on the Capacity <version> folder and then click on VM Capacity <version>
Create a new report by selecting New from the dropdown in the upper left corner or
Create New if you are on the Business Insight Advanced landing page
From the pre-defined report layouts in the New pop-up, choose List and click OK
In the lower right pane select the Source tab and expand Advanced Data Mart
from the VM Capacity package
From the Advanced Data Mart expand Business Entity Hierarchy and Business
Entity and drag Tenant and place it on the report work area
Collapse Advanced Data Mart and expand Simple Data Mart
From the Simple Data Mart, drag Application and place it on the report work area to the right of the Tenant column (TIP: Make sure you place it on the blinking gray BAR to the right of the previous column or it will give you an error).
Now we are going to drag multiple columns to the palette to save time building the report.
We will be reporting on the total number of processors (cores) and the memory that is configured for each VM. So let's grab the following elements from the VM Dimension under the Advanced Data Mart.
From Advanced Data Mart expand VM Dimension
Select the next columns IN THE FOLLOWING ORDER:
From Advanced Data Mart > VM Dimension, hold the control key and select the following columns (in order):
o VM Name
o Processors
o Memory
Click and drag VM Name and place it on the report work area to the right of the Application column. NOTE: All the columns should follow in the order you selected them, similar to the screenshot below (your data will differ but the columns will be the same)
Now let's bring Capacity information onto the report
From Simple Data Mart hold the control key and select the following columns (in
order)
o Tier
o Tier Cost
o Provisioned Capacity (GB)
Click and drag the Tier column and place it on the report work area to the right of the Application column. NOTE: All the columns should follow in the order you selected them, similar to the screenshot below (your data will differ but the columns will be the same)
To create a summary of cost per GB, hold the control key and select Tier Cost and Provisioned Capacity (GB).
Then right click the Provisioned Capacity column, select Calculate, and select the multiplication calculation.
Business Insight Advanced has created a new column for you, completed the calculation, and put it in the report
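The calculated column is just tier cost multiplied by provisioned capacity, row by row. A sketch with invented per-GB costs and VM names (the real values come from your Tier Cost and Provisioned Capacity columns):

```python
# Illustration of the Storage Cost calculation the report performs:
# storage_cost = tier_cost_per_gb * provisioned_gb. Data is invented.
rows = [
    {"vm": "VM-60", "tier_cost_per_gb": 3.00, "provisioned_gb": 200.0},
    {"vm": "VM-61", "tier_cost_per_gb": 1.50, "provisioned_gb": 500.0},
]
for row in rows:
    row["storage_cost"] = row["tier_cost_per_gb"] * row["provisioned_gb"]

print([(r["vm"], r["storage_cost"]) for r in rows])
# [('VM-60', 600.0), ('VM-61', 750.0)]
```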
Next let's format and re-title the column name
Right click on the new column head and select Show Properties
In the lower right corner, scroll down to the bottom of the properties box and select the ellipsis on the Data Item Name box. Change the name to Storage Cost and click OK
Note the column heading is now Storage Cost
Now select one of the numeric values in that column and select Data Format
ellipsis from the properties box in the lower right corner
From the Data Format dialog box select currency from the Format type dropdown
As you see from the Properties dialog box, there are lots of options you can set to format the currency numbers in this column. The default is USD, so let's just click OK to accept the default. You will see the column reformat to USD
Here is our current report. Let's filter out storage that is NOT being charged
Select any BLANK cell in the Tier Cost Column and click on the filter ICON in the
top toolbar
Select Exclude Null
Here is our current report. Notice all the cells that had NO cost associated with those tiers are deleted, leaving you with only the storage that has charges associated with it. (TIP: in another report, you can actually reverse the logic and show only storage that is NOT being charged as well…)
You can also format the Tier Cost column with USD currency as well if you want
OK, that was easy, but not complete. Let's add other cost factors into your chargeback report: the cost of VM service levels by configuration, and the fixed overhead costs used by each application.
ADDING VARIABLE COSTS PER VM TO YOUR CHARGEBACK REPORT
Let's say the customer wants to charge per VM based on the number of CPUs and the amount of memory it's configured with. To do that, we first need to create a VM Service Level based on the number of CPUs and amount of memory configured for each VM, then allocate a cost per service level.
To create a VM Service Level, we are going to drop in a small conditional expression that builds the service levels per VM. This is an easy example of the flexibility of Business Insight Advanced in creating reports. (DON'T panic: you can skip the conditional expression and just put a fixed cost on each VM if you want; see the Overhead example later on. But humor me here in this lab.)
Select the Tier column to mark where we want to insert the new columns.
Select the Toolbox tab in the lower-right corner and double-click the Query Calculation icon.
In the Create Calculation dialog box, name the column VM Service Level, select Other Expression, and click OK.
In the Data Item Expression dialog box, copy and paste the following VM Service Level conditional expression into the Expression Definition box and select OK. (Note: if you are remoted into the OnCommand Insight server, you may have to create a text document on the OnCommand server desktop to paste this into prior to pasting it into the Expression box.)
Below is an example of the conditional expression that gives you the if-else logic for VM Service Level:
IF ([Processors] = 2 AND [Memory] < 2049)
THEN ('Bronze')
ELSE (IF ([Processors] = 2 AND [Memory] < 4097)
THEN ('Bronze_Platinum')
ELSE IF ([Processors] = 4 AND [Memory] < 8193)
THEN ('Silver')
ELSE IF ([Processors] = 4 AND [Memory] > 8193)
THEN ('Silver_Platinum')
ELSE IF ([Processors] = 6 AND [Memory] > 8191)
THEN ('Gold')
ELSE IF ([Processors] = 8 AND [Memory] > 16383)
THEN ('Gold_Platinum')
ELSE ('tbd'))
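For reference, here is the same if-else logic rendered as a Python function. This is a sketch only: the thresholds and level names come straight from the expression above, and [Memory] is assumed to be in MB.

```python
def vm_service_level(processors: int, memory_mb: int) -> str:
    """Mirror of the lab's VM Service Level conditional expression."""
    if processors == 2 and memory_mb < 2049:
        return "Bronze"
    if processors == 2 and memory_mb < 4097:
        return "Bronze_Platinum"
    if processors == 4 and memory_mb < 8193:
        return "Silver"
    if processors == 4 and memory_mb > 8193:
        return "Silver_Platinum"
    if processors == 6 and memory_mb > 8191:
        return "Gold"
    if processors == 8 and memory_mb > 16383:
        return "Gold_Platinum"
    return "tbd"  # fallback, as in the final ELSE branch

print(vm_service_level(2, 1024))  # Bronze
print(vm_service_level(4, 4096))  # Silver
```

Note that, exactly as in the expression, a 4-CPU VM with precisely 8193 MB matches neither the Silver nor the Silver_Platinum branch and falls through to 'tbd'.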
Business Insight Advanced will validate the conditional expression (nice to know if you got it right), create the column called VM Service Level, and populate it based on the query. (If you get an error, your conditional expression probably has a syntax or other error.)
You will see a new column added called VM Service Level with the various service levels for each VM, based on the number of CPUs and amount of memory each has. (At this point there may be duplicates in the list, but not to worry; we are not finished formatting or grouping the report.)
Next, let's add a column that calculates the cost per VM based on the service levels we just established.
Select the Toolbox tab in the lower-right corner and double-click the Query Calculation icon.
In the Create Calculation dialog box, name the column Cost Per VM, select Other Expression, and click OK.
In the Data Item Expression dialog box, paste the conditional expression for Cost of VM (below) into the Expression Definition box and select OK.
Example of the conditional expression for Cost per VM:
IF ([VM Service Level] = 'Bronze') THEN (10)
ELSE (IF ([VM Service Level] = 'Bronze_Platinum') THEN (15)
ELSE IF ([VM Service Level] = 'Silver') THEN (20)
ELSE IF ([VM Service Level] = 'Silver_Platinum') THEN (25)
ELSE IF ([VM Service Level] = 'Gold') THEN (40)
ELSE IF ([VM Service Level] = 'Gold_Platinum') THEN (55)
ELSE (30))
You will see a new column added called Cost Per VM with variable costs for each VM based on its service level.
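The Cost per VM expression is effectively a lookup from service level to a flat rate, with 30 as the catch-all. A small Python sketch of the same mapping (the rates are taken from the expression above; the currency unit is whatever your report uses):

```python
# Per-level rates from the lab's Cost per VM expression
COST_PER_VM = {
    "Bronze": 10, "Bronze_Platinum": 15,
    "Silver": 20, "Silver_Platinum": 25,
    "Gold": 40, "Gold_Platinum": 55,
}

def cost_per_vm(service_level: str) -> int:
    # 30 is the ELSE (fallback) rate in the expression
    return COST_PER_VM.get(service_level, 30)

print(cost_per_vm("Gold"))  # 40
print(cost_per_vm("tbd"))   # 30
```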
Next, format the data in the Cost per VM column to USD currency as you did above.
ADDING FIXED OVERHEAD COSTS TO YOUR CHARGEBACK REPORT
Let's say the customer has determined that the total cost for overhead (including items like heat/AC, floor space, power, rent, operations personnel, helpdesk, etc.) is $24 per VM. Let's create a column called Cost of Overhead and apply this fixed cost. (Note: you can do this for any fixed costs rather than use SQL as well.)
Select the Toolbox tab in the lower-right corner and double-click the Query Calculation icon as above.
In the Create Calculation dialog box, name the column Cost of Overhead, select Other Expression, and click OK.
In the Data Item Expression dialog box, enter a cost of 24 in the Expression Definition box and select OK.
You will see a new column added called Cost of Overhead with 24 for each VM. (Note: at this point there may be duplicates in the list, but not to worry; we are not finished formatting or grouping the report.)
Next, format the data in the Cost of Overhead column to USD currency as you did above. Then drag the column header and drop it to the right of the Storage Cost column as shown below.
Subtotaling, naming, and saving the report
Now that we have the cost per VM, the overhead, and the cost of storage usage by tenant, application, and VM, let's sum the total costs and finish formatting the report by tenant and application.
Hold the Control key down and select a numeric cell in each of the Cost per VM, Storage Cost, and Cost of Overhead columns. Right-click one of the numeric cells, select Calculate, and choose the add function for the three columns.
This will create a new column called "Cost per VM + Storage Cost + Cost of Overhead" and calculate each row.
Now format the column for USD currency and retitle the column "Total Cost of Services".
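The new column is just the row-wise sum of the three cost columns. A sketch with hypothetical per-VM line items (the VM names and amounts are illustrative, not lab data):

```python
# Hypothetical per-VM line items from the chargeback report
line_items = [
    {"vm": "vm01", "cost_per_vm": 20, "storage_cost": 150.0, "cost_of_overhead": 24},
    {"vm": "vm02", "cost_per_vm": 40, "storage_cost": 600.0, "cost_of_overhead": 24},
]

# Row-wise sum, as performed by the Calculate > add step
for item in line_items:
    item["total_cost_of_services"] = (
        item["cost_per_vm"] + item["storage_cost"] + item["cost_of_overhead"]
    )

print([i["total_cost_of_services"] for i in line_items])  # [194.0, 664.0]
```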
Name the report "Total Storage, VM, and Overhead Cost by Tenant and Application Chargeback (Showback)" by double-clicking the title area.
Now save it to the Customer Report folder using the same name.
FORMATTING AND GROUPING THE REPORT BY APPLICATION AND TENANT
We are not done yet. Now we need to format the report by grouping, subtotaling, and totaling by tenant and application.
Hold the Control key down and select the Cost per VM, Provisioned Capacity, Storage Cost, Cost of Overhead, and Total Cost of Services columns.
Select the Total icon from the Summary dropdown icon.
If you page down to the bottom of the report, you will see total columns. We'll clean up the summary rows in a minute.
Let's group the report by Tenant and Application.
Hold the Control key down and select the Tenant and Application columns.
Select the Grouping icon from the top toolbar.
CLEANING UP THE REPORT AND RUNNING IT
To clean up the report, right-click and delete the summary ROWS (not columns).
Then go to the bottom of the report, hold the Control key, select both summary rows, right-click, and delete them. (Leave the TOTAL rows.)
Save the report.
Now let's run the report to see how it looks.
Select the Run icon from the toolbar and run the report as HTML. (Note the other formats you can run it in if you want...)
The report will display in its final format like this. I've paged down in the report below to show you subtotals; you can page to the bottom and see the totals by company and the total of all resources charged.
These reports are extremely flexible. Notice the drill-down link in the Tenant column (pictured above in the red circle). If you click the LINK, you will drill down from Tenant to Line of Business, then to Business Unit, and so on. If you right-click the link, you can drill up as well.
You can now schedule this report to run and be distributed in various formats, like any other OnCommand Insight Data Warehouse report.
Remember, now that you have created this report, every time you run it, it will provide the latest usage information. You can automate this report by scheduling it to run and email it to recipients, etc. Lots of flexibility...
72 OTHER OPTIONS FOR AD-HOC REPORTS USING QUERY STUDIO
You can also create simple ad-hoc reports using Query Studio. A very simple example is shown here.
Log onto the Data Warehouse using Admin/admin123 (you must be logged on as Admin to use Query Studio).
From Public Folders, select the Chargeback Datamart.
Select the Launch menu in the upper-right corner of the view and select Query Studio.
The Datamart is split into a "Simple Datamart" and an "Advanced Datamart". The Simple DM contains the elements that most users use for reports; the Advanced DM contains all the facts and dimensions for all the elements. At this point we'll create this report using the Simple DM to show you how easy it is.
Expand the Simple DM and do the following:
Click and drag Business Unit to the palette.
Click and drag the Application element to the palette. You see the applications line up with their proper business units.
Click and drag Tier over to the palette to organize the storage usage by tier.
Click and drag "Provisioned Raw by GB". (You can select megabytes or terabytes as well as gigabytes; I've selected GB because this is from a volume and application perspective.)
To calculate cost, we need to add the "Tier Cost" element to the report.
Click and drag the "Tier Cost" element over and place it between the Provisioned Raw and Tier columns.
To filter out any storage without a tier cost associated, right-click the heading of the Tier Cost column and select Filter (see below for reference):
o Select "Show only the following" (default)
o Select "Missing values" to expand it
o Select "Leave out missing values"
o Select OK
See Results below
Now let's calculate the total cost of usage by GB per application.
Hold the Control key and highlight the "Provisioned Capacity" and Tier Cost columns until they show yellow.
Select the green Calculation icon at the top of the edit icons above, or right-click the columns and select "Calculate".
In the calculation window, select multiplication, title the new column "Cost for Storage", and click Insert. It creates a new column and completes the calculation.
To format the column, right-click on the new column and select Format Data.
Select currency, the number of decimal places (usually 0), and the 1000s separator, and click OK. See how the column is formatted now.
Double-click the "Title" on the report and re-title the report "Chargeback by Application and BU".
Now you don't really need the Tier Cost column, so you can delete it by right-clicking on the column and selecting Delete.
This is a good raw report, but now let's make it more useful.
To group storage cost by Business Unit and Application:
Select the Business Unit column (it turns yellow) and select the Group By icon on the top line.
You see the report reformat itself into cost by application by business unit.
Click the "Save As" icon and save the report to the Public Folders.
Further editing
You can go back and further edit the report like this:
Let's filter out all the N/A values in the BU and Application columns. You have to do this one column at a time.
Right-click the BU column and select Filter.
In the filter dialog window, select "Do not show the following (NOT)" from the "Condition" dropdown.
Select N/A and click OK.
Do the same for the Application column.
Then save the report again.
As you can see, you now have a better-quality report.
To exit Query Studio, click the "Return" icon at the top-right corner of the screen.
8 SCHEDULING REPORTS FOR DISTRIBUTION
OK, now that the report is saved, let's schedule it for running and distribution. You can schedule any of the built-in reports in OnCommand the same way.
Navigate to the chargeback report we just created (you should be looking at wherever you saved it).
Select the Schedule icon on the right-hand side, where you can set the properties.
As you see on the right, you can schedule the start and finish dates.
You can send this report just one time by clicking Disable.
Set the schedule options for weekly, daily, monthly, etc. Schedule this report to run and send it to yourself at 3 p.m. every Tuesday until Feb 1, 2012. As you can see, you can schedule biweekly, several times a week, or several times a day, or you can set it up by month, by year, and even by trigger. As you see, lots of options.
There are a lot of options for report format. The default format is HTML, but we can override that default by clicking and choosing from PDF, Excel, XML, CSV, etc.
For delivery, we can email it, save it, or print the report to a specific printer. You can send the report via e-mail to users, distribution lists, etc. We can include a link to the report or attach it directly to the email as well. NOTE: recipients must be able to log into the OnCommand DWH to access the link.
When you are done, click OK, and the schedule is set.
9 ENDING COMMENTS AND FEEDBACK
I hope this lab was of value to you. Your feedback is important to the quality of this lab document. Please provide feedback to Dave Collins at davec@netapp.com.
NetApp provides no representations or warranties regarding the accuracy, reliability, or serviceability of any information or recommendations provided in this publication, or with respect to any results that may be obtained by the use of the information or observance of any recommendations provided herein. The information in this document is distributed AS IS, and the use of this information or the implementation of any recommendations or techniques herein is a customer's responsibility and depends on the customer's ability to evaluate and integrate them into the customer's operational environment. This document and the information contained herein may be used solely in connection with the NetApp products discussed in this document.
© 2012 NetApp, Inc. All rights reserved. No portions of this document may be reproduced without prior written consent of NetApp, Inc. Specifications are subject to change without notice. NetApp, the NetApp logo, Go further, faster, xxx, and xxx are trademarks or registered trademarks of NetApp, Inc. in the United States and/or other countries. All other brands or products are trademarks or registered trademarks of their respective holders and should be treated as such. TR-XXX-XX
What severities can you set for each threshold?
Select Violation Notification. What are the possible violation notification options?
32 FIBRE CHANNEL POLICY SETTINGS
We set Fibre Channel policies to determine and keep track of path redundancy. We can set options such as no SPOF (single point of failure) or fully redundant. We can set the minimum number of ports on the host and the storage, and the maximum number of switch hops. We can set exceptions for different volume types that would not necessarily require redundancy, like BCVs, R1, and R2. We can also set policy exceptions for smaller volumes that wouldn't have redundancy, like EMC gatekeepers.
From the Policy menu on the menu bar, select Fibre Channel Policy.
What type of redundancy can you set from here?
What is the default number of ports?
Volume Type Exceptions: What volume exemptions can you select?
You can set redundancy policies on physical storage behind a virtualizer (Backend Path).
33 VIOLATIONS BROWSER
Let's take a look at the Violations Browser. The Violations Browser allows us to see the impact of the violations on your business elements in one place and helps us manage the violations on all of those global policies you saw above.
From the Assurance menu on the left, select Violations Browser. (Note: at this point you might want to increase your viewing real estate by closing the navigation pane on the left. To close the navigation pane, go to Tools > OnCommand Insight Settings and uncheck the Navigation Pane box. You can use the same process to turn it back on later.)
Back in the Violations Browser, expand the All Violations explorer to reveal the violation categories. This shows violations like datastore latency, disk utilization, volume and internal volume IOPS and response times, port balance violations, etc. These should look familiar to you from the global policies we reviewed a few minutes ago.
As you can see, you can look at all the violations piled up here: over 12,000 violations. (NOTE: Don't let this scare you. Usually most violations are caused by events that create multiple violations per event; you fix one event and a bunch of these go away.) We can see detail on each of these violations by performing the following:
Select the Show Violations Impact icon to view all the violations in context by business entity, application, host, virtual machine, datacenter, etc.
Expand the Impacted Business Entity explorer and drill down to Earth Thermal Tracking.
Expand and select Disk Utilization.
Sort the Description column descending. Now you can see Element, Description, Severity, and Violation Type.
Select the top violation element, called Disk DISK-14 of Storage Virtualizer.
In the Impact Details microview, toggle the Host, Virtual Machines, Applications and Business Entities, and Storage icons to view the details of the impact of the violations. Here we see the impact on one application called City Limits, owned by one business entity called Green Corp > Alternate Energy > Geothermal > Earth Thermal Tracking, on one host. However, 10 virtual machines are affected by this violation from one array.
The chart in the Violation Event microview shows the history and trending of the utilization on this one disk over time. From here we can analyze the performance details, as we'll see later in this demo.
REVIEW
What are the categories that show the impact of violations?
What business entity is impacted by these violations?
What is the utilization of this disk?
Which hosts are being affected by this violation?
Which VMs are being affected by the violation?
We'll do some troubleshooting using these violations and Analyze Performance later.
34 PORT BALANCE VIOLATIONS
Let's take a look at port balance violations. These are violations showing imbalance in SAN traffic from hosts, arrays, and switches. These are not performance-related violations.
Using either the navigation pane or the dropdown menu, open Assurance.
Select Port Balance Violations.
Group by Type, then Device.
Expand Hosts and sort the Device column ascending.
Select the device Host nj_exch002. Note that this host has a balance index of 81. That index reflects the difference in the distribution of traffic (the load) between the HBAs on this host. Any index over 50 indicates significantly unbalanced ports on a device.
Select the Switch Port Performance microview. Note that over 88% of the traffic is going across one HBA and only 11% across the other. A failure on the heavily used HBA could choke that application. This could indicate that the port balancing software on this server is not configured correctly, not configured at all, or not installed.
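To make those distribution numbers concrete, here is a sketch of how per-HBA traffic shares and an imbalance measure could be computed. The traffic values are hypothetical, and the "spread between busiest and least-busy share" formula is an assumption for illustration only; this lab does not spell out OCI's exact balance-index math.

```python
# Hypothetical traffic totals observed per HBA on one host
hba_traffic = {"hba0": 880.0, "hba1": 110.0}

total = sum(hba_traffic.values())
# Each HBA's share of the host's total traffic, as a percentage
distribution = {hba: 100.0 * t / total for hba, t in hba_traffic.items()}

# Assumed imbalance measure: spread between busiest and least-busy shares
balance_index = max(distribution.values()) - min(distribution.values())
unbalanced = balance_index > 50  # the lab's rule of thumb

print({k: round(v, 1) for k, v in distribution.items()})  # {'hba0': 88.9, 'hba1': 11.1}
print(unbalanced)  # True
```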
Now collapse Hosts and expand the Storage devices.
Select various storage devices and view the traffic distribution in the Switch Port Performance microview to understand the balance across the storage ports.
These port balance violations provide valuable data on how your environment is configured and optimized. They allow you to quickly determine where you need to optimize your configurations based on actual usage. These are balance violations within each device, not necessarily traffic-related performance violations. We'll look at performance a bit later.
35 DISK UTILIZATION VIOLATIONS
We looked at Disk Utilization Violations in the Violations Browser a few minutes ago because we were alerted to a violation. But if you didn't look at the alert, you can go directly to Disk Utilization Violations here and troubleshoot your issues similarly to how we did in the Violations Browser. The difference is that the Violations Browser breaks down the violations by how they impact your business entities, applications, data centers, etc., so you can troubleshoot by business priorities, while Disk Utilization Violations lets you easily see your most critical utilization issues and troubleshoot from the disk utilization violation back to the hosts. You can also add columns to show applications and business entities if you want.
Let's take a look at how you can use Disk Utilization Violations to quickly identify and drill down to where your issue is.
Select Disk Utilization Violations from the Assurance menu.
Sort the Utilization column descending to bring your heaviest utilization to the top.
Here you see the utilization of each disk that exceeded the Disk Utilization threshold we set earlier in our global policy. In relation to each violation, you see the disk, the array, the hosts that access this disk, the date and time the violation occurred, and the percentage of utilization, as well as IOPS and throughput.
Select the disk with the highest utilization.
Now select the Volume Usage of Disk microview to get details on volume usage and performance.
Sort the Disk IOPS descending and select the top Volume Usage of Disk. Here I see the volume with the highest usage, along with the disk throughput and percentage info by volume and host.
(Screenshot callouts: Exceeded Threshold; Host with highest IOPS; Switch Traffic OK.)
Select the Switch Port Performance microview. I see that my load appears to be balanced (Distribution column) across the storage ports, so that's most likely not a SAN or network configuration issue.
Since this disk did cause a utilization violation, I can identify possible host candidates causing the high utilization on the disk, OR I can see that the disk may have too many volumes carved from it and I may need to spread that load out across more disks.
4 PERFORMANCE
OnCommand Insight provides performance information from end to end. This is different from the violations we discussed above in that it provides pure performance information for volumes, internal volumes (FlexVol volumes), storage pools (aggregates), and disks. Performance also covers VMs, switches, ESX, Hyper-V, VMDKs, and datastores. From here we can troubleshoot congestion, contention, and bottlenecks; identify heavily used storage pools, volumes, disks, and SAN ports; find possible candidates for physical-to-virtual host virtualization; and optimize your storage and tiering.
From the navigation pane or the dropdown menu, expand the Performance menu.
Here we see that OnCommand Insight collects and shows storage performance, switch performance, datastore performance, VM performance, and even application performance as it relates to storage performance.
41 STORAGE PERFORMANCE
From the Performance menu, select Storage Performance.
Sort the Top Volume IOPS column descending (if not already done).
Select the array called Sym-0000500743... in the main view (it should be near the top).
Use the scroll bars to see more performance info in all windows.
Use the horizontal slide bar in the main view to see the volume response times and IOPS, as well as the disk utilization and IOPS columns (far right). Notice there is no internal volume performance information, because the EMC Symmetrix does not contain any. We'll look at a NetApp array shortly to see internal volume (FlexVol) performance.
Now, in the main view, notice the column called Volume Partial R/W. This indicates there are volumes on that array that are misaligned (we'll see more detail later).
Select the microviews at the bottom to show details of disk performance and volume performance. Which microviews did you open? (Hint: view below.) Notice this provides detailed throughput, IOPS, and response times at the volume and disk level.
Close the Disk Performance microview.
Select the column customize icon in the header of the Volume Performance microview.
Use the vertical scroll bar to view all the columns that can be added to or removed from this report.
Select the Partial R/W and Storage columns and click OK. This adds columns to the Volume Performance report showing you each volume on each array that is misaligned. (Note: you can get a complete list of all your misaligned volumes by selecting all the arrays in the main view above.) Additionally, you can group the volumes by storage to make it easier to view all the misaligned volumes across your entire enterprise by array. (See figure below.)
Now select the Symmetrix-FAST array and toggle the Chart microview on. Here you see OnCommand Insight showing EMC FAST auto-tiering. You can also see NetApp Hybrid Aggregates by selecting a NetApp array. We can also chart this performance over time.
(Screenshot callouts: Partial Read/Write indicates volume misalignment; notice the FAST-T volume performance.)
OnCommand Insight provides complete end-to-end performance views through virtualized storage. Let's take a look.
Select the storage array called Virtualizer from the main view. (Note: this is a V-Series machine, but OCI provides the same visibility through other virtualizers as well.)
Toggle on the Virtual Machine Performance microview.
Toggle on the Backend Volume Performance and Datastore Performance microviews.
Use the slide bars at the bottom of the microviews to see more of the performance columns in each view. (Whoops... there is a red mark in the Latency column. We will analyze this later.)
I know this is a bit busy, but I wanted to demonstrate that you have deep performance visibility from the VM, through the datastore, to the frontend virtualizer array, and through to the backend volumes. You can also drill performance down to the disks on that backend array, and you can select the Switch Port Performance microview to visualize the performance on the SAN. So you can see very deep performance information from end to end. We will "analyze" these performance views from end to end a bit later.
42 SAN PERFORMANCE (SWITCH PORT PERFORMANCE)
Switch performance is the actual performance on the SAN, at the switch.
Select Switch Port Performance from the Performance menu. OnCommand Insight knows whether the switches are connected to arrays or hosts, so it shows you the performance in the context of the host or array instead of from the switch perspective.
Using the dropdown at the top of the table, group the main view by "Connected Device Type then Name".
Using the dropdown next to it, set the timeframe to "Last Week" and hit the refresh icon to the right.
Sort the Distribution column descending (arrow pointing down).
Expand Hosts.
Expand hosts ny_ora1 and exchange_ny1.
If you look at the Value and Distribution columns, you can see how the HBAs are balanced on these hosts. On host ny_ora1 you see three HBAs that are balanced very well. But looking at host exchange_ny1, you see that one of your two HBAs has over 95% of the traffic load on it, while the other has less than 5%. So you can see an imbalance of the load across your HBAs. Perhaps the multipath software is not configured correctly, doesn't work, or is installed but not turned on. However, also note that one HBA is 4Gb and the other is 2Gb; the admins may have purposely configured this host's traffic to compensate for the slower HBA...
Select the Port Performance Distribution and Port Performance microviews to view this analysis over time.
Select exchange_ny1 from the main view above. View the performance and distribution of both HBAs. If you select one or the other, the performance and distribution charts change to show you the details of what you've selected. This behavior is the same throughout the OnCommand Insight GUI.
43 CANDIDATES FOR HOST VIRTUALIZATION BASED ON ACTUAL PERFORMANCE
This performance view from the host SAN perspective shows you which are the busiest servers and which are candidates for virtualization.
Toggle off the two performance charts.
Collapse the expanded columns using the Collapse All Groups icon on top of the main view.
Expand Hosts again. Notice your busiest servers are at the top of the list.
Use the vertical slide bar to go to the bottom of the host list to see your least busy hosts. As you see here, there are many hosts down near the bottom that have hardly any traffic. Note: if you have a virtualization project going on, you can very quickly isolate which physical hosts don't have much traffic to their applications and conduct your due diligence on those applications for possible relocation to a VM environment.
You can also use the same information here to choose which ESX hosts are good candidates to receive those applications, based on how much traffic they are generating on the SAN.
44 STORAGE ARRAY PERFORMANCE BASED ON SAN TRAFFIC
We use the same logic and methods to optimize the traffic across the storage ports of the arrays.
Collapse the Hosts section and expand the Storage section.
You can see the busiest arrays at the top.
Expand storage array XP 1024 to see the traffic flow through the storage ports. In this case over 80% of the traffic is going across two of the six ports on the storage array. Not very well balanced. You can rebalance this traffic, OR, using this information, you can select a lesser-used storage port to provision your NEXT Tier 1 application to. This helps you intelligently provision and optimize your environment using real traffic analysis.
45 STORAGE TIERING AND ANALYSIS
Similarly to what you did with the hosts, you can see the storage arrays that are NOT so busy.
Scroll to the bottom of the storage array list.
There are several expensive Tier 1 Symmetrix and other arrays at the bottom of this list that have very little traffic accessing them. These arrays may have lots of data on them, but nobody's using it. Armed with this information, you could take a look at the application data on these expensive Tier 1 arrays and move the applications to less expensive Tier 2 or Tier 3 arrays, OR archive the data. Then you can decommission or repurpose these expensive arrays. (LOTS of ROI potential here.)
46 SWITCH ISL TRAFFIC VISIBILITY AND OPTIMIZATION
OnCommand Insight shows you only the ISLs (Inter-Switch Links) under the Switches category.
Collapse the Storage category in the main view and expand the Switch category.
Expand Switch 78 and hcis300.
As we saw with hosts and arrays, we can see exactly how well balanced the traffic is across the ISLs. Switch hcis300 is well balanced, but on Switch 78 we see that 90% of the traffic is going across one switch link and only 9% across the other. If this is a trunk, it is severely out of balance.
We also see which are the busiest and least busy switches. This allows us to balance out (optimize) our environment, as well as weed out the least busy switches.
47 VIRTUAL MACHINE AND DATA STORE PERFORMANCE: TROUBLESHOOTING END-TO-END PERFORMANCE ISSUES USING "ANALYZE PERFORMANCE"
Let's put all this performance information to good use.
USE CASE: I may have gotten a call from a user complaining that the application on VM-70 is running slow, or I may have received an alert from a threshold being breached. Let's troubleshoot the problem.
Select Virtual Machine Performance.
Select Custom from the "Timeframe" dropdown menu next to the grouping menu. Enter the dates January 1, 2012 through now.
Then hit the green refresh button next to the dropdown.
Sort the VM Disk Top Latency column descending to get the longest latency at the top.
Here we see that, in fact, VM-70 does not appear to have any performance issues, but we do see very high CPU, memory, and datastore latency on VM-60 and VM-61.
Look at column 2. The common factor between VM-70 (the user complaint) and VM-60 is DS-30.
Open the Datastore Performance microview to validate the high latency time.
Right-click on VM-60 and select Analyze Performance.
This opens an analysis of everything associated with VM-60 and DS-30.
See the tabs across the top of the window. Each of these tabs provides in-depth visibility into performance within each category.
Selecting the Disk tab, I see that although I have a few high "top" utilizations, overall utilization and IOPS are relatively low, so we can rule out a hot disk issue.
Select the Volumes tab and the Internal Volumes tab. I see there are some relatively high Top Response times but still very low IOPS, which tells me other factors are affecting response time and the slowness of the application on VM-70.
Select Backend Volumes; we see the storage is virtualized, and we can see the performance on the backend volumes here. I see some possibly higher IOPS, but still no glaring issues in performance.
To make sure I don't have a SAN problem, I select the Switch Performance tab. It shows an imbalance between the 2Gb HBAs on ESX1 (where VM-60 and VM-70 are) and a potential optimization or outage issue, but no gridlock.
Select the Hosts tab. This tab shows me that host ESX1 is the same host that holds VM-60 and VM-70. VM-60 appears to be causing very high CPU and memory usage, which is causing contention with time sharing during disk access, thus creating high disk latency. But the disk IOPS are still very low.
Deduce that VM-60 is probably not sized right for the application that is driving it hard. This is probably what's causing the disk latency issue, so the chances of a disk issue are slim.
4.8 VM PERFORMANCE
VM Performance helps you troubleshoot the same scenarios. Here you can understand what's going on in the whole environment.
Select Virtual Machine Performance.
Sort the Top Disk Latency column in descending order so the largest latency rises to the top. In this case, VM-61 is chewing up a lot of memory and a lot of CPU time but using low disk IOPS. The VM appears to be causing the latency issues.
Select VM-61. You can open a micro view and see the VMDK performance as well.
Add a chart microview.
You can also break it out by volume performance and data store performance, giving you a more holistic picture of the environment and helping you troubleshoot to resolution.
The takeaway is that you can troubleshoot performance issues from many different angles and go in many different directions to quickly narrow down the problem.
4.9 APPLICATION AND HOST PERFORMANCE
You can add your applications and hosts to any of these performance views to help you understand how your performance is affecting your applications. That is important to the business customer. You can drill down and understand where the performance issue is, with visibility from the application all the way to the disks.
Scroll down to ESX1.
Use the horizontal slide bars in the main and microviews to see performance info.
OnCommand Insight shows you performance from the host perspective all the way back to the storage, but remember it does not have agents on the host, so it cannot show you the details of the performance on the host itself.
Review questions:
What is the value of Analyze Performance?
What are the areas where we can view performance metrics under Analyze Performance?
5 PLANNING TOOLS
5.1 TASK AND ACTION PLANNING AND VALIDATION
OnCommand Insight has 2 planning tools to help you plan, validate, and monitor changes in your environment. One is a change management tool, and the other is a migration tool for switches only.
The change management tool (or What-If) helps you create tasks and actions within those tasks using a wizard. It helps you logically configure changes that you need to make, test and validate those changes before you make them, and monitor the progress of changes as you make them. This significantly reduces your risk when making changes because you can pretest them before you make any actual changes in your environment.
NOTE: Remember, OnCommand Insight is a READ-ONLY tool, so it does not perform any active tasks. Use it in the planning, validating, and execution monitoring of your change management.
Select the Planning menu.
Select Plans to access the tool.
Select the task ID oadmin 01082007 – Replace HBA Clearcase1.
Notice the Actions list for the task. These are generated by you to help you logically and accurately list out the tasks.
To add more actions, simply right-click in the action area and select "Add Action".
In the new action window, scroll down and select the action you want to perform. You can add a description and other parameters, then select OK.
Then you can pre-validate the actions to ensure you know the results of each action BEFORE you actually perform the task. To do this, right-click the task and select Validate task.
As you see below, OnCommand Insight validates each action against the current configuration in your environment to show what has been completed correctly (GREEN CHECKMARK), what is not completed (BLANK BOX), and what is not completed correctly (RED X).
When you build the action list, OnCommand Insight automatically compares your planned changes to your existing environment and anticipates any future violations that could occur if you made these changes without correcting planned actions, OR violations that already exist in your environment.
Once you complete creating your list of action items, you can right-click and validate the actions as many times as you want until completed. OnCommand Insight validates every one of these actions. It will show you whether the actions are completed correctly, done wrong, or not completed at all. It gives you a preview into potential issues before you make the changes, thus lowering your risk.
5.2 SWITCH MIGRATION TOOL
The migration tool provides you with instantaneous visibility into all of the environment and business entities that will be affected by a migration to new or updated switches. Say you want to just update the firmware on a switch. What if it goes down in the middle of the upgrade? What does it affect in your environment? Knowing this ahead of time can reduce your risk by giving you the complete picture of who and what will be affected by the interruption.
The Migration tool allows you to tell OnCommand Insight which switches you want to upgrade or replace. Because OnCommand Insight knows all hosts, storage arrays, volumes, business units, and applications that are affected by this change, it can provide you with the current violations as well as future violations that will occur when the switches are pulled out. This enables you to validate the total impact of the changes you want to make BEFORE you make them, so you can reduce your risk by fixing issues before they occur.
NOTE: Remember, OnCommand Insight is a READ-ONLY tool, so it does not perform any migration tasks. Use it in the planning and execution monitoring of your migration.
Under the Planning menu, select Migrations. This shows you the migration tasks already created and the existing impact of proposed changes on your business entities.
To add a new task, right-click in the task area and select Add Task.
Complete the task details above and click Next to select the switch(es) to migrate.
Select the switches to be updated or replaced and click Finish.
Select the new task in the main screen and use the microviews to provide you with affected paths, impact, and quality assurance views.
Using this information, you can speed up the time to migrate switches because it cuts the due diligence time, and it lowers your risk because you know the impacts before you take any actions.
6 DATA WAREHOUSE
61 INTRODUCTION AND OVERVIEW
Let's introduce you to the data warehouse. We'll talk about the Datamarts and navigation, then we'll go into the reports, and we'll finish by showing you how to create ad-hoc reports using Query Studio.
The data warehouse is made up of several Datamarts. Datamarts are sets of data that relate to each other.
Open a browser and go to http://localhost:8080/reporting.
Log on using admin/admin123.
If you receive this page, uncheck the "Show this page…" option and select My Home.
Data warehouse (DWH) Home Page Public Folders
The data warehouse has several built-in Datamarts throughout. Above you see the 3 primary Datamarts: the Chargeback Datamart, Inventory Datamart, and Storage Efficiency Datamart. Additionally, we have two folders which contain other Datamarts for Capacity and Performance.
Select the Capacity 6.3 folder.
As you can see, there are other capacity-related Datamarts, including the Internal Volume, Volume, Storage and Storage Pool, and VM Capacity Datamarts. Datamarts provide you with easy-to-use data elements related to those specific categories, making it easier for you to use the existing reports and, more importantly, helping you create your own custom reports using the drag-and-drop technology we'll show later in this lab.
Select the Storage Capacity Datamart.
There are 4 folders located within EVERY Datamart. Most built-in reports are in the Reports folder. Any custom reports you create MUST be saved in the Customer Report or Customer Dashboard folders in order to preserve them during upgrades.
Select Dashboards (notice the BREADCRUMBS to help you navigate).
Which dashboards are located in the folder?
6.2 PLAN - CAPACITY FORECAST DASHBOARD
The data warehouse has over 200 built-in dashboards and reports. Let's take a look at a few.
The capacity forecast dashboard provides a history of how storage has been used, as well as trends and forecasts out into the future. It shows by data center and by tier.
Select the Capacity Dashboard. This may take a bit of time to paint, so be patient.
The capacity forecast dashboard provides you with trending and forecasting of your capacity across your entire environment. NOTE: your data in the picture may vary depending on the demo db you are using and the date (because it's a trending chart).
While we are at it, let's also stage the tiering dashboard in a new window by holding the shift key and selecting the Tiering Dashboard, so we can discuss it as well in a few minutes.
When it first opens, you see in the upper left the Capacity Consumption Forecast report by datacenter and tier. The initial view shows how much storage is left in each datacenter, by tier, before it reaches 80% (adjustable by the user) of capacity. The graph on the right depicts the usage trending and forecasting over time. The "Reset Selection" button resets the graphic to show storage trending across the entire enterprise.
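The trending/forecasting idea behind this view can be sketched with a simple linear fit: extrapolate the growth rate and see how many months remain until usage crosses the 80% threshold. The sample data and the `months_until_threshold` helper below are invented for illustration; this is not how the dashboard is actually implemented internally.

```python
# Minimal sketch of the kind of trend/forecast the dashboard draws:
# fit a least-squares line to monthly used-capacity samples and project
# how many months remain until usage crosses 80% of capacity.
# The sample numbers below are invented for illustration.
def months_until_threshold(used_tb, capacity_tb, threshold=0.8):
    n = len(used_tb)
    xs = list(range(n))
    x_mean = sum(xs) / n
    y_mean = sum(used_tb) / n
    num = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, used_tb))
    den = sum((x - x_mean) ** 2 for x in xs)
    slope = num / den  # TB of growth per month
    remaining = capacity_tb * threshold - used_tb[-1]
    return remaining / slope if slope > 0 else float("inf")

# Six months of usage growing ~5 TB/month against a 200 TB tier
print(months_until_threshold([100, 105, 110, 115, 120, 125], 200))  # 7.0 months left
```

The dashboard does this per datacenter/tier cell, which is why clicking a block changes the forecast chart on the right.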
Select the Tokyo/Gold-Fast block on the matrix. Notice the graph at the right changes to reflect the storage consumption trending and forecasting for that tier at that datacenter.
Reset the Storage Capacity Trend chart by clicking Reset Selection next to the matrix chart. The chart on the right will show the trending and forecasting for the entire enterprise.
Scroll down the dashboard to view the list of reports on the right side. Each of the dashboards has a list of related reports on the lower right-hand side. You can select from any number of different reports to provide the detailed information that you need.
The dashboard also contains some dial graphics showing you storage consumption and capacity in your enterprise and in each datacenter.
Continuing down the left side of the dashboard, these charts show you business-level storage consumption by business entities. Here we can drill down to see usage by Tenant, Line of Business, Business Unit, and Project.
Right-click in this graphic and you can drill down to view storage usage by line of business; drill again to business unit and by project.
As you can see, you get really detailed information on consumption by your business entities, from Tenant, LOB, Business Unit, and Project to Application, in a very quick form.
6.3 TIER DASHBOARD
Let's take a look at the tiers dashboard that we opened up a few minutes ago by selecting it from the tabs at the bottom of your Windows screen.
Note: Your data may vary depending on the database used for this demo.
This dashboard gives us a different perspective on how storage is growing and how it is being used. As you see, it looks like the gold tier has remained relatively stable over the past few months, while gold-fast storage, which is more expensive, has grown considerably over the past couple of months. This tells you how your tiering initiatives are progressing. Bronze, which has hardly grown at all, could be an indication that we're spending too much money on storage. You might want to review your storage usage using OnCommand Insight to see how the storage is being consumed and by whom.
Scroll down. Let's look a little closer. OnCommand Insight shows storage usage by business units, applications, and by tier. This enables you to understand how storage is being used. You can also view it by data center, tier, and business entity.
As we did in the last report, you can right-click and drill down to look at consumption by tenant, line of business, business unit, project, and application. You can understand how your data is being consumed at multiple levels and from multiple aspects.
Select the "Return" ICON at the top right of the Tier Dashboard to return to the folder.
There is a new Storage Tier Report located in the Storage and Storage Pool Data Mart. Let's take a quick look at it.
Use the BreadCrumbs to navigate back to the Capacity 6.3 folder.
Then select the Storage and Storage Pool Capacity DataMart and the Reports folder.
Next, select the Storage Capacity By Tier Report to view the report below. This report shows your capacity by tier and how it trends over time. It also provides great detail and a summary report at the bottom showing each array's tiers, how much capacity is used, and the percentage (lots of information on a single report).
6.4 ACCOUNTABILITY AND COST AWARENESS
The standard data warehouse chargeback reports are more about accountability than full chargeback. We'll show you this now. We'll also show you how to create your own powerful "custom" chargeback/showback reports using Business Insight Advanced later in this lab.
Select Public Folders in the breadcrumbs at the top left of the Data Warehouse window.
Select the Chargeback Datamart.
In the Chargeback Datamart, select the Reports folder to access various reports that show capacity and accountability.
Select Capacity Accountability by Business Entity and Service Level Detail. Here you have the option to customize this report to your needs by selecting service levels, resource types, applications, and host and storage names. You also have the option of selecting the business entity by using the dropdown to select any or all of the business entities and projects.
Select all in each category to give you a good representation of the in-depth reporting. Then click Finish.
The report provides a very detailed version of the capacity utilization by business entity, the application and the host it's running on, the storage array, the volume, and the actual provisioned and used storage. The report is grouped by business unit as well as application; this provides you with a good representation of who's using what storage.
Note the scroll bar for scrolling on page 1; you can also use the Page Up/Page Down links at the bottom to go to page 2, etc.
Select the Return icon in the upper right to return to the folder of reports.
6.5 UNCHARGED STORAGE
You can also generate reports that help you understand what storage is NOT being accounted for.
Select "Capacity Accountability by Uncharged Capacity per Internal Volume". This provides you with a complete listing, by array and volume, of how much storage is not being charged or accounted for.
You get FULL accountability of which storage is being accounted for and which storage is NOT being accounted for across the entire enterprise, regardless of storage vendor.
6.6 IOPS VS CAPACITY REPORTING IN THE DATA WAREHOUSE
Let's look at performance versus capacity and orphaned storage by last access. This adds another dimension to how your storage is being used.
Open the Performance Datamart (hint: use the breadcrumbs to select Public Folders and then select the Performance Datamart).
Select the Internal Volume Daily Performance folder. This provides a really good pictorial view of how your storage is being used.
Select Reports and select Allocated used internal volume Count by IOPS Ranges. This provides a capacity-versus-IOPS report, which is very interesting.
Select the Last Year time period.
Select All Storage models and Tiers and click Finish.
Select all arrays and all tiers to give you a full view of how your storage is being used (or not being used…).
Looking at the results: remember, this is storage accessed over the past year. The resulting report shows you all the storage that has (or has not) been accessed over the past year.
As you see from the first bar, there are over 7300 volumes that have not been accessed in the past year. If we look at it in terms of size, over 3.4 PB has had zero access in the past year. Note: this is actually a real customer, but the names have been sanitized.
You can see how impactful this is. There is over 3.4 PB of storage that has had zero use for a year. This information enables you to start making business decisions on that storage and to better understand how it's being used, so you can reclaim and re-purpose some of it. (Talk about ROI!)
6.7 DIGGING INTO THE DETAILS
These charts are really nice, but you need the details to effectively work on identification and recovery. OK, let's go look at the underlying details.
Go back to the Volume Daily Performance 6.3 folder and drill down to Reports (hint: it's in the Performance Datamart).
Select the Array Performance report. This gives you a complete breakdown of the performance for all storage, from the arrays all the way down to the volumes.
Select one year and set the IOPS parameter you want to filter on (I usually start at the default).
This report starts with the Orphan Summary.
Page down to view the Storage array summary.
As you see, this is pretty high-level. It shows the total amount of raw and allocated capacity in each storage device vs. the total IOPS and the max IOPS actually used over the past year. This tells a very compelling story, but it's still high level.
Page down a few pages to reach the bottom of this section. You will see a Glossary of terms explaining the column headings.
Now continue to page down to the Host tables. These show you the hostname, the raw and allocated capacity by host, and the IOPS accessed over the past year. This is more detail than the Storage tables above.
Page down past the host tables and you are looking at the orphaned volumes perspective. Here is a great deal of detail that you can use. These are all the volumes that have not been accessed in a full year. It shows you the array name, volume capacities, and hostname, as well as the applications and tiers, for everything that has not been accessed in the last year.
Page down to the "Volume by IOPS" tables (may be several pages down). These show you the storage array, volume, capacity, host, application, tier, and the max and total amount of IOPS. So we can say it's a pretty well-rounded report that shows you actual usage (or lack thereof), so you can go reclaim the storage that is not used.
6.8 VM CAPACITY REPORTING
There are several different reports in the VM Capacity Datamart.
Navigate to the VM Capacity 6.3 Datamart.
As you see, we have several reports built in here already.
Select VM Capacity 6.3 and then navigate into the Reports folder.
Select VM Capacity Summary.
Select all so we see the VM capacity across the entire enterprise (spanning multiple vCenters).
The results show all the VMs, their capacity, the data store, the actual capacity, the VM names, the provisioned storage, and the commit ratio of each VM across your entire environment. NOTE: I paged down to the bottom so you can see the total storage and commitment across your whole enterprise, plus a glossary of terms.
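As a rough sketch of what a commit (overcommit) ratio means in this context: the total capacity provisioned to VMs divided by the datastore's actual capacity, where values above 1.0 indicate thin-provisioned overcommitment. The VM names and sizes below are made up, and this is an assumption about the metric, not the report's exact formula.

```python
# Hedged sketch of a datastore "commit ratio": provisioned-to-VMs capacity
# divided by actual datastore capacity. Values > 1.0 mean overcommitment.
# VM names and GB figures are invented for illustration.
def commit_ratio(provisioned_gb, datastore_capacity_gb):
    return round(sum(provisioned_gb) / datastore_capacity_gb, 2)

vms = {"VM-60": 400, "VM-61": 300, "VM-70": 500}
print(commit_ratio(vms.values(), 1000))  # 1200 GB promised on a 1000 GB datastore -> 1.2
```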
Select the "return" button in the upper right corner of the report (it looks like a left-turn arrow).
Next, select the Inactive VMs report to show you VMs that have not been accessed in a defined period of time (default 60 days).
Set this time threshold and click Finish.
This is an excellent report showing you which VMs are powered off, how long they have been powered off, and how much capacity each one of those is holding that nobody else can use. It gives you all the details, including the datacenter, VM OS, ESX host, cluster, and VMDK, and how long it's been powered off. Armed with this information, you can go recover these VMs and reclaim the storage.
7 CREATE AD-HOC REPORT
Let's show you how easy it is to create custom reports in the data warehouse.
7.1 HOW TO CREATE A CUSTOM SHOWBACK/CHARGEBACK REPORT USING BUSINESS INSIGHT ADVANCED
Below is a great example of the custom Chargeback or Showback Report that you will create. It shows usage by Business Entity and Application, including the variable cost of each VM based on configuration, fixed overhead, and storage usage.
STEPS TO CREATE THIS REPORT
Watch a video on how to create this report. Note: You need a user name and password for this community. To obtain them, click the Become a Member link.
The OnCommand Insight Reporting Portal is accessed through http://<reporting-server>:8080/reporting.
Enter your User name and Password credentials.
From the Welcome page, select My home.
From the Launch menu (at the top right corner of the OnCommand Insight Reporting portal), select Business Insight Advanced.
From the list of all packages that appears, click on the Capacity <version> folder and then click on VM Capacity <version>.
Create a new report by selecting New from the dropdown in the upper left corner, or Create New if you are on the Business Insight Advanced landing page.
From the pre-defined report layouts in the New pop-up, choose List and click OK.
In the lower right pane, select the Source tab and expand Advanced Data Mart from the VM Capacity package.
From the Advanced Data Mart, expand Business Entity Hierarchy and Business Entity, and drag Tenant onto the report work area.
Collapse Advanced Data Mart and expand Simple Data Mart.
From Simple Data Mart, drag Application and place it on the report work area to the right of the Tenant column. (TIP: Make sure you place it on the blinking gray BAR to the right of the previous column, or it will give you an error.)
Now we are going to drag multiple columns onto the palette to save time building the report.
We will be reporting on the total number of processors (cores) and the memory that is configured for each VM. So let's grab the following elements from the VM Dimension under the Advanced Data Mart.
From Advanced Data Mart, expand VM Dimension.
Select the next columns IN THE FOLLOWING ORDER:
From Advanced Data Mart > VM Dimension, hold the control key and select the following columns (in order):
o VM Name
o Processors
o Memory
Click and drag VM Name and place it on the report work area to the right of the Application column. NOTE: All the columns should follow in the order you selected them, similar to the screenshot below (your data will differ, but the columns will be the same).
Now let's bring capacity information onto the report.
From Simple Data Mart, hold the control key and select the following columns (in order):
o Tier
o Tier Cost
o Provisioned Capacity (GB)
Click and drag the Tier column and place it on the report work area to the right of the Application column. NOTE: All the columns should follow in the order you selected them, similar to the screenshot below (your data will differ, but the columns will be the same).
To create a summary of cost per GB, hold the control key and select Tier Cost and Provisioned Capacity (GB).
Then right-click the Provisioned Capacity column, select Calculate, and select the multiplication calculation.
Business Insight Advanced has created a new column for you, completed the calculations, and put it in the report.
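Behind the scenes, the calculated column is just Tier Cost ($/GB) multiplied by Provisioned Capacity (GB). A quick sketch of the arithmetic, with an invented tier rate:

```python
# The calculated column above is Tier Cost ($/GB) x Provisioned Capacity (GB);
# the $0.50/GB rate and 250 GB figure are made-up examples, not lab data.
def storage_cost(tier_cost_per_gb, provisioned_gb):
    return tier_cost_per_gb * provisioned_gb

print(storage_cost(0.50, 250))  # -> 125.0 dollars
```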
Next, let's format and re-title the column.
Right-click on the new column header and select Show Properties.
In the lower right corner, scroll down to the bottom of the properties box and select the ellipsis on the Data Item Name box. Change the name to Storage Cost and click OK.
Note the column heading is now Storage Cost.
Now select one of the numeric values in that column and select the Data Format ellipsis from the properties box in the lower right corner.
From the Data Format dialog box, select Currency from the Format type dropdown.
As you see from the Properties dialog box, there are lots of options you can set to format the currency numbers in this column. The default is USD, so let's just click OK to accept the default. You will see the column reformat to USD.
Here is our current report. Let's filter out storage that is NOT being charged.
Select any BLANK cell in the Tier Cost column and click on the filter ICON in the top toolbar.
Select Exclude Null.
Notice that all the rows that had NO cost associated with those tiers are removed, leaving you with only the storage that has charges associated with it. (TIP: in another report, you can actually reverse the logic and show only the storage that is NOT being charged as well…)
You can also format the Tier Cost column with USD currency if you want.
OK, that was easy, but not complete. Let's add other cost factors into your chargeback report: the cost of VM Service Levels by configuration and the fixed overhead costs used by each application.
ADDING VARIABLE COSTS PER VM TO YOUR CHARGEBACK REPORT
Let's say the customer wants to charge per VM based on the number of CPUs and the memory it's configured with. To do that, we need to first create a VM Service Level comprised of the number of CPUs and memory configured for each VM, then allocate a cost per Service Level.
To create a VM Service Level, we are going to drop in a small conditional expression to build the Service Levels per VM. This is an easy example of the flexibility of Business Insight Advanced in creating reports. (DON'T panic: you can skip the conditional expression and just put a fixed cost on each VM if you want. See the Overhead example later on… but humor me here in this lab.)
Select the Tier column to mark where we want to insert the new columns.
Select the Toolbox tab at the lower right corner and double-click the Query Calculation ICON.
In the Create Calculation dialog box, name the column VM Service Level, select Other Expression, and click OK.
In the Data Item Expression dialog box, copy and paste the following VM Service Level conditional expression into the Expression Definition box and select OK. (Note: if you are remoted into the OnCommand Insight server, you may have to create a text document on the OnCommand server desktop to cut and paste this into prior to pasting it into the Expression box.)
Below is an example of the conditional expression that gives you the if-else condition for VM Service Level:
IF ([Processors] = 2 AND [Memory] < 2049)
THEN ('Bronze')
ELSE (IF ([Processors] = 2 AND [Memory] < 4097)
THEN ('Bronze_Platinum')
ELSE IF ([Processors] = 4 AND [Memory] < 8193)
THEN ('Silver')
ELSE IF ([Processors] = 4 AND [Memory] > 8193)
THEN ('Silver_Platinum')
ELSE IF ([Processors] = 6 AND [Memory] > 8191)
THEN ('Gold')
ELSE IF ([Processors] = 8 AND [Memory] > 16383)
THEN ('Gold_Platinum')
ELSE ('tbd'))
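If you want to sanity-check the expression's boundaries before pasting it in, here is an equivalent rendering of the same if/else ladder in Python. The thresholds mirror the lab's expression (2049 MB, 4097 MB, and so on); this is only a checking aid, not part of the report itself.

```python
# Python rendering of the lab's VM Service Level conditional expression,
# useful for checking boundary values. Thresholds come from the expression
# above; memory is assumed to be in MB (an assumption, as in the lab data).
def vm_service_level(processors, memory_mb):
    if processors == 2 and memory_mb < 2049:
        return "Bronze"
    if processors == 2 and memory_mb < 4097:
        return "Bronze_Platinum"
    if processors == 4 and memory_mb < 8193:
        return "Silver"
    if processors == 4 and memory_mb > 8193:
        return "Silver_Platinum"
    if processors == 6 and memory_mb > 8191:
        return "Gold"
    if processors == 8 and memory_mb > 16383:
        return "Gold_Platinum"
    return "tbd"

print(vm_service_level(4, 4096))  # -> Silver
```

Note that, just like the original expression, a 4-CPU VM with exactly 8193 MB falls through to "tbd" — a handy thing to discover before running the report.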
Business Insight Advanced will validate the conditional expression (nice to know if you got it right), create the column called VM Service Level, and populate it based on the query. (If you get an error, your conditional expression probably has a syntax or other error.)
You will see a new column added called VM Service Level, with the various Service Levels for each VM based on the number of CPUs and the memory each has. (At this point there may be duplicates in the list, but not to worry: we are not finished formatting or grouping the report.)
Next, let's add a column that calculates the cost per VM based on the Service Levels we just established.
Select the Toolbox tab at the lower right corner and double-click the Query Calculation ICON.
In the Create Calculation dialog box, name the column Cost Per VM, select Other Expression, and click OK.
In the Data Item Expression dialog box, cut and paste the conditional expression for Cost of VM (below) into the Expression Definition box and select OK.
Example of the conditional expression for Cost per VM:
IF ([VM Service Level] = 'Bronze') THEN (10) ELSE (IF ([VM Service Level] = 'Bronze_Platinum') THEN (15) ELSE IF ([VM Service Level] = 'Silver') THEN (20) ELSE IF ([VM Service Level] = 'Silver_Platinum') THEN (25) ELSE IF ([VM Service Level] = 'Gold') THEN (40) ELSE IF ([VM Service Level] = 'Gold_Platinum') THEN (55) ELSE (30))
You will see a new column added called Cost Per VM, with variable costs for each VM based on the Service Level.
Next, format the data in the Cost Per VM column to USD currency as you did above.
ADDING FIXED OVERHEAD COSTS TO YOUR CHARGEBACK REPORT
Let's say the customer has determined that the total cost for overhead (including items like heat/AC, floor space, power, rent, operations personnel, helpdesk, etc.) is $24 per VM. Let's create a column called Cost of Overhead and apply this fixed cost. (Note: you can do this for any fixed costs as well, rather than use SQL.)
Select the Toolbox tab at the lower right corner and double-click the Query Calculation ICON as above.
In the Create Calculation dialog box, name the column Cost of Overhead, select Other Expression, and click OK.
In the Data Item Expression dialog box, enter a cost of 24 in the Expression Definition box and select OK.
You will see a new column added called Cost of Overhead, with 24 for each VM. (Note: at this point there may be duplicates in the list, but not to worry: we are not finished formatting or grouping the report.)
Next format the data in the Cost of Overhead column to USD currency as you did
above Then drag the column header and drop it to the right of the Storage Cost
column as shown below
Subtotaling naming and saving the report
Now that we have a cost per VM overhead and the cost per storage usage by Tenant Application and VM letrsquos sum the total costs and finish formatting the report by Tenant and Application
Hold the control key down and select a numeric cell in the Cost per VM Storage
Costs and Cost of Overhead columns Right click one of the numeric cells and
select Calculate and the add function for the three columns
This will create a new column called "Cost per VM + Storage Costs + Cost of Overhead" and calculate the sum for each row
Now format the column for USD currency and retitle the column "Total Cost of Services"
Name the report "Total Storage, VM, and Overhead Cost by Tenant and Application Chargeback (Showback)" by double-clicking the title area
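The Calculate step above is simple row-wise addition. With illustrative numbers (the VM names and dollar values here are made up, not from the lab data):

```python
# One report row per VM; values are illustrative, in USD.
rows = [
    {"vm": "VM-60", "cost_per_vm": 40, "storage_cost": 112.50, "overhead": 24},
    {"vm": "VM-70", "cost_per_vm": 20, "storage_cost": 37.25, "overhead": 24},
]
for r in rows:
    # Equivalent of the generated "Cost per VM + Storage Costs + Cost of Overhead" column
    r["total_cost"] = r["cost_per_vm"] + r["storage_cost"] + r["overhead"]
    print(f"{r['vm']}: ${r['total_cost']:,.2f}")
```

The currency formatting step in the report corresponds to the `:,.2f` format spec here.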
Now save it to the Customer Report folder using the same name
FORMATTING AND GROUPING THE REPORT BY APPLICATION AND TENANT
We are not done yet. Now we need to format the report by grouping, subtotaling, and totaling by tenant and application
Hold the control key down and select the Cost per VM, Provisioned Capacity, Storage Costs, Cost of Overhead, and Total Cost of Services columns
Select the Total icon from the Summary dropdown
If you page down to the bottom of the report, you will see the totals. We'll clean up the summary rows in a minute
Let's group the report by Tenant and Application
Hold the control key down and select the Tenant and Application columns
Select the Grouping icon from the top toolbar
CLEANING UP THE REPORT AND RUNNING IT
To clean up the report, right-click and delete the summary ROWS (not the columns)
Then go to the bottom of the report, hold the control key, select both summary rows, right-click, and delete them (leave the TOTAL rows)
Save the report
Now let's run the report to see how it looks
Select the Run icon from the toolbar and run the report as HTML. (Note the other formats you can run it in if you want…)
The report will appear like this in its final format. I've paged down in the report below to show you the subtotals; you can page to the bottom and see the totals by company and the total of all resources charged
These reports are extremely flexible. Notice the drill-down link in the Tenant column (pictured above in the red circle). If you click the link, you will drill down from Tenant to Line of Business, then to Business Unit, etc. If you right-click the link, you can drill up as well
You can now schedule this report to run and distribute in various formats, like any other OnCommand Insight Data Warehouse report
Remember, now that you have created this report, every time you run it, it will provide the latest usage information. You can automate this report by scheduling it to run and email recipients, etc. Lots of flexibility…
72 OTHER OPTIONS FOR AD-HOC REPORTS USING QUERY STUDIO
You can also create simple ad-hoc reports by using Query Studio. A very simple example is shown here
Log on to the data warehouse using Admin/admin123 (you must be logged on as Admin to use Query Studio)
From Public folders select the Chargeback Datamart
Select Launch menu in upper right corner of the view and Select Query Studio
The Datamart is set up as a "simple Datamart" and an "advanced Datamart". The simple DM contains the elements that most users use for reports; the advanced DM contains all the facts and dimensions for all the elements. At this point we'll create this report using the simple DM to show you how easy it is
Expand the Simple DM and do the following
Click and drag the Business Unit element to the palette
Click and drag the Application element to the palette. You see the applications line up with their proper business units
Click and drag Tier over to the palette to organize the storage usage by tier
Click and drag "Provisioned raw by GB". (You can select megabytes or terabytes as well as gigabytes; I've selected GB because this is from a volume and application perspective)
To calculate cost, we need to add the "Tier cost" element to the report
Click and drag the "Tier cost" element over and place it between the Provisioned Raw and Tier columns
To filter out any storage without an associated tier cost, right-click the heading of the Tier Cost column and select Filter (see below for reference)
o Select "Show only following" (default)
o Select "Missing values" to expand it
o Select "Leave out missing values"
o Select OK
See Results below
Now let's calculate the total cost of usage by GB per application
Hold the control key and highlight the "Provisioned Capacity" and Tier cost columns until they show yellow
Select the green Calculation icon at the top (among the edit icons above), or right-click the columns and select "Calculate"
In the calculation window, select multiplication, title the new column "Cost for Storage", and click Insert. It creates the new column and completes the calculation
To format the column, right-click the new column and select Format Data
Select currency, the number of decimal places (usually 0), and the 1000s separator, and click OK. See how the column is formatted now
Double-click the "Title" on the report and retitle the report "Chargeback by Application and BU"
Now you don't really need the Tier cost column, so you can delete it by right-clicking the column and selecting Delete
This is a good raw report, but now let's make it more useful
To group storage cost by Business Unit and Application
Select the Business Unit column (it turns yellow) and select the Group By icon on the top line
You see the report reformats itself into cost by application by business unit
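Under the hood, the Query Studio steps above reduce to provisioned GB times the tier's cost per GB, grouped by business unit. A sketch with made-up rows (the business units, applications, and rates are illustrative, not from the lab data):

```python
from collections import defaultdict

# Illustrative rows: (business_unit, application, provisioned_gb, tier_cost_per_gb)
rows = [
    ("Finance", "Oracle ERP", 500, 3.00),
    ("Finance", "Exchange", 200, 1.50),
    ("Engineering", "Builds", 800, 0.75),
]

cost_by_bu = defaultdict(float)
for bu, app, gb, tier_cost in rows:
    cost_by_bu[bu] += gb * tier_cost  # the "Cost for Storage" column

# Grouping by Business Unit, as the Group By icon does in Query Studio
for bu, cost in sorted(cost_by_bu.items()):
    print(f"{bu}: ${cost:,.2f}")
```

The per-row multiplication is the Calculation step; the accumulation into `cost_by_bu` is what the grouping and subtotaling give you.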
Click the "Save As" icon and save the report to the public folders
Further Editing
You can go back and further edit the report like this
Let's filter out all the N/A values in the BU and Application columns. You have to do this one column at a time
Right-click the BU column and select Filter
In the filter dialog window, select "Do not show the following (NOT)" from the "Condition" dropdown
Select N/A and click OK
Do the same for the Application column
Then Save the report again
As you see, you now have a better-quality report
To exit Query Studio, click the "Return" icon at the top right corner of the screen
8 SCHEDULING REPORTS FOR DISTRIBUTION
OK, now that the report is saved, let's schedule it for running and distribution. You can schedule all the built-in reports in OnCommand Insight as well
Go to the chargeback report we just created (you should be looking at the folder where you saved it…)
Select the Schedule icon on the right-hand side, where you can set the properties
As you see on the right you can schedule start and finish date
You can send this report just one time by clicking Disable
Set the schedule options for weekly, daily, monthly, etc. Schedule this report to run and send it to yourself at 3 p.m. every Tuesday until Feb 1, 2012. As you can see, you can schedule biweekly, several times a week, or several times a day, or you can also set it up by month, by year, and even by trigger. As you see, lots of options
There are a lot of options for report format. The default format is HTML, but we can override that default by clicking and choosing from PDF, Excel, XML, CSV, etc.
For delivery, we can email the report, save it, or print it to a specific printer. You can send the report via email to users, distribution lists, etc. We can include a link to the report or attach it directly to the email as well. NOTE: Recipients must be able to log in to the OnCommand Insight DWH to access the link
When you are done click OK and the schedule is set
9 ENDING COMMENTS AND FEEDBACK
I hope this lab was of value to you. Your feedback is important to the quality of this lab document. Please provide feedback to Dave Collins at davec@netapp.com
NetApp provides no representations or warranties regarding the accuracy, reliability, or serviceability of any information or recommendations provided in this publication, or with respect to any results that may be obtained by the use of the information or observance of any recommendations provided herein. The information in this document is distributed AS IS, and the use of this information or the implementation of any recommendations or techniques herein is a customer's responsibility and depends on the customer's ability to evaluate and integrate them into the customer's operational environment. This document and the information contained herein may be used solely in connection with the NetApp products discussed in this document.
© 2012 NetApp, Inc. All rights reserved. No portions of this document may be reproduced without prior written consent of NetApp, Inc. Specifications are subject to change without notice. NetApp, the NetApp logo, Go further, faster, xxx, and xxx are trademarks or registered trademarks of NetApp, Inc. in the United States and/or other countries. All other brands or products are trademarks or registered trademarks of their respective holders and should be treated as such. TR-XXX-XX
What type of redundancy can you set from here?
What is the default number of ports? Volume type exceptions?
What volume exemptions can you select?
You can set redundancy policies on physical storage behind a virtualizer (Backend Path)
33 VIOLATIONS BROWSER
Let's take a look at the Violations Browser. The Violations Browser allows us to see the impact of the violations on your business elements in one place, and helps us manage the violations on all of those global policies you saw above
From the Assurance menu on the left, select Violations Browser. (Note: at this point you might want to increase your viewing real estate by closing the navigation pane on the left. To close the navigation pane, go to Tools > OnCommand Insight Settings and uncheck the Navigation Pane box. You can use the same process to turn it back on later)
Back in the Violations Browser, expand the All Violations explorer to reveal the violation categories. This shows violations like datastore latency, disk utilization, volume and internal volume IOPS and response times, port balance violations, etc. These should look familiar to you from the global policies that we just reviewed a few minutes ago
Here you can look at the whole pile of violations; as you can see, over 12,000 of them. (NOTE: Don't let this scare you. Usually most violations are caused by events that create multiple violations per event; you fix one event and a bunch of these go away.) We can see detail on each of these violations by performing the following:
Select the Show Violations Impact icon to view all the violations in context by business entity, application, host, virtual machine, datacenter, etc.
Expand the Impacted Business Entity explorer and drill down to Earth Thermal Tracking
Expand and select Disk Utilization
Sort the Description column descending. Now you can see the Element, Description, Severity, and Violation Type columns
Select the top violation element, called Disk DISK-14, of Storage Virtualizer
In the Impact Details microview, toggle the Host, Virtual Machines, Applications and Business Entities, and Storage icons to view the details of the impact of the violations. Here we see the impact on one application called City Limits, owned by one business entity called Green Corp > Alternate Energy > Geothermal > Earth Thermal Tracking, on one host. However, 10 virtual machines are affected by this violation from one array
The chart in the Violation Event microview shows the history and trending of the utilization on this one disk over time. From here we can analyze the performance details, as we'll see later in this demo
REVIEW
What are the categories that show the impact of violations?
What business entity is impacted by these violations?
What is the utilization of this disk?
Which hosts are being affected by this violation?
Which VMs are being affected by the violation?
We'll do some troubleshooting using these violations and Analyze Performance later
34 PORT BALANCE VIOLATIONS
Let's take a look at port balance violations. These are violations showing imbalance in SAN traffic across hosts, arrays, and switches. These are not performance-related violations
Using either the Navigation Pane or the dropdown menu, open Assurance
Select Port Balance Violations
Group by Type then Device
Expand Hosts and sort the Device column ascending
Select the host nj_exch002. Note that this host has a balance index of 81; that is, the difference in distribution of traffic (the load) between the HBAs on this host. Any index over 50 indicates significantly unbalanced ports on a device
Select the Switch Port Performance microview. Note that over 88% of the traffic distribution is going across one HBA, and only 11% of the traffic across the other. A failure on the heavily used HBA could choke that application. This could indicate that port balancing software on this server is misconfigured, not configured at all, or not installed
Now collapse the Hosts and expand the Storage devices
Select various storage devices and view the traffic distribution in the switch port performance microview to understand the balance across the storage ports
These port balance violations provide valuable data on how your environment is configured and optimized. They allow you to quickly determine where you need to optimize your configurations based on actual usage. These are balance violations within each device, not necessarily traffic-related performance violations. We'll look at performance a bit later
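The guide does not publish the exact balance-index formula, but a reasonable sketch of the idea is the spread, in percentage points, between the busiest and least busy port on a device (an illustrative approximation only, not OnCommand Insight's actual calculation):

```python
def balance_index(port_loads):
    """Illustrative approximation, not OnCommand Insight's exact formula:
    spread, in percentage points, between the busiest and least busy port."""
    total = sum(port_loads)
    if total == 0 or len(port_loads) < 2:
        return 0.0
    shares = [100.0 * load / total for load in port_loads]
    return max(shares) - min(shares)

# Two HBAs carrying roughly 88% and 11% of the traffic, as on exchange_ny1:
print(round(balance_index([88, 11]), 1))      # far above the 50-point "unbalanced" mark
# Three well-balanced HBAs, as on ny_ora1:
print(round(balance_index([34, 33, 33]), 1))  # close to zero
```

Under this approximation, a perfectly balanced device scores 0 and a device with all traffic on one port scores near 100, which matches the guide's rule of thumb that anything over 50 is significantly unbalanced.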
35 DISK UTILIZATION VIOLATIONS
We looked at Disk Utilization Violations in the Violations Browser a few minutes ago because we were alerted to a violation. But if you didn't start from the alert, you can go directly to Disk Utilization Violations here and troubleshoot your issues similarly to how we did in the Violations Browser. The difference is that the Violations Browser breaks down the violations by how they impact your business entities, applications, data centers, etc., so you can troubleshoot by business priority, while Disk Utilization Violations lets you easily see your most critical utilization issues and troubleshoot from the disk utilization violation back to the hosts. You can also add columns to show applications and business entities if you want
Let's take a look at how you can use Disk Utilization Violations to quickly identify and drill down to where your issue is
Select Disk Utilization Violations from the Assurance Menu
Sort the Utilization column descending to bring your heaviest utilization to the top
Here you see the utilization of each disk that exceeded the Disk Utilization threshold we set earlier in our global policy. For each violation, you see the disk, the array, the hosts that access this disk, the date and time the violation occurred, and the percentage of utilization, as well as IOPS and throughput
Select the disk with the highest utilization
Now select the Volume Usage of Disk microview to get details on volume usage and
performance
Sort the Disk IOPS column descending and select the top Volume Usage of Disk entry. Here I see the volume with the highest usage, along with the disk throughput and percentage info by volume and host
(Figure callouts: Exceeded threshold; host with highest IOPS; switch traffic OK)
Select the Switch Port Performance microview. I see that my load appears to be balanced (Distribution column) across the storage ports, so this is most likely not a SAN or network configuration issue
Since this disk did cause a utilization violation, I can identify host candidates that may be causing the high utilization on the disk, OR I can see that the disk may have too many volumes carved from it, and I may need to spread that load out across more disks
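The triage in this section (surface the disks over the policy threshold, heaviest first, then inspect per-volume usage) can be sketched like this; the disk names and the threshold value are illustrative, not pulled from OCI:

```python
# Illustrative data; names and threshold are made up for the sketch.
DISK_UTILIZATION_THRESHOLD = 75.0  # percent, as set in a global policy

disks = [
    {"disk": "DISK-14", "array": "Storage Virtualizer", "utilization": 92.0},
    {"disk": "DISK-02", "array": "Symmetrix-FAST", "utilization": 40.0},
    {"disk": "DISK-07", "array": "XP 1024", "utilization": 81.0},
]

# Disks over threshold, heaviest first -- the same view the descending
# sort on the Utilization column gives you.
violations = sorted(
    (d for d in disks if d["utilization"] > DISK_UTILIZATION_THRESHOLD),
    key=lambda d: d["utilization"],
    reverse=True,
)
for v in violations:
    print(f"{v['disk']} on {v['array']}: {v['utilization']:.0f}%")
```

The GUI does the filtering and sorting for you; the point is that the worst offender always lands at the top, which is where you start drilling down.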
4 PERFORMANCE
OnCommand Insight provides performance information from end to end. This is different from the violations we discussed above in that it provides pure performance information for Volumes, Internal Volumes (FlexVol volumes), Storage Pools (aggregates), and disks. Performance also covers VMs, switches, ESX, Hyper-V, VMDKs, and datastores. From here we can troubleshoot congestion, contention, and bottlenecks; identify heavily used storage pools, volumes, disks, and SAN ports; find possible candidates for physical-to-virtual host virtualization; and optimize your storage and tiering
From the Navigation Pane or the dropdown menu expand the Performance menu
Here we see that OnCommand Insight collects and shows you storage performance, switch performance, datastore performance, VM performance, and even application performance as it relates to storage performance
41 STORAGE PERFORMANCE
From the Performance menu select Storage Performance
Sort Top Volume IOPS column descending (if not already done)
Select the array called Sym-0000500743… in the main view (it should be near the top)
Use Scroll Bars to see more performance info in all windows
Use the horizontal slide bar in the main view to see the volume response times and IOPS, as well as the disk utilization and IOPS columns (far right). Notice there is no Internal Volume performance information, because the EMC Symmetrix does not contain any. We'll look at a NetApp array shortly to see Internal Volume (FlexVol) performance
Now, in the main view, notice the column called Volume Partial RW. This indicates there are volumes on that array that are misaligned (we'll see more detail later)
Select the microviews at the bottom to show details of disk performance and volume performance. Which microviews did you open? (Hint: view below.) Notice this provides detailed throughput, IOPS, and response times at the volume and disk level
Close the Disk Performance microview
Select the column customize icon in the header of the Volume Performance
microview
Use the vertical scroll bar to view all the columns that can be added or removed from
this report
Select the Partial RW and Storage columns and click OK. This adds columns to the Volume Performance report showing you each volume on each array that is misaligned. (Note: you can get a complete list of all your misaligned volumes by selecting all the arrays in the main view above.) Additionally, you can group the volumes by storage to make it easier to view all the misaligned volumes across your entire enterprise by array (see figure below)
Now select the Symmetrix-FAST array and toggle the Chart microview on. Here you see OnCommand Insight showing EMC FAST auto-tiering. You can also see the NetApp Hybrid Aggregates by selecting a NetApp array. We can also chart this performance over time
(Figure callouts: Partial Read/Write indicates volume misalignment; notice the FAST-T volume performance)
OnCommand Insight provides complete end-to-end performance views through virtualized storage. Let's take a look
Select the storage array called Virtualizer from the main view. (Note: this is a V-Series machine, but OCI provides the same visibility through other virtualizers as well)
Toggle on the Virtual Machine Performance microview
Toggle the Backend Volume Performance and Datastore Performance microview on
Use the slide bars at the bottom of the microviews to see more of the performance columns in each view. (Whoops… there is a red mark in the Latency column. We will analyze this later)
I know this is a bit busy, but I wanted to show you that you can have deep performance visibility from the VM, through the datastore, to the frontend virtualizer array, and through to the backend volumes. You can also drill performance down to the disks on that backend array, and you can select the Switch Port Performance microview to visualize the performance on the SAN. So you can see very deep performance information from end to end. We will "analyze" these performance paths from end to end a bit later
42 SAN PERFORMANCE (SWITCH PORT PERFORMANCE)
Switch performance is the actual performance on the SAN at the switch
Select Switch Port Performance from the Performance menu. OnCommand Insight knows whether the switches are connected to arrays or hosts, so it shows you the performance in context of the host or array instead of from the switch perspective
Using the dropdown at the top of the table, group the main view by "Connected Device Type then Name"
Using the dropdown next to it, set the timeframe to "Last Week" and hit the refresh icon to the right
Sort the Distribution column Descending (arrow pointing down)
Expand Hosts
Expand hosts ny_ora1 and exchange_ny1
If you look at the Value and Distribution columns, you can see how the HBAs are balanced on these hosts. On host ny_ora1, you see three HBAs that are balanced very well. But looking at host exchange_ny1, you see that one of the two HBAs has over 95% of the traffic load on it, while the other has less than 5%. So you can see an imbalance of the load across the HBAs. Perhaps the multipath software is not configured correctly, doesn't work, or is installed but not turned on. However, also note that one HBA is 4Gb and the other is 2Gb; the admins may have purposely configured this host's traffic to compensate for the slower HBA…
Select the Port Performance Distribution and Port Performance microviews to view
this analysis over time
Select exchange_ny1 from the main view above. View the performance and distribution of both HBAs. If you select one or the other, the performance and distribution charts change to show you the details of what you've selected. This behavior is the same throughout the OnCommand Insight GUI
43 CANDIDATES FOR HOST VIRTUALIZATION BASED ON ACTUAL PERFORMANCE
This performance view, from the host SAN perspective, shows you which are the busiest servers and which are candidates for virtualization
Toggle off the 2 performance charts
Collapse the expanded columns using the Collapse All Groups icon on top of the main view
Expand Hosts again Notice your busiest servers are at the top of the list
Use the vertical slide bar to go to the bottom of the host list to see your least busy hosts. As you see here, there are many hosts down near the bottom that have hardly any traffic. Note: if you have a virtualization project going on, you can very quickly isolate which physical hosts don't have much traffic to their applications, and conduct your due diligence on those applications for possible relocation to the VM environment
You can also use the same information here to choose which ESX hosts are good candidates to move those applications based on how much traffic they are generating on the SAN
44 STORAGE ARRAY PERFORMANCE BASED ON SAN TRAFFIC
We use the same logic and methods to optimize the traffic across the storage ports of the arrays
Collapse the Hosts section and expand the Storage section
You can see the busiest arrays at the top
Expand storage array XP 1024 to see the traffic flow through the storage ports. In this case, over 80% of the traffic is going across two of the six ports on the storage array. Not very well balanced. You can rebalance this traffic, OR, using this information, you can select a lesser-used storage port to provision your NEXT tier 1 application to. This helps you intelligently provision and optimize your environment using real traffic analysis
45 STORAGE TIERING AND ANALYSIS
Just as you did with the hosts, you can see the storage arrays that are NOT so busy
Scroll to the bottom of the Storage Array list
There are several expensive tier 1 Symmetrix and other arrays at the bottom of this list that have very little traffic accessing them. These arrays may have lots of data on them, but nobody is using it. Armed with this information, you could take a look at the application data on these expensive tier 1 arrays and move the applications to less expensive tier 2 or tier 3 arrays, OR archive the data. Then you can decommission or repurpose these expensive arrays (LOTS of ROI potential here)
46 SWITCH ISL TRAFFIC VISIBILITY AND OPTIMIZATION
OnCommand Insight shows you only the ISLs (Inter-Switch Links) under the switches category
Collapse the Storage category in the main view and expand the Switch category
Expand Switch 78 and hcis300
As we saw with hosts and arrays, we can see exactly how well balanced the traffic is across the ISLs. Switch hcis300 is well balanced, but on Switch 78 we see that 90% of the traffic is going across one switch link and only 9% across the other. If this is a trunk, it is severely out of balance
We also see which are the busiest and least busy switches. This allows us to balance out (optimize) our environment, as well as weed out the least busy switches
47 VIRTUAL MACHINE AND DATA STORE PERFORMANCE: TROUBLESHOOTING END-TO-END PERFORMANCE ISSUES USING "ANALYZE PERFORMANCE"
Let's put all this performance information to good use
USE CASE: I may have gotten a call from a user complaining that the application on VM-70 is running slow, or I may have received an alert from a breached threshold. Let's troubleshoot the problem
Select Virtual Machine Performance
Select Custom from the "Timeframe" dropdown menu next to the grouping menu. Enter the dates January 1, 2012 through now
Then hit the green recycle button next to the dropdown
Sort the VM Disk Top Latency column descending to get the longest latency at the top
Here we see that VM-70 does not, in fact, appear to have any performance issues, but we do see very high CPU, memory, and datastore latency on VM-60 and VM-61
Look at column 2. The common factor between VM-70 (the user complaint) and VM-60 is DS-30
Open the Datastore Performance microview to validate the high latency time
Right-click VM-60 and select Analyze Performance
This opens an analysis of everything associated with VM-60 and DS-30
See the tabs across the top of the window. Each of these tabs provides in-depth visibility into performance within its category
Selecting the Disk tab, I see that although I have a few high "top" utilization values, overall utilization and IOPS are relatively low, so I can rule out a hot disk issue
Select the Volumes tab and the Internal Volumes tab. I see there are some relatively high top response times but still very low IOPS, which tells me other factors are affecting response time and the slowness of the application on VM-70
Select Backend Volumes. We see the storage is virtualized, and we can see the performance of the backend volumes here. I see some possibly higher IOPS, but still no glaring issues in performance
To make sure I don't have a SAN problem, I select the Switch Performance tab. It shows an imbalance between the 2Gb HBAs on ESX1, where VM-60 and VM-70 reside, and a potential optimization or outage issue, but no gridlock
Select the Hosts tab. This tab shows me that host ESX1 is the same host that holds VM-60 and VM-70. VM-60 appears to be causing very high CPU and memory usage, which is causing contention with time sharing during disk access, thus creating high disk latency. But the disk IOPS are still very low
Deduce that VM-60 is probably not sized right for the application that is driving it hard. This is probably what's causing the disk latency issue, so the chances of a disk issue are slim
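The key correlation step above (what does the complaining VM share with the noisy one?) can be sketched as a set intersection. The VM and datastore names mirror the lab scenario; the mapping itself is illustrative:

```python
# Which datastores back which VMs (illustrative mapping mirroring the lab).
vm_datastores = {
    "VM-70": {"DS-30"},
    "VM-60": {"DS-30"},
    "VM-61": {"DS-31"},
}

def shared_datastores(vm_a, vm_b):
    # The "common factor" check: datastores both VMs sit on.
    return vm_datastores[vm_a] & vm_datastores[vm_b]

# The user complained about VM-70; the latency sort pointed at VM-60.
print(shared_datastores("VM-70", "VM-60"))  # the common factor: DS-30
```

Analyze Performance does this correlation for you across hosts, datastores, volumes, and switches, but the reasoning is the same: find the shared resource, then check its load.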
48 VM PERFORMANCE
VM Performance helps you troubleshoot the same scenarios. Here you can understand what's going on in the whole environment
Select Virtual Machine Performance
Sort the Top Disk Latency column in descending order so the largest latency rises to the top. In this case, VM-61 is chewing up a lot of memory and a lot of CPU time but using low disk IOPS. The VM appears to be causing the latency issues
Select VM-61. You can open a microview and see the VMDK performance as well
Add the Chart microview
You can also break it out by volume performance and datastore performance, giving you a more holistic picture of the environment and helping you troubleshoot to resolution
The takeaway is you can troubleshoot performance issues from many different angles and go in many different directions to quickly narrow down the problem
49 APPLICATION AND HOST PERFORMANCE
You can add your applications and hosts to any of these performance views to help you understand how performance is affecting your applications. That is important to the business customer. You can drill down and understand where the performance issue is, with visibility from the application all the way to the disks
Scroll down to ESX1
Use the horizontal slide bars in the main and microviews to see performance info
OnCommand Insight shows you performance from the host perspective all the way back to the storage, but remember, it does not have agents on the host, so it cannot show you the details of the performance on the host itself
Review questions:
What is the value of Analyze Performance?
In what areas can we view performance metrics under Analyze Performance?
5 PLANNING TOOLS
51 TASK AND ACTION PLANNING AND VALIDATION
OnCommand Insight has two planning tools to help you plan, validate, and monitor changes in your environment. One is a change management tool, and the other is a migration tool for switches only
The change management tool (or "what-if" tool) helps you create tasks, and actions within those tasks, using a wizard. It helps you logically configure the changes that you need to make, test and validate those changes before you make them, and monitor the progress of changes as you make them. This significantly reduces your risk when making changes, because you can pretest them before you make any actual changes in your environment
NOTE: Remember, OnCommand Insight is a READ-ONLY tool, so it does not perform any active tasks. Use it in the planning, validation, and execution monitoring of your change management
Select Planning Menu
Select Plans to access the tool
Select the task ID oadmin 01082007 - Replace HBA Clearcase1
Notice the Actions list for the task. These are generated by you, to help you logically and accurately list out the tasks
To add more actions, simply right-click in the action area and select "Add Action"
In the new action window, scroll down and select the action you want to perform. You can add a description and other parameters, then select OK
Then you can pre-validate the actions to ensure you know the results of each action BEFORE you actually perform the task. To do this, right-click the task and select Validate Task
As you see below, OnCommand Insight validates each action against the current configuration in your environment to show what has been completed correctly (GREEN CHECKMARK), what is not completed (BLANK BOX), and what is not completed correctly (RED X)
When you build the action list, OnCommand Insight automatically compares your planned changes to your existing environment and anticipates any future violations that could occur if you made these changes without correcting the planned actions, OR violations that already exist in your environment
Once you complete your list of action items, you can right-click and validate the actions as many times as you want until they are completed. OnCommand Insight validates every one of these actions. It will show you whether the actions are complete, done incorrectly, or not completed at all. It gives you a preview of potential issues before you make the changes, thus lowering your risk
52 SWITCH MIGRATION TOOL
The migration tool provides you with instantaneous visibility into all of the environment and business entities that will be affected by a migration to new or updated switches. Say you want to just update the firmware on a switch. What if… it goes down in the middle of the upgrade? What does it affect in your environment? Knowing this ahead of time can reduce your risk by giving you the complete picture of who and what will be affected by the interruption
The Migration tool allows you to tell OnCommand Insight which switches you want to upgrade or replace. Because OnCommand Insight knows all the hosts, storage arrays, volumes, business units, and applications that are affected by this change, it can provide you with the current violations as well as the future violations that will occur when the switches are pulled out. This enables you to validate the total impact of the changes you want to make BEFORE you make them, so you can reduce your risk by fixing issues before they occur
NOTE Remember OnCommand Insight is a READ ONLY tool so it does not perform any migration tasks Use it in the planning and execution monitoring of your migration
Under the Planning menu, select Migrations. This shows you the migration tasks already created and the existing impact of proposed changes on your business entities.
To add a new task, right-click in the task area and select Add Task.
Complete the task details above and click Next to select the switch(es) to migrate.
Select the switches to be updated or replaced and click Finish.
Select the new task in the main screen and use the microviews to see the affected paths, impact, and quality assurance views.
Using this information, you can migrate switches faster because it cuts the due-diligence time, and you lower your risk because you know the impacts before you take any actions.
6 DATA WAREHOUSE
6.1 INTRODUCTION AND OVERVIEW
Let's introduce you to the data warehouse. We'll talk about the Datamarts and navigation, then we'll go into the reports, and we'll finish by showing you how to create ad-hoc reports using Query Studio.
The data warehouse is made up of several Datamarts. Datamarts are sets of data that relate to each other.
Open a browser and go to http://localhost:8080/reporting.
Log on using admin/admin123.
If you receive this page, uncheck "Show this page..." and select My Home.
Data warehouse (DWH) Home Page: Public Folders
The data warehouse has several built-in Datamarts throughout. Above you see the three primary Datamarts: the Chargeback Datamart, the Inventory Datamart, and the Storage Efficiency Datamart. Additionally, there are two folders which contain other Datamarts for Capacity and Performance.
Select the Capacity 6.3 folder.
As you can see, there are other Capacity-related Datamarts here, including the Internal Volume, Volume, Storage and Storage Pool, and VM Capacity Datamarts. They provide easy-to-use data elements related to those specific categories, making it easier for you to use the existing reports and, more importantly, to create your own custom reports using the drag-and-drop technology we'll show later in this lab.
Select the Storage Capacity Datamart.
There are four folders located within EVERY Datamart. Most built-in reports are in the Reports folder. Any custom reports you create MUST be saved in the Customer Report or Customer Dashboard folders in order to preserve them during upgrades.
Select Dashboards (notice the BREADCRUMBS to help you navigate).
Which dashboards are located in the folder?
6.2 PLAN - CAPACITY FORECAST DASHBOARD
The data warehouse has over 200 built-in dashboards and reports. Let's take a look at a few.
The capacity forecast dashboard provides a history of how storage has been used, as well as trends and forecasts into the future, broken down by data center and by tier.
Select the Capacity Dashboard. This may take a bit of time to paint, so be patient.
The capacity forecast dashboard provides you with trending and forecasting of your capacity across your entire environment. NOTE: your data may vary from the picture depending on the demo database you are using and the date (because it's a trending chart).
While we are at it, let's also stage the Tiering Dashboard in a new window by holding the Shift key and selecting the Tiering Dashboard, so we can discuss it as well in a few minutes.
When it first opens, you see in the upper left the Capacity Consumption Forecast report by datacenter and tier. The initial view shows how much storage is left in each datacenter, by tier, before it reaches 80% (adjustable by the user) of capacity. The graph on the right depicts the usage trending and forecasting over time. The "Reset Selection" button resets the graphic to show storage trending across the entire enterprise.
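To get a feel for what the forecast chart is doing, you can think of it as fitting a trend to historical usage and projecting when the capacity threshold will be reached. Here is a hedged sketch of that arithmetic (this is an illustration only, not the product's actual model; the function name, the linear fit, and the sample numbers are assumptions):

```python
# Sketch: estimate when a tier's used capacity crosses an 80% threshold,
# assuming a simple linear trend fitted to monthly usage samples.
# The sample numbers below are illustrative, not from the lab database.

def months_until_threshold(samples, capacity_gb, threshold=0.80):
    """samples: used GB per month, oldest first. Returns months until
    usage is forecast to reach threshold * capacity, or None if usage
    is flat or shrinking."""
    n = len(samples)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(samples) / n
    # least-squares slope (GB per month)
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, samples))
    den = sum((x - mean_x) ** 2 for x in xs)
    slope = num / den
    if slope <= 0:
        return None
    remaining = threshold * capacity_gb - samples[-1]
    return max(remaining / slope, 0.0)

usage = [400, 450, 520, 580, 640]          # used GB, last five months
print(months_until_threshold(usage, 1000))  # months until 800 GB is hit
```

The dashboard does all of this for you per datacenter and tier; the sketch just shows why a growing tier gets a near-term forecast date while a flat one does not.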
Select the Tokyo/Gold-Fast block on the matrix. Notice the graph at the right changes to reflect the storage consumption trending and forecasting for that tier at that datacenter.
Reset the Storage Capacity Trend chart by clicking Reset Selection next to the matrix chart. The chart on the right will show the trending and forecasting for the entire enterprise.
Scroll down the dashboard to view the list of reports on the right side. Each of the dashboards has a list of related reports on the lower right-hand side. You can select from any number of different reports to provide the detailed information that you need.
The dashboard also contains dial graphics showing you storage consumption and capacity in your enterprise and in each datacenter.
Continuing down the left side of the dashboard, these charts show you business-level storage consumption by business entities. Here we can drill down to see usage by Tenant, Line of Business, Business Unit, and Project.
Right-click in this graphic and you can drill down to view storage usage by line of business, drill again to business unit, and then by project.
As you can see, you get very detailed information on consumption by your business entities, from Tenant to LOB, Business Unit, Project, and Application, in very quick form.
6.3 TIER DASHBOARD
Let's take a look at the Tier Dashboard that we opened a few minutes ago by selecting it from the tabs at the bottom of your Windows screen.
Note: Your data may vary depending on the database used for this demo.
This dashboard gives us a different perspective on how storage is growing and how it is being used. As you see, it looks like the Gold tier has remained relatively stable over the past few months, while Gold-Fast storage, which is more expensive, has grown considerably over the past couple of months. This tells you how your tiering initiatives are progressing. Bronze, which has hardly grown at all, could be an indication that we're spending too much money on storage. You might want to review your storage usage using OnCommand Insight to see how the storage is being consumed, and by whom.
Scroll down. Let's look a little closer. OnCommand Insight shows storage usage by business units, applications, and tier. This enables you to understand how storage is being used. You can also view it by datacenter, tier, and business entity.
As we did in the last report, you can right-click and drill down to look at consumption by tenant, line of business, business unit, project, and application. You can understand how your data is being consumed at multiple levels and from multiple aspects.
Select the "Return" icon at the top right of the Tier Dashboard to return to the folder.
There is a new Storage Tier report located in the Storage and Storage Pool Datamart. Let's take a quick look at it.
Use the breadcrumbs to navigate back to the Capacity 6.3 folder.
Then select the Storage and Storage Pool Capacity Datamart and its Reports folder.
Next, select the Storage Capacity By Tier report to view the report below. This report shows your capacity by tier and how it trends over time. It also provides detail and summary sections at the bottom showing each array's tiers, how much capacity is used, and the percentage (lots of information on a single report).
6.4 ACCOUNTABILITY AND COST AWARENESS
The standard data warehouse chargeback reports are more about accountability than actual chargeback. We'll show you this now. We'll also show you how to create your own powerful "custom" chargeback/showback reports using Business Insight Advanced later in this lab.
Select Public Folders in the breadcrumbs at the top left of the data warehouse window.
Select the Chargeback Datamart.
In the Chargeback Datamart, select the Reports folder to access various reports that show capacity and accountability.
Select Capacity Accountability by Business Entity and Service Level Detail. Here you have the option to customize this report to your needs by selecting service levels, resource types, applications, and host and storage names. You also have the option of selecting the business entity by using the dropdown to select any or all of the business entities and projects.
Select all in each category to get a good representation of the in-depth reporting, then click Finish.
The report provides a very detailed view of capacity utilization by business entity and application, including the host it's running on, the storage array, the volume, and the actual provisioned and used storage. The report is grouped by business unit as well as by application, giving you a good representation of who's using what storage.
Note the scroll bar for scrolling on page 1; you can also use the Page Up/Page Down links at the bottom to go to page 2, etc.
Select the Return icon in the upper right to return to the folder of reports.
6.5 UNCHARGED STORAGE
You can also generate reports that help you understand what storage is NOT being accounted for.
Select "Capacity Accountability by Uncharged Capacity per Internal Volume". This provides you with a complete listing, by array and volume, of how much storage is not being charged or accounted for.
You get FULL accountability of which storage is being accounted for and which storage is NOT, across the entire enterprise, regardless of storage vendor.
6.6 IOPS VS CAPACITY REPORTING IN THE DATA WAREHOUSE
Let's look at performance versus capacity and at orphaned storage by last access. This adds another dimension to how your storage is being used.
Open the Performance Datamart (hint: use the breadcrumbs to select Public Folders, then select the Performance Datamart).
Select the Internal Volume Daily Performance folder. This provides a really good pictorial view of how your storage is being used.
Select Reports, then select Allocated used internal volume Count by IOPS Ranges. This provides a very interesting capacity-versus-IOPS report.
Select the Last Year time period.
Select all storage models and tiers and click Finish. Selecting all arrays and all tiers gives you a full view of how your storage is being used (or not being used...).
Looking at the results: remember, this is storage accessed over the past year. The resulting report shows you all the storage that has (or has not) been accessed over the past year.
As you see from the first bar, there are over 7,300 volumes that have not been accessed in the past year. If we look at it in terms of size, over 3.4 PB has had zero access in the past year. Note: this is actual customer data, but the names have been sanitized.
You can see how impactful this is. Over 3.4 PB of storage has had zero use for a year. This information enables you to start making business decisions about that storage and to better understand how it's being used, so you can reclaim and re-purpose some of it. (Talk about ROI!)
6.7 DIGGING INTO THE DETAILS
These charts are really nice, but you need the details to effectively work on identification and recovery. OK, let's go look at the underlying details.
Go back to the Volume Daily Performance 6.3 folder and drill down to Reports (hint: it's in the Performance Datamart).
Select the Array Performance report. This gives you a complete breakdown of the performance for all storage, from the arrays all the way down to the volumes.
Select one year and set the IOPS parameter you want to filter on (I usually start at the default).
This report starts with the Orphan Summary.
Page down to view the storage array summary.
As you see, this is pretty high-level. It shows the total amount of raw and allocated capacity in each storage device versus the total IOPS and the max IOPS actually used over the past year. This tells a very compelling story, but it's still high level.
Page down a few pages to the bottom of this section. You see a glossary of terms explaining the column headings.
Now continue to page down to the host tables. These show you the hostname, the raw and allocated capacity by host, and the IOPS accessed over the past year. This is more detail than the storage tables above.
Page down past the host tables and you reach the orphaned-volumes perspective. Here is a great deal of detail you can use. These are all the volumes that have not been accessed in a full year. It shows you the array name, volume capacities, and hostname, as well as the applications and tiers, for everything that has not been accessed in the last year.
Page down to the "Volume by IOPS" tables (may be several pages down). These show you the storage array, volume, capacity, host, application, tier, and the max and total IOPS. So we can say it's a pretty well-rounded report that shows you actual usage (or lack thereof) so you can go reclaim the storage that is not used.
6.8 VM CAPACITY REPORTING
There are several different reports in the VM Capacity Datamart.
Navigate to the VM Capacity 6.3 Datamart. As you see, there are several reports built in here already.
Select VM Capacity 6.3 and then navigate into the Reports folder.
Select VM Capacity Summary.
Select all so we see the VM capacity across the entire enterprise (spanning multiple vCenters).
The results show all the VMs, their capacity, the datastore, the actual capacity, the VM names, the provisioned storage, and the commit ratio of each VM across your entire environment. NOTE: I paged down to the bottom so you can see the total storage and commitment across your whole enterprise, plus a glossary of terms.
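The commit ratio in this report compares what is promised to VMs against what the datastore physically has. A minimal sketch of that arithmetic, assuming commit ratio = total provisioned VM capacity / datastore capacity (the function name and numbers are illustrative, not the report's schema):

```python
# Sketch: over-commitment ratio of a datastore, assuming
# commit ratio = sum of provisioned VM capacity / datastore capacity.
# A ratio above 1.0 means thin-provisioned VMs promise more space
# than the datastore physically holds.

def commit_ratio(vm_provisioned_gb, datastore_capacity_gb):
    return sum(vm_provisioned_gb) / datastore_capacity_gb

vms = [200, 150, 300]            # provisioned GB per VM (illustrative)
print(commit_ratio(vms, 500))    # 650 / 500 = 1.3 -> over-committed
```

A high commit ratio is not necessarily a problem, but it tells you which datastores will run out first if every VM fills its provisioned space.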
Select the "Return" button in the upper right corner of the report (it looks like a left-turn arrow).
Next, select the Inactive VMs report to show you VMs that have not been accessed in a defined period of time (default 60 days).
Set this time threshold and click Finish.
This is an excellent report showing you which VMs are powered off, how long they have been powered off, and how much capacity each one of them is holding that nobody else can use. It gives you all the details, including the datacenter, VM OS, ESX host, cluster, VMDK, and how long it's been powered off. Armed with this information, you can go recover these VMs and reclaim their storage.
7 CREATE AD-HOC REPORT
Let's show you how easy it is to create custom reports in the data warehouse.
7.1 HOW TO CREATE A CUSTOM SHOWBACK/CHARGEBACK REPORT USING BUSINESS INSIGHT ADVANCED
Below is a great example of the custom chargeback or showback report that you will create. It shows usage by business entity and application, including the variable cost of each VM based on its configuration, fixed overhead, and storage usage.
STEPS TO CREATE THIS REPORT
You can watch a video on how to create this report. Note: You need a user name and password for this community; to obtain them, click the Become a Member link.
The OnCommand Insight Reporting Portal is accessed through http://<reporting-server>:8080/reporting.
Enter your user name and password credentials.
From the Welcome page, select My home.
From the Launch menu (at the top right corner of the OnCommand Insight Reporting portal), select Business Insight Advanced.
From the list of packages that appears, click the Capacity <version> folder, then click VM Capacity <version>.
Create a new report by selecting New from the dropdown in the upper left corner, or Create New if you are on the Business Insight Advanced landing page.
From the pre-defined report layouts in the New pop-up, choose List and click OK.
In the lower right pane, select the Source tab and expand Advanced Data Mart from the VM Capacity package.
From the Advanced Data Mart, expand Business Entity Hierarchy and Business Entity, then drag Tenant onto the report work area.
Collapse Advanced Data Mart and expand Simple Data Mart.
From Simple Data Mart, drag Application onto the report work area to the right of the Tenant column. (TIP: Make sure you drop it on the blinking gray BAR to the right of the previous column, or you will get an error.)
Now we are going to drag multiple columns to the palette at once to save time building the report. We will be reporting on the number of processors (cores) and the memory configured for each VM, so let's grab the following elements from the VM Dimension under the Advanced Data Mart.
From Advanced Data Mart, expand VM Dimension.
Select the next columns IN THE FOLLOWING ORDER.
From Advanced Data Mart > VM Dimension, hold the Control key and select the following columns (in order):
o VM Name
o Processors
o Memory
Click and drag VM Name onto the report work area to the right of the Application column. NOTE: All the columns should follow in the order you selected them, similar to the screenshot below (your data will differ, but the columns will be the same).
Now let's bring capacity information onto the report.
From Simple Data Mart, hold the Control key and select the following columns (in order):
o Tier
o Tier Cost
o Provisioned Capacity (GB)
Click and drag the Tier column onto the report work area to the right of the Application column. NOTE: All the columns should follow in the order you selected them, similar to the screenshot below (your data will differ, but the columns will be the same).
To create a summary of cost per GB, hold the Control key and select the Tier Cost and Provisioned Capacity (GB) columns.
Then right-click the Provisioned Capacity column, select Calculate, and select the multiplication calculation.
Business Insight Advanced creates a new column for you, completes the calculation, and puts it in the report.
Next, let's format and re-title the column.
Right-click on the new column header and select Show Properties.
In the lower right corner, scroll down to the bottom of the properties box and select the ellipsis on the Data Item Name box. Change the name to Storage Cost and click OK.
Note the column heading is now Storage Cost.
Now select one of the numeric values in that column and select the Data Format ellipsis from the properties box in the lower right corner.
From the Data Format dialog box, select Currency from the Format type dropdown.
As you see from the properties dialog box, there are lots of options you can set to format the currency numbers in this column. The default is USD, so let's just click OK to accept the default. You will see the column reformat to USD.
Here is our current report. Let's filter out storage that is NOT being charged.
Select any BLANK cell in the Tier Cost column and click the filter icon in the top toolbar.
Select Exclude Null.
Here is our current report. Notice that all the rows that had NO cost associated with those tiers are removed, leaving you with only the storage that has charges associated with it. (TIP: in another report you can reverse the logic and show only storage that is NOT being charged.)
You can also format the Tier Cost column as USD currency if you want.
OK, that was easy, but it's not complete. Let's add other cost factors into your chargeback report: the cost of VM service levels by configuration, and the fixed overhead costs used by each application.
ADDING VARIABLE COSTS PER VM TO YOUR CHARGEBACK REPORT
Let's say the customer wants to charge per VM based on the number of CPUs and the amount of memory it's configured with. To do that, we first need to create a VM service level based on the number of CPUs and the memory configured for each VM, then allocate a cost per service level.
To create a VM service level, we are going to drop in a small conditional expression that builds the service levels per VM. This is an easy example of the flexibility of Business Insight Advanced in creating reports. (DON'T panic: you can skip the conditional expression and just put a fixed cost on each VM if you want; see the overhead example later on. But humor me here in this lab.)
Select the Tier column to mark where we want to insert the new columns.
Select the Toolbox tab at the lower right corner and double-click the Query Calculation icon.
In the Create Calculation dialog box, name the column VM Service Level, select Other Expression, and click OK.
In the Data Item Expression dialog box, copy and paste the following VM Service Level conditional expression into the Expression Definition box and select OK. (Note: if you are remoted into the OnCommand Insight server, you may have to create a text document on the OnCommand server desktop to paste this into before pasting it into the Expression box.)
Below is an example of the conditional expression that gives you the if-else logic for VM Service Level:
IF ([Processors] = 2 AND [Memory] < 2049)
THEN ('Bronze')
ELSE (IF ([Processors] = 2 AND [Memory] < 4097)
THEN ('Bronze_Platinum')
ELSE (IF ([Processors] = 4 AND [Memory] < 8193)
THEN ('Silver')
ELSE (IF ([Processors] = 4 AND [Memory] > 8193)
THEN ('Silver_Platinum')
ELSE (IF ([Processors] = 6 AND [Memory] > 8191)
THEN ('Gold')
ELSE (IF ([Processors] = 8 AND [Memory] > 16383)
THEN ('Gold_Platinum')
ELSE ('tbd'))))))
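If you want to sanity-check the thresholds before pasting the expression, the same mapping can be sketched in Python (an illustration only, not a lab step; memory values are in MB, as the expression assumes):

```python
# Sketch of the VM Service Level mapping used by the conditional
# expression above: processors + memory (MB) -> service level name.

def vm_service_level(processors, memory_mb):
    if processors == 2 and memory_mb < 2049:
        return "Bronze"
    elif processors == 2 and memory_mb < 4097:
        return "Bronze_Platinum"
    elif processors == 4 and memory_mb < 8193:
        return "Silver"
    elif processors == 4 and memory_mb > 8193:
        return "Silver_Platinum"
    elif processors == 6 and memory_mb > 8191:
        return "Gold"
    elif processors == 8 and memory_mb > 16383:
        return "Gold_Platinum"
    return "tbd"

print(vm_service_level(2, 1024))   # Bronze
print(vm_service_level(4, 16384))  # Silver_Platinum
```

Note that the thresholds are not airtight (for example, a 4-CPU VM with exactly 8193 MB falls through to "tbd"); in your own report you would tune the boundaries to your customer's sizing standards.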
Business Insight Advanced validates the conditional expression (nice to know if you got it right), creates the column called VM Service Level, and populates it based on the query. (If you get an error, your conditional expression probably has a syntax or other error.)
You will see a new column called VM Service Level with the various service levels for each VM based on the number of CPUs and the memory each has. (At this point there may be duplicates in the list, but not to worry: we are not finished formatting or grouping the report.)
Next, let's add a column that calculates the cost per VM based on the service levels we just established.
Select the Toolbox tab at the lower right corner and double-click the Query Calculation icon.
In the Create Calculation dialog box, name the column Cost Per VM, select Other Expression, and click OK.
In the Data Item Expression dialog box, paste the conditional expression for Cost of VM (below) into the Expression Definition box and select OK.
Example of the conditional expression for Cost per VM:
IF ([VM Service Level] = 'Bronze') THEN (10)
ELSE (IF ([VM Service Level] = 'Bronze_Platinum') THEN (15)
ELSE (IF ([VM Service Level] = 'Silver') THEN (20)
ELSE (IF ([VM Service Level] = 'Silver_Platinum') THEN (25)
ELSE (IF ([VM Service Level] = 'Gold') THEN (40)
ELSE (IF ([VM Service Level] = 'Gold_Platinum') THEN (55)
ELSE (30))))))
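The same cost table, sketched in Python for quick verification (the dollar amounts are this lab's example values, not recommendations; unknown levels fall back to 30, matching the final ELSE):

```python
# Sketch: per-VM cost by service level, mirroring the conditional
# expression above. Levels not in the table cost 30 (the ELSE case).

VM_COSTS = {
    "Bronze": 10, "Bronze_Platinum": 15,
    "Silver": 20, "Silver_Platinum": 25,
    "Gold": 40, "Gold_Platinum": 55,
}

def cost_per_vm(service_level):
    return VM_COSTS.get(service_level, 30)

print(cost_per_vm("Silver"))  # 20
print(cost_per_vm("tbd"))     # 30
```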
You will see a new column called Cost Per VM with the variable cost for each VM based on its service level.
Next, format the data in the Cost Per VM column as USD currency, as you did above.
ADDING FIXED OVERHEAD COSTS TO YOUR CHARGEBACK REPORT
Let's say the customer has determined that the total cost of overhead (including items like heat/AC, floor space, power, rent, operations personnel, helpdesk, etc.) is $24 per VM. Let's create a column called Cost of Overhead and apply this fixed cost. (Note: you can apply any fixed cost this way, rather than using SQL.)
Select the Toolbox tab at the lower right corner and double-click the Query Calculation icon.
In the Create Calculation dialog box, name the column Cost of Overhead, select Other Expression, and click OK.
In the Data Item Expression dialog box, enter a cost of 24 in the Expression Definition box and select OK.
You will see a new column called Cost of Overhead with 24 for each VM. (Note: at this point there may be duplicates in the list, but not to worry: we are not finished formatting or grouping the report.)
Next, format the data in the Cost of Overhead column as USD currency, as you did above. Then drag the column header and drop it to the right of the Storage Cost column, as shown below.
Subtotaling, naming, and saving the report
Now that we have a cost per VM, overhead, and the cost of storage usage by tenant, application, and VM, let's sum the total costs and finish formatting the report by tenant and application.
Hold the Control key down and select a numeric cell in each of the Cost Per VM, Storage Cost, and Cost of Overhead columns. Right-click one of the numeric cells, select Calculate, and choose the add function for the three columns.
This creates a new column called "Cost Per VM + Storage Cost + Cost of Overhead" and calculates each row.
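Row by row, the new column is simple addition. Here is a minimal sketch of the full chargeback arithmetic for one VM, using illustrative figures that mirror the steps in this lab (the function and values are assumptions, not the report's internals):

```python
# Sketch: total cost of services per VM row, combining the three
# cost columns built in this lab. All figures are illustrative.

def total_cost_of_services(cost_per_vm, tier_cost_per_gb, provisioned_gb,
                           overhead=24):
    storage_cost = tier_cost_per_gb * provisioned_gb   # the multiply step
    return cost_per_vm + storage_cost + overhead       # the add step

# e.g. a Silver VM (20) with 100 GB on a $0.50/GB tier plus $24 overhead:
print(total_cost_of_services(20, 0.50, 100))  # 20 + 50 + 24 = 94.0
```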
Now format the column as USD currency and re-title it "Total Cost of Services".
Name the report "Total Storage, VM, and Overhead Cost by Tenant and Application Chargeback (Showback)" by double-clicking the title area.
Now save it to the Customer Report folder using the same name.
FORMATTING AND GROUPING THE REPORT BY APPLICATION AND TENANT
We are not done yet. Now we need to format the report by grouping, subtotaling, and totaling by tenant and application.
Hold the Control key down and select the Cost Per VM, Provisioned Capacity, Storage Cost, Cost of Overhead, and Total Cost of Services columns.
Select the Total icon from the Summary dropdown icon.
If you page down to the bottom of the report, you will see total columns. We'll clean up the summary rows in a minute.
Let's group the report by Tenant and Application.
Hold the Control key down and select the Tenant and Application columns.
Select the Grouping icon from the top toolbar.
CLEANING UP THE REPORT AND RUNNING IT
To clean up the report, right-click and delete the summary ROWS (not columns).
Then go to the bottom of the report, hold the Control key and select both summary rows, then right-click and delete them. (Leave the TOTAL rows.)
Save the report.
Now let's run the report to see how it looks.
Select the Run icon from the toolbar and run the report as HTML. (Note the other formats you can run it in if you want...)
The report will look like this in its final format. I've paged down in the report below to show you subtotals; you can page to the bottom and see the totals by company and the total of all resources charged.
These reports are extremely flexible. Notice the drill-down link in the Tenant column (pictured above in the red circle). If you click the LINK, you will drill down from Tenant to Line of Business, then to Business Unit, etc. If you right-click the link, you can drill up as well.
You can now schedule this report to run and be distributed in various formats, like any other OnCommand Insight data warehouse report.
Remember, now that you have created this report, every time you run it, it will provide the latest usage information. You can automate the report by scheduling it to run and email itself to recipients, etc. Lots of flexibility...
7.2 OTHER OPTIONS FOR AD-HOC REPORTS USING QUERY STUDIO
You can also create simple ad-hoc reports using Query Studio. A very simple example is shown here.
Log onto the data warehouse using admin/admin123 (you must be logged on as admin to use Query Studio).
From Public Folders, select the Chargeback Datamart.
Select the Launch menu in the upper right corner of the view and select Query Studio.
The Datamart is split into a "simple Datamart" and an "advanced Datamart". The simple DM contains the elements most users use for reports; the advanced DM contains all the facts and dimensions for all the elements. At this point, we'll create this report using the simple DM to show you how easy it is.
Expand the Simple DM and do the following:
Click and drag Business Unit to the palette.
Click and drag the Application element to the palette. You see the applications line up with their proper business units.
Click and drag Tier to the palette to organize the storage usage by tier.
Click and drag "Provisioned raw by GB". (You can select megabytes or terabytes as well as gigabytes; I've selected GB because this is from a volume and application perspective.)
To calculate cost, we need to add the "Tier cost" element to the report.
Click and drag the "Tier cost" element over and place it between the Provisioned Raw and Tier columns.
To filter out any storage without an associated tier cost, right-click the heading of the Tier Cost column and select Filter (see below for reference):
o Select "Show only the following" (default)
o Select "Missing values" to expand it
o Select "Leave out missing values"
o Select OK
See the results below.
Now let's calculate the total cost of usage by GB per application.
Hold the Control key and highlight the "Provisioned Capacity" and Tier Cost columns until they show yellow.
Select the green Calculation icon at the top of the edit icons, or right-click on the columns and select "Calculate".
In the calculation window, select multiplication, title the new column "Cost for Storage", and click Insert. It creates a new column and completes the calculation.
To format the column, right-click on the new column and select Format Data.
Select currency, the number of decimal places (usually 0), and the 1000s separator, then click OK. See how the column is formatted now.
Double-click the title on the report and re-title the report "Chargeback by Application and BU".
Now you don't really need the Tier Cost column, so you can delete it by right-clicking on the column and selecting Delete.
This is a good raw report, but now let's make it more useful.
To group storage cost by business unit and application:
Select the Business Unit column (it turns yellow) and select the Group By icon on the top line.
You see the report reformat itself into cost by application by business unit.
Click the "Save As" icon and save the report to the public folders.
Further Editing
You can go back and further edit the report. For example, let's filter out all the N/A values in the BU and Application columns. You have to do this one column at a time.
Right-click the BU column and select Filter.
In the filter dialog window, select "Do not show the following (NOT)" from the "Condition" dropdown.
Select N/A and click OK.
Do the same for the Application column.
Then save the report again.
As you see, you now have a better-quality report.
To exit Query Studio, click the "Return" icon at the top right corner of the screen.
8 SCHEDULING REPORTS FOR DISTRIBUTION
OK, now that the report is saved, let's schedule it for running and distribution. You can schedule all the built-in reports in OnCommand Insight.
Go to the chargeback report we just created (you should be looking at wherever you saved it).
Select the Schedule icon on the right-hand side, where you can set the properties.
As you see on the right, you can schedule start and finish dates.
You can send this report just one time by clicking Disable.
Set the schedule options for weekly, daily, monthly, etc. Schedule this report to run and be sent to yourself at 3 p.m. every Tuesday until Feb 1, 2012. As you can see, you can schedule biweekly, several times a week, or several times a day, or you can set it up by month, by year, and even by trigger. Lots of options.
There are a lot of options for the report format. The default format is HTML, but we can override that default by clicking and choosing from PDF, Excel, XML, CSV, etc.
For delivery, we can e-mail the report, save it, or print it to a specific printer. You can send the report via e-mail to users, distribution lists, etc. We can include a link to the report or attach it directly to the e-mail as well. NOTE: Recipients must be able to log in to OnCommand DWH to access the link.
When you are done, click OK, and the schedule is set.
9 ENDING COMMENTS AND FEEDBACK
I hope this lab was of value to you. Your feedback is important to the quality of this lab document. Please provide feedback to Dave Collins at davec@netapp.com.
NetApp provides no representations or warranties regarding the accuracy, reliability, or serviceability of any information or recommendations provided in this publication, or with respect to any results that may be obtained by the use of the information or observance of any recommendations provided herein. The information in this document is distributed AS IS, and the use of this information or the implementation of any recommendations or techniques herein is a customer's responsibility and depends on the customer's ability to evaluate and integrate them into the customer's operational environment. This document and the information contained herein may be used solely in connection with the NetApp products discussed in this document.
© 2012 NetApp, Inc. All rights reserved. No portions of this document may be reproduced without prior written consent of NetApp, Inc. Specifications are subject to change without notice. NetApp, the NetApp logo, Go further, faster, xxx, and xxx are trademarks or registered trademarks of NetApp, Inc. in the United States and/or other countries. All other brands or products are trademarks or registered trademarks of their respective holders and should be treated as such. TR-XXX-XX
3.3 VIOLATIONS BROWSER
Let's take a look at the Violations Browser. The Violations Browser allows us to see the impact of violations on your business elements in one place, and helps us manage the violations against all of those global policies you saw above.
From the Assurance menu on the left, select Violations Browser. (Note: At this point you might want to increase your viewing real estate by closing the navigation pane on the left. To close the navigation pane, go to Tools > OnCommand Insight Settings and uncheck the Navigation Pane box. You can use the same process to turn it back on later.)
Back in the Violations Browser, expand the All Violations explorer to reveal the violation categories. This shows violations like Datastore Latency, Disk Utilization, Volume and Internal Volume IOPS and response times, Port Balance violations, etc. These should look familiar from the global policies we reviewed a few minutes ago.
As you can see, you can look at all the violations piled up here: over 12,000 violations. (NOTE: Don't let this scare you. Usually most violations are caused by events that create multiple violations per event; you fix one event and a bunch of these go away.) We can see detail on each of these violations by performing the following:
Select the Show Violations Impact icon to view all the violations in context by business entity, application, host, virtual machine, datacenter, etc.
Expand the Impacted Business Entity explorer and drill down to Earth Thermal Tracking
Expand and select Disk Utilization
Sort the Description column descending. Now you can see Element, Description, Severity, and Violation Type.
Select the top violation element, called Disk DISK-14 of Storage Virtualizer.
In the Impact Details microview, toggle the Hosts, Virtual Machines, Applications and Business Entities, and Storage icons to view the details of the impact of the violations. Here we see the impact on one application called City Limits, owned by one business entity called Green Corp > Alternate Energy > Geothermal > Earth Thermal Tracking, on one host. However, 10 virtual machines are affected by this violation from one array.
The chart in the Violation Event microview shows the history and trending of the utilization on this one disk over time. From here we can analyze the performance details, as we'll see later in this demo.
REVIEW
What are the categories that show the impact of violations?
What Business Entity is impacted by these violations?
What is the utilization of this disk?
Which hosts are being affected by this violation?
Which VMs are being affected by the violation?
We'll do some troubleshooting using these violations and Analyze Performance later.
3.4 PORT BALANCE VIOLATIONS
Let's take a look at port balance violations. These are violations showing imbalance in SAN traffic from hosts, arrays, and switches. These are not performance-related violations.
Using either the Navigation Pane or the dropdown menu, open Assurance.
Select Port Balance Violations
Group by Type then Device
Expand Hosts and sort the Device column ascending
Select the device Host nj_exch002. Note that this host has a balance index of 81. That represents the difference in the distribution of traffic (the load) between the HBAs on this host. Any index over 50 indicates significantly unbalanced ports on a device.
Select the Switch Port Performance microview. Note that over 88% of the traffic distribution is going across one HBA and only 11% of the traffic across the other. A failure on the heavily used HBA could choke that application. This could indicate that the port balancing software is misconfigured on this server, not configured at all, or not installed.
Now collapse the Hosts and expand the Storage devices
Select various storage devices and view the traffic distribution in the switch port performance microview to understand the balance across the storage ports
These port balance violations provide valuable data on how your environment is configured and optimized. They allow you to quickly determine where you need to optimize your configurations based on actual usage. These are balance violations within each device, not necessarily traffic-related performance violations. We'll look at performance a bit later.
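OnCommand Insight's exact balance index formula isn't documented in this guide, but the idea behind it can be illustrated with a simple sketch: score 0 when traffic is spread evenly across a device's ports and 100 when one port carries everything. This is an assumed, illustrative metric, not Insight's actual calculation:

```python
# Hypothetical imbalance metric -- NOT OnCommand Insight's real balance
# index formula, just an illustration of the idea: 0 = perfectly even
# traffic across ports, 100 = one port carries all the traffic.
def imbalance(traffic_per_port):
    total = sum(traffic_per_port)
    n = len(traffic_per_port)
    if total == 0 or n < 2:
        return 0.0
    max_share = max(traffic_per_port) / total
    ideal = 1.0 / n                          # share each port would get if even
    return 100.0 * (max_share - ideal) / (1.0 - ideal)

print(imbalance([88, 12]))     # heavily skewed two-HBA host, like nj_exch002
print(imbalance([33, 33, 34])) # well-balanced three-HBA host, like ny_ora1
```

Whatever the real formula, the takeaway is the same as in the GUI: a high score flags a device whose traffic leans on one port, which is both a performance-optimization and a failure-exposure concern.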
3.5 DISK UTILIZATION VIOLATIONS
We looked at Disk Utilization Violations in the Violations Browser a few minutes ago because we were alerted to a violation. But if you didn't get an alert, you can go directly to Disk Utilization Violations here and troubleshoot your issues similarly to how we did in the Violations Browser. The difference is that the Violations Browser breaks down the violations by how they impact your business entities, applications, data centers, etc., so you can troubleshoot by business priority, while Disk Utilization Violations lets you easily see your most critical utilization issues and troubleshoot from the disk utilization violation back to the hosts. You can also add columns to show Applications and Business Entities if you want.
Let's take a look at how you can use Disk Utilization Violations to quickly identify and drill down to where your issue is.
Select Disk Utilization Violations from the Assurance Menu
Sort the Utilization column descending to bring your heaviest utilization to the top
Here you see the utilization of each disk that exceeded the Disk Utilization threshold we set earlier in our global policy. For each violation you see the disk, the array, the hosts that access this disk, the date and time the violation occurred, and the percentage of utilization, as well as IOPS and throughput.
Select the disk with the highest utilization
Now select the Volume Usage of Disk microview to get details on volume usage and
performance
Sort Disk IOPS descending and select the top Volume Usage of Disk. Here I see the volume with the highest usage, along with the disk throughput and percentage info by volume and host.
(Screenshot callouts: Exceeded Threshold; Host with highest IOPS; Switch Traffic OK.)
Select the Switch Port Performance microview. I see that my load appears to be balanced (Distribution column) across the storage ports, so it's most likely not a SAN or network configuration issue.
Since this disk did cause a utilization violation, I can identify possible host candidates that could be causing the high utilization on the disk, OR I can see that the disk may have too many volumes carved from it and I may need to spread that load out across more disks.
4 PERFORMANCE
OnCommand Insight provides performance information end to end. This is different from the violations we discussed above in that it provides pure performance information for Volumes, Internal Volumes (FlexVol volumes), Storage Pools (aggregates), and Disks. Performance also covers VMs, switches, ESX, Hyper-V, VMDKs, and datastores. From here we can troubleshoot congestion, contention, and bottlenecks; identify heavily used storage pools, volumes, disks, and SAN ports; find possible candidates for physical-to-virtual host virtualization; and optimize your storage and tiering.
From the Navigation Pane or the dropdown menu expand the Performance menu
Here we see that OnCommand Insight collects and shows you storage performance, switch performance, datastore performance, VM performance, and even application performance as it relates to storage performance.
4.1 STORAGE PERFORMANCE
From the Performance menu select Storage Performance
Sort Top Volume IOPS column descending (if not already done)
Select the array called Sym-0000500743… in the main view (it should be near the top).
Use Scroll Bars to see more performance info in all windows
Use the horizontal slide bar in the main view to see the volume response times and IOPS, as well as the disk utilization and IOPS columns (far right). Notice there is no Internal Volume performance information, because the EMC Symmetrix does not contain any. We'll look at a NetApp array shortly to see Internal Volume (FlexVol) performance.
Now, in the main view, notice the column called Volume Partial R/W. This indicates there are volumes on that array that are misaligned (we'll see more detail later).
Open the microviews at the bottom to show details of disk performance and volume performance. Which microviews did you open? (Hint: see below.) Notice this provides detailed throughput, IOPS, and response times at the volume and disk level.
Close the Disk Performance microview
Select the column customize icon in the header of the Volume Performance
microview
Use the vertical scroll bar to view all the columns that can be added to or removed from this report.
Select the Partial R/W and Storage columns and click OK. This adds columns to the Volume Performance report showing you each volume on each array that is misaligned. (Note: you can get a complete list of all your misaligned volumes by selecting all the arrays in the main view above.) Additionally, you can group the volumes by storage to make it easier to view all the misaligned volumes across your entire enterprise by array. (See figure below.)
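The triage that the Partial R/W column enables can be sketched as a simple filter over per-volume counters. The 10% cutoff, the column pairing, and the sample data are all assumptions for illustration, not Insight defaults:

```python
# Hypothetical sketch of using a Partial R/W counter to flag suspect
# volumes: a high share of partial (misaligned) I/O relative to total
# IOPS marks a volume for alignment review. Data and the 10% cutoff
# are made up for illustration.
volumes = [
    # (array, volume, total_iops, partial_rw_iops)
    ("Sym-0000500743", "vol_001", 2000, 450),
    ("Sym-0000500743", "vol_002", 1500,  20),
    ("NetAppFAS01",    "vol_sql",  900, 200),
]

misaligned = [
    (array, vol)
    for array, vol, total, partial in volumes
    if total and partial / total > 0.10   # >10% partial I/O => suspect
]
print(misaligned)
```

Grouping the flagged volumes by array, as the GUI's group-by-storage view does, then gives you one remediation list per array.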
Now select the Symmetrix-FAST array and toggle the Chart microview on. Here you see OnCommand Insight showing EMC FAST auto-tiering. You can also see the NetApp Hybrid Aggregates by selecting the corresponding NetApp array. We can also chart this performance over time.
(Screenshot callouts: Partial Read/Write indicates volume misalignment; notice FAST volume performance.)
OnCommand Insight provides complete end-to-end performance views through virtualized storage. Let's take a look.
Select the storage array called Virtualizer from the main view. (Note: This is a V-Series machine, but OCI provides the same visibility through other virtualizers as well.)
Toggle on the Virtual Machine Performance microview
Toggle the Backend Volume Performance and Datastore Performance microviews on.
Use the slide bars at the bottom of the microviews to see more of the performance columns in each view. (Whoops… there is a red mark in the Latency column. We will analyze this later.)
I know this is a bit busy, but I wanted to show you that you can have deep performance visibility from the VM, through the datastore, to the frontend virtualizer array, and through to the backend volumes. You can also drill performance down to the disks on that backend array, and you can select the Switch Port Performance microview to visualize the performance on the SAN. So you can see very deep performance information from end to end. We will "analyze" this performance end to end a bit later.
4.2 SAN PERFORMANCE (SWITCH PORT PERFORMANCE)
Switch performance is the actual performance on the SAN at the switch
Select Switch Port Performance from the Performance menu. OnCommand Insight knows whether the switches are connected to arrays or hosts, so it shows you the performance in the context of the host or array instead of from the switch perspective.
Using the dropdown at the top of the table, group the main view by "Connected Device Type then Name."
Using the dropdown next to it, set the timeframe to "Last Week" and hit the refresh icon to the right.
Sort the Distribution column Descending (arrow pointing down)
Expand Hosts
Expand hosts ny_ora1 and exchange_ny1
If you look at the Value and Distribution columns, you can see how the HBAs are balanced on these hosts. On host ny_ora1 you see three HBAs that are balanced very well. But looking at host exchange_ny1, you see that one of your two HBAs has over 95% of the traffic load on it, while the other has less than 5%. So you can see an imbalance of the load across your HBAs. Perhaps the multipath software is not configured correctly, doesn't work, or is installed but not turned on. However, also note that one HBA is 4Gb and the other is 2Gb. The admins may have purposely configured this host's traffic to compensate for the slower HBA…
Select the Port Performance Distribution and Port Performance microviews to view
this analysis over time
Select exchange_ny1 from the main view above. View the performance and distribution of both HBAs. If you select one or the other, the performance and distribution charts change to show you the details of what you've selected. This behavior is the same throughout the OnCommand Insight GUI.
4.3 CANDIDATES FOR HOST VIRTUALIZATION BASED ON ACTUAL PERFORMANCE
This performance view, from the host SAN perspective, shows you which are the busiest servers and which are candidates for virtualization.
Toggle off the two performance charts.
Collapse the expanded columns using the Collapse All Groups icon on top of the main view.
Expand Hosts again. Notice your busiest servers are at the top of the list.
Use the vertical slide bar to go to the bottom of the host list to see your least busy hosts. As you see here, there are many hosts near the bottom that have hardly any traffic. Note: If you have a virtualization project going on, you can very quickly isolate which physical hosts don't have much traffic to their applications and conduct your due diligence on those applications for possible relocation to a VM environment.
You can also use the same information to choose which ESX hosts are good candidates to receive those applications, based on how much traffic they are generating on the SAN.
4.4 STORAGE ARRAY PERFORMANCE BASED ON SAN TRAFFIC
We use the same logic and methods to optimize the traffic across the storage ports of the arrays.
Collapse the Hosts section and expand the Storage section
You can see the busiest arrays at the top
Expand storage array XP 1024 to see the traffic flow through the storage ports. In this case, over 80% of the traffic is going across two of the six ports on the storage array. Not very well balanced. You can rebalance this traffic, OR, using this information, you can select a lesser-used storage port to provision your NEXT Tier 1 application to. This helps you intelligently provision and optimize your environment using real traffic analysis.
4.5 STORAGE TIERING AND ANALYSIS
Similar to what you did with the hosts, you can see the storage arrays that are NOT so busy.
Scroll to the bottom of the Storage Array list
There are several expensive Tier 1 Symmetrix and other arrays at the bottom of this list that have very little traffic accessing them. These arrays may have lots of data on them, but nobody is using it. Armed with this information, you could take a look at the application data on these expensive Tier 1 arrays and move the applications to less expensive Tier 2 or Tier 3 arrays, OR archive the data. Then you can decommission or repurpose these expensive arrays. (LOTS of ROI potential here.)
4.6 SWITCH ISL TRAFFIC VISIBILITY AND OPTIMIZATION
OnCommand Insight shows you only the ISLs (Inter-Switch Links) under the Switches category.
Collapse the Storage category in the main view and expand the Switch category
Expand Switch 78 and hcis300
As we saw with hosts and arrays, we can see exactly how well balanced the traffic is across the ISLs. Switch hcis300 is well balanced, but on Switch 78 we see that 90% of the traffic is going across one switch link and only 9% across the other. If this is a trunk, it is severely out of balance.
We also see which are the busiest and least busy switches. This allows us to balance out (optimize) our environment, as well as weed out the least busy switches.
4.7 VIRTUAL MACHINE AND DATA STORE PERFORMANCE: TROUBLESHOOTING END-TO-END PERFORMANCE ISSUES USING "ANALYZE PERFORMANCE"
Letrsquos put all this performance information to good use
USE CASE: I may have gotten a call from a user complaining that the application on VM-70 is running slow, or I may have received an alert from a threshold being breached. Let's troubleshoot the problem.
Select Virtual Machine Performance
Select Custom from the "Timeframe" dropdown menu next to the grouping menu. Enter the dates January 1, 2012 through now.
Then hit the green refresh button next to the dropdown.
Sort the VM Disk Top Latency column descending to get the longest latency at the top
Here we see that, in fact, VM-70 does not appear to have any performance issues, but we do see very high CPU, Memory, and Datastore Latency on VM-60 and VM-61.
Look at column 2. The common factor between VM-70 (the user complaint) and VM-60 is DS-30.
Open the Datastore Performance microview to validate the high latency.
Right-click on VM-60 and select Analyze Performance.
This opens an analysis of everything associated with VM-60 and DS-30.
See the tabs across the top of the window. Each of these tabs provides in-depth visibility into performance within its category.
Selecting the Disk tab, I see that although I have a few high "top" utilizations, overall utilization and IOPS are relatively low, so I can rule out a hot disk issue.
Select the Volumes tab and the Internal Volumes tab. I see there are some relatively high top response times but still very low IOPS, which tells me other factors are affecting response time and the slowness of the application on VM-70.
Select Backend Volumes. We see the storage is virtualized, and we can see the performance of the backend volumes here. I see some possibly higher IOPS but still no glaring issues in performance.
To make sure I don't have a SAN problem, I select the Switch Performance tab. It shows an imbalance between the 2Gb HBAs on ESX1, where VM-60 and VM-70 reside: a potential optimization or outage issue, but no gridlock.
Select the Hosts tab. This tab shows me that host ESX1 is the same host that holds VM-60 and VM-70. VM-60 appears to be causing very high CPU and memory usage, which is causing contention with time sharing during disk access, thus creating high disk latency. But the disk IOPS are still very low.
We can deduce that VM-60 is probably not sized right for the application that is driving it so hard. This is probably what's causing the disk latency issue, so the chances of a disk issue are slim.
4.8 VM PERFORMANCE
VM Performance helps you troubleshoot the same scenarios. Here you can understand what's going on in the whole environment.
Select Virtual Machine Performance
Sort the Top Disk Latency column in descending order so the largest latency rises to the top. In this case, VM-61 is chewing up a lot of memory and a lot of CPU time but generating low disk IOPS. The VM itself appears to be causing the latency issues.
Select VM-61. You can open a microview and see the VMDK performance as well.
Add the Chart microview.
You can also break it out by volume performance and datastore performance, giving you a more holistic picture of the environment and helping you troubleshoot to resolution.
The takeaway is that you can troubleshoot performance issues from many different angles and go in many different directions to quickly narrow down the problem.
4.9 APPLICATION AND HOST PERFORMANCE
You can add your applications and hosts to any of these performance views to help you understand how performance is affecting your applications. That is important to the business customer. You can drill down and understand where the performance issue is, with visibility from the application all the way to the disks.
Scroll down to ESX1
Use the horizontal slide bars in the main and microviews to see performance info
OnCommand Insight shows you performance from the host perspective all the way back to the storage, but remember, it does not have agents on the hosts, so it cannot show you the details of performance on the host itself.
Review questions:
What is the value of Analyze Performance?
What areas can we view performance metrics for under Analyze Performance?
5 PLANNING TOOLS
5.1 TASK AND ACTION PLANNING AND VALIDATION
OnCommand Insight has two planning tools to help you plan, validate, and monitor changes in your environment. One is a change management tool, and the other is a migration tool for switches only.
The change management tool (or "what-if" tool) helps you create tasks and actions within those tasks using a wizard. It helps you logically configure changes that you need to make, test and validate those changes before you make them, and monitor the progress of changes as you make them. This significantly reduces your risk when making changes, because you can pretest them before you make any actual changes in your environment.
NOTE: Remember, OnCommand Insight is a READ-ONLY tool, so it does not perform any active tasks. Use it in the planning, validation, and execution monitoring of your change management.
Select the Planning menu.
Select Plans to access the tool.
Select the task ID oadmin 01082007 – Replace HBA Clearcase1.
Notice the Actions list for the task. These are generated by you to help you logically and accurately list out the tasks.
To add more actions, simply right-click in the action area and select "Add Action."
In the New Action window, scroll down and select the action you want to perform. You can add a description and other parameters, then select OK.
Then you can pre-validate the actions to ensure you know the results of each action BEFORE you actually perform the task. To do this, right-click the task and select Validate Task.
As you see below, OnCommand Insight validates each action against the current configuration in your environment, showing what has been completed correctly (GREEN CHECKMARK), what is not completed (BLANK BOX), and what was not completed correctly (RED X).
When you build the action list, OnCommand Insight automatically compares your planned changes to your existing environment and anticipates any future violations that could occur if you made these changes without correcting the planned actions, OR violations that already exist in your environment.
Once you complete creating your list of action items, you can right-click and validate the actions as many times as you want until they are completed. OnCommand Insight validates every one of these actions. It shows you whether the actions are complete, done incorrectly, or not completed at all. It gives you a preview of potential issues before you make the changes, thus lowering your risk.
5.2 SWITCH MIGRATION TOOL
The migration tool provides you with instantaneous visibility into all of the environment and business entities that will be affected by a migration to new or updated switches. Say you want to just update the firmware on a switch. What if it goes down in the middle of the upgrade? What does it affect in your environment? Knowing this ahead of time can reduce your risk by giving you the complete picture of who and what will be affected by the interruption.
The migration tool allows you to tell OnCommand Insight which switches you want to upgrade or replace. Because OnCommand Insight knows all the hosts, storage arrays, volumes, business units, and applications that are affected by this change, it can provide you with the current violations as well as the future violations that will occur when the switches are pulled out. This enables you to validate the total impact of the changes you want to make BEFORE you make them, so you can reduce your risk by fixing issues before they occur.
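Conceptually, this impact analysis is a lookup over the path topology Insight already maintains: find every known path that runs through the switch being migrated. The sketch below uses device names from this lab's demo environment, but the path list itself is hypothetical:

```python
# Sketch of the impact analysis idea: given known paths (host -> switch ->
# array), removing a switch reveals every host/array pair whose listed path
# would be interrupted. The topology below is invented for illustration.
paths = [
    # (host, switch, storage_array)
    ("ny_ora1",      "Switch 78", "XP 1024"),
    ("exchange_ny1", "Switch 78", "Symmetrix-FAST"),
    ("nj_exch002",   "hcis300",   "Virtualizer"),
]

def impacted_by(switch, paths):
    """Host/array pairs whose path traverses the given switch."""
    return [(h, a) for h, s, a in paths if s == switch]

print(impacted_by("Switch 78", paths))
```

In the real tool, each impacted path also rolls up to its applications and business entities, which is what makes the pre-migration due diligence fast.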
NOTE: Remember, OnCommand Insight is a READ-ONLY tool, so it does not perform any migration tasks. Use it in the planning and execution monitoring of your migration.
Under the Planning menu, select Migrations. This shows you the migration tasks already created and the existing impact on your business entities of the proposed changes.
To add a new task, right-click in the task area and select Add Task.
Complete the task details above and click Next to select the switch(es) to migrate.
Select the switches to be updated or replaced and click Finish.
Select the new task in the main screen and use the microviews to see the affected paths, impact, and quality assurance views.
Using this information, you can speed up your switch migrations, because it cuts the due diligence time and lowers your risk: you know the impacts before you take any actions.
6 DATA WAREHOUSE
6.1 INTRODUCTION AND OVERVIEW
Let's introduce you to the data warehouse. We'll talk about the datamarts and navigation, then we'll go into the reports, and we'll finish by showing you how to create ad hoc reports using Query Studio.
The data warehouse is made up of several datamarts. Datamarts are sets of data that relate to each other.
Open a browser and go to http://localhost:8080/reporting.
Log on using admin / admin123.
If you receive this page, uncheck the "Show this page…" option and select My Home.
Data warehouse (DWH) Home Page: Public Folders
The data warehouse has several built-in datamarts. Above you see the three primary datamarts: the Chargeback Datamart, the Inventory Datamart, and the Storage Efficiency Datamart. Additionally, there are two folders which contain other datamarts for Capacity and Performance.
Select the Capacity 6.3 folder.
As you can see, there are other capacity-related datamarts, including the Internal Volume, Volume, Storage and Storage Pool, and VM Capacity datamarts. Datamarts provide you with easy-to-use data elements related to those specific categories, making it easier for you to use the existing reports and, more importantly, helping you create your own custom reports using the drag-and-drop approach we'll show later in this lab.
Select the Storage Capacity Datamart
(Screenshot callouts: Datamarts and folders; Public Folders.)
There are four folders located within EVERY datamart. Most built-in reports are in the Reports folder. Any custom reports you create MUST be saved in the Customer Reports or Customer Dashboards folders in order to preserve them during upgrades.
Select Dashboards (notice the BREADCRUMBS to help you navigate).
Which dashboards are located in the folder?
6.2 PLAN - CAPACITY FORECAST DASHBOARD
The data warehouse has over 200 built-in dashboards and reports. Let's take a look at a few.
The Capacity Forecast dashboard provides a history of how storage has been used, as well as trends and forecasts into the future. It shows this by datacenter and by tier.
Select the Capacity Dashboard. This may take a bit of time to paint, so be patient.
The Capacity Forecast dashboard provides you with trending and forecasting of your capacity across your entire environment. NOTE: Your data may vary from the picture, depending on the demo database you are using and the date (because it's a trending chart).
While we are at it, let's also stage the Tiering Dashboard in a new window, by holding the Shift key while selecting the Tiering Dashboard, so we can discuss it as well in a few minutes.
(Screenshot callouts: Folders; Dashboards; Reports; Navigation; Breadcrumbs.)
When it first opens, you see in the upper left the Capacity Consumption Forecast report by datacenter and tier. The initial view shows how much storage is left in each datacenter, by tier, before it reaches 80% of capacity (the threshold is adjustable by the user). The graph on the right depicts the usage trending and forecasting over time. The "Reset Selection" button resets the graphic to show storage trending across the entire enterprise.
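The "how much runway before 80%" idea behind this dashboard can be sketched with a simple linear extrapolation. The samples and the straight-line fit are illustrative assumptions; the dashboard's actual forecasting model isn't documented here:

```python
# Rough sketch of capacity forecasting: fit a least-squares line to daily
# used-TB samples and extrapolate to the day usage crosses the threshold
# (80% of raw capacity by default). Sample numbers are made up.
def days_until_threshold(samples, capacity_tb, threshold=0.80):
    n = len(samples)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(samples) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, samples)) \
            / sum((x - mean_x) ** 2 for x in xs)
    if slope <= 0:
        return None                        # usage flat or shrinking: no ETA
    limit = capacity_tb * threshold
    return max(0.0, (limit - samples[-1]) / slope)  # days past the last sample

# 100 TB array growing 1 TB/day from 60 TB: 16 days left to reach 80 TB
print(days_until_threshold([60, 61, 62, 63, 64], capacity_tb=100))
```

The real dashboard does this per datacenter/tier cell of the matrix, which is why clicking a cell redraws the trend chart for just that slice.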
Select the Tokyo / Gold-Fast block on the matrix. Notice the graph at the right changes to reflect the storage consumption trending and forecasting for that tier at that datacenter.
Reset the Storage Capacity Trend chart by clicking Reset Selection next to the matrix chart. The chart on the right will show the trending and forecasting for the entire enterprise.
Scroll down the dashboard to view the list of reports on the right side Each of the
dashboards has a list of related reports on the lower right hand side You can select
from any number of different reports to provide the detailed information that you need
The dashboard also contains some dial graphics showing you storage consumption and capacity in your enterprise and each datacenter
Continuing down the left side of the dashboard, these charts show you business-level storage consumption by business entity. Here we can drill down to see usage by Tenant, Line of Business, Business Unit, and Project.
Right-click in this graphic, and you can drill down to view storage usage by line of business, drill again to business unit, and again by project.
As you can see, you get really detailed information on consumption by your business entities (Tenant, LOB, Business Unit, Project, and Application) very quickly.
6.3 TIER DASHBOARD
Let's take a look at the Tier Dashboard that we opened a few minutes ago, by selecting it from the tabs at the bottom of your Windows screen.
Note: Your data may vary depending on the database used for this demo.
This dashboard gives us a different perspective on how storage is growing and how it is being used. As you see, it looks like the Gold tier has remained relatively stable over the past few months, while Gold-Fast storage, which is more expensive, has grown considerably over the past couple of months. This tells you how your tiering initiatives are progressing. Bronze, which has hardly grown at all, could be an indication that we're spending too much money on storage. You might want to review your storage usage in OnCommand Insight to see how the storage is being consumed and by whom.
Scroll down and let's look a little closer. OnCommand Insight shows storage usage by business unit, application, and tier, which enables you to understand how storage is being used. You can also view it by data center, tier, and business entity.
As we did in the last report, you can right-click and drill down to look at consumption by tenant, line of business, business unit, project, and application. You can understand how your data is being consumed at multiple levels and from multiple angles.
Select the "Return" icon at the top right of the Tier Dashboard to return to the folder.
There is a new Storage Tier report located in the Storage and Storage Pool Data Mart. Let's take a quick look at it.
Use the breadcrumbs to navigate back to the Capacity 6.3 folder.
Then select the Storage and Storage Pool Capacity Datamart and the Reports folder.
Next, select the Storage Capacity By Tier report to view the report below. It shows your capacity by tier and how it trends over time. It also provides detail and summary sections at the bottom showing each array's tiers, how much capacity is used, and the percentage (lots of information on a single report).
64 ACCOUNTABILITY AND COST AWARENESS
The standard data warehouse chargeback reports are more about accountability than about true chargeback. We'll show you this now. We'll also show you how to create your own powerful "custom" chargeback/showback reports using Business Insight Advanced later in this lab.
Select Public Folders in the breadcrumbs at the top left of the Data Warehouse window.
Select the Chargeback Datamart.
In the Chargeback Datamart, select the Reports folder to access various reports that show capacity and accountability.
Select Capacity Accountability by Business Entity and Service Level Detail. Here you have the option to customize this report to your needs by selecting service levels, resource types, applications, and host and storage names. You also have the option of selecting the business entity, using the dropdown to pick any or all of the business entities and projects.
Select All in each category to get a good representation of the in-depth reporting, then click Finish.
The report provides a very detailed view of capacity utilization by business entity and application: the host it is running on, the storage array, the volume, and the actual provisioned and used storage. The report is grouped by business unit as well as by application, which gives you a good picture of who is using what storage.
Note the scroll bar for scrolling on page 1; you can also use the Page Up/Page Down links at the bottom to go to page 2 and so on.
Select the Return Icon in the upper right to return to the folder of reports
65 UNCHARGED STORAGE
You can also generate reports that help you understand what storage is NOT being accounted for.
Select "Capacity Accountability by Uncharged Capacity per Internal Volume". This provides a complete listing, by array and volume, of how much storage is not being charged or accounted for.
You get FULL accountability of which storage is being accounted for and which is NOT, across the entire enterprise, regardless of storage vendor.
66 IOPS VS CAPACITY REPORTING IN THE DATA WAREHOUSE
Let's look at performance versus capacity, and at orphaned storage by last access. This adds another dimension to how your storage is being used.
Open the Performance Datamart (hint: use the breadcrumbs to select Public Folders and then select the Performance Datamart).
Select the Internal Volume Daily Performance folder. This provides a really good pictorial view of how your storage is being used.
Select Reports and select Allocated used internal volume Count by IOPS Ranges. This provides a capacity-versus-IOPS report, which is very interesting.
Select the Last Year time period.
Select all storage models and all tiers to get a full view of how your storage is being used (or not being used...), then click Finish.
Looking at the results, remember this is storage accessed over the past year: the report shows all the storage that has (or has not) been accessed in that period.
As you see from the first bar, there are over 7,300 volumes that have not been accessed in the past year. In terms of size, over 34 PB has had zero access in the past year. Note: this is actual customer data, but the names have been sanitized.
You can see how impactful this is. There is over 34 PB of storage that has had zero use for a year. This information enables you to start making business decisions about that storage and to better understand how it is being used, so you can reclaim and repurpose some of it. (Talk about ROI!)
67 DIGGING INTO THE DETAILS
These charts are really nice, but you need the details to work effectively on identification and recovery. OK, let's go look at the underlying details.
Go back to the Volume Daily Performance 6.3 folder and drill down to Reports (hint: it's in the Performance Datamart).
Select the Array Performance report. This gives you a complete breakdown of performance for all storage, from the arrays all the way down to the volumes.
Select one year and set the IOPS parameter you want to filter on (I usually start at the default).
This report starts with the Orphan Summary. Page down to view the storage array summary.
As you see, this is pretty high level. It shows the total amount of raw and allocated capacity in each storage device versus the total IOPS and the max IOPS actually used over the past year. This tells a very compelling story, but it's still high level.
Page down a few pages to reach the bottom of this section. You will see a glossary of terms explaining the column headings.
Now continue to page down to the host tables. These show you the hostname, the raw and allocated capacity by host, and the IOPS accessed over the past year; this is more detail than the storage tables above.
Page down past the host tables and you get the orphaned-volumes perspective. Here is a great deal of detail that you can use. These are all the volumes that have not been accessed in a full year: the array name, volume capacities, and hostname, as well as the applications and tiers involved.
Page down to the "Volume by IOPS" tables (they may be several pages down). These show you the storage array, volume, capacity, host, application, tier, and the max and total IOPS. So it's a pretty well-rounded report that shows you actual usage (or lack thereof) so you can go reclaim the storage that is not used.
68 VM CAPACITY REPORTING
There are several different reports in the VM Capacity Datamart.
Navigate to the VM Capacity 6.3 Datamart. As you see, we have several reports built in here already.
Select VM Capacity 6.3 and then navigate into the Reports folder.
Select VM Capacity Summary
Select All so we see VM capacity across the entire enterprise (spanning multiple vCenters).
The results show all the VMs, their capacity, the datastore, the actual capacity, the VM names, the provisioned storage, and the commit ratio of each VM across your entire environment. NOTE: I paged down to the bottom so you can see the total storage and commitment across your whole enterprise, plus a glossary of terms.
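The commit ratio shown here can be read as provisioned VM capacity over the capacity backing it. The exact definition OnCommand Insight uses is not spelled out in this lab, so the sketch below is an assumption for illustration (ratios over 1.0 would indicate thin-provisioned, over-committed datastores):

```python
# Illustrative commit ratio: total provisioned VM capacity divided by the
# datastore's capacity. This is an assumed definition for illustration,
# not necessarily OnCommand Insight's exact calculation.
def commit_ratio(provisioned_gb_list, datastore_capacity_gb):
    return sum(provisioned_gb_list) / datastore_capacity_gb

# Three VMs provisioned 1300 GB in total on a 1000 GB datastore
print(round(commit_ratio([400, 400, 500], 1000), 2))  # → 1.3
```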
Select the "Return" button in the upper right corner of the report (it looks like a left-turn arrow).
Next, select the Inactive VMs report to show VMs that have not been accessed in a defined period of time (default: 60 days).
Set this time threshold and click Finish.
This is an excellent report showing which VMs are powered off and how long they have been powered off, as well as how much capacity each one is holding that nobody else can use. It gives you all the details, including the datacenter, VM OS, ESX host, cluster, and VMDK, and how long it has been powered off. Armed with this information, you can go recover these VMs and reclaim the storage.
7 CREATE AD-HOC REPORT
Let's show you how easy it is to create custom reports in the data warehouse.
71 HOW TO CREATE A CUSTOM SHOWBACKCHARGEBACK REPORT USING
BUSINESS INSIGHT ADVANCED
Below is a great example of the custom chargeback or showback report that you will create. It shows usage by business entity and application, including the variable cost of each VM based on its configuration, fixed overhead, and storage usage.
STEPS TO CREATE THIS REPORT
Watch a video on how to create this report. Note: You need a user name and password for this community. To obtain them, click the Become a Member link.
The OnCommand Insight Reporting Portal is accessed through http://&lt;reporting-server&gt;:8080/reporting
Enter your user name and password credentials.
From the Welcome page, select My home.
From the Launch menu (at the top right corner of the OnCommand Insight Reporting portal), select Business Insight Advanced.
From the list of packages that appears, click the Capacity &lt;version&gt; folder and then click VM Capacity &lt;version&gt;.
Create a new report by selecting New from the dropdown in the upper left corner, or Create New if you are on the Business Insight Advanced landing page.
From the pre-defined report layouts in the New pop-up, choose List and click OK.
In the lower right pane, select the Source tab and expand the Advanced Data Mart in the VM Capacity package.
From the Advanced Data Mart, expand Business Entity Hierarchy and Business Entity, then drag Tenant onto the report work area.
Collapse Advanced Data Mart and expand Simple Data Mart
From the Simple Data Mart, drag Application and place it on the report work area to the right of the Tenant column. (TIP: Make sure you place it on the blinking gray BAR to the right of the previous column, or you will get an error.)
Now we are going to drag multiple columns onto the palette at once to save time building the report. We will be reporting on the number of processors (cores) and the memory configured for each VM, so let's grab the following elements from the VM Dimension under the Advanced Data Mart.
From Advanced Data Mart expand VM Dimension
Select the next columns IN THE FOLLOWING ORDER. From Advanced Data Mart > VM Dimension, hold the Control key and select the following columns (in order):
o VM Name
o Processors
o Memory
Click and drag VM Name onto the report work area to the right of the Application column. NOTE: All the columns should follow in the order you selected them, similar to the screenshot below (your data will differ, but the columns will be the same).
Now let's bring capacity information onto the report.
From the Simple Data Mart, hold the Control key and select the following columns (in order):
o Tier
o Tier Cost
o Provisioned Capacity (GB)
Click and drag the Tier column onto the report work area to the right of the Application column. NOTE: All the columns should follow in the order you selected them, similar to the screenshot below (your data will differ, but the columns will be the same).
To create a summary of cost per GB, hold the Control key and select the Tier Cost and Provisioned Capacity (GB) columns. Then right-click the Provisioned Capacity column, select Calculate, and choose the multiplication calculation.
Business Insight Advanced creates a new column for you, completes the calculation, and puts it in the report.
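The calculated column is simply the tier's cost per GB multiplied by the provisioned GB on each row. A minimal sketch of that arithmetic (the VM names, tier names, and per-GB rates below are illustrative, not values from the demo database):

```python
# Sketch of the Storage Cost column: tier cost per GB times provisioned GB.
# All figures here are illustrative sample data.
rows = [
    {"vm": "vm-app01", "tier": "Gold",   "tier_cost": 2.50, "provisioned_gb": 200},
    {"vm": "vm-db01",  "tier": "Silver", "tier_cost": 1.25, "provisioned_gb": 500},
]
for row in rows:
    row["storage_cost"] = row["tier_cost"] * row["provisioned_gb"]

print([r["storage_cost"] for r in rows])  # → [500.0, 625.0]
```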
Next, let's format and re-title the column.
Right-click the new column heading and select Show Properties.
In the lower right corner, scroll down to the bottom of the Properties box and select the ellipsis on the Data Item Name box. Change the name to Storage Cost and click OK.
Note that the column heading is now Storage Cost.
Now select one of the numeric values in that column and select the Data Format ellipsis from the Properties box in the lower right corner.
From the Data Format dialog box, select Currency from the Format Type dropdown.
As you see from the Properties dialog box, there are lots of options you can set to format the currency numbers in this column. The default is USD, so let's just click OK to accept it. You will see the column reformat to USD.
Here is our current report. Let's filter out storage that is NOT being charged.
Select any BLANK cell in the Tier Cost column and click the filter icon in the top toolbar.
Select Exclude Null.
Notice that all the rows that had NO cost associated with their tiers are removed, leaving you with only the storage that has charges associated with it. (TIP: In another report you can reverse the logic and show only storage that is NOT being charged.)
You can also format the Tier Cost column as USD currency if you want.
OK, that was easy, but not complete. Let's add other cost factors into your chargeback report: the cost of VM service levels based on configuration, and fixed overhead costs charged to each application.
ADDING VARIABLE COSTS PER VM TO YOUR CHARGEBACK REPORT
Let's say the customer wants to charge per VM based on the number of CPUs and the amount of memory it is configured with. To do that, we first need to create a VM Service Level based on the number of CPUs and the memory configured for each VM, then allocate a cost per service level.
To create a VM Service Level, we are going to drop in a small conditional expression that builds the service level per VM. This is an easy example of the flexibility of Business Insight Advanced in creating reports. (DON'T panic: you can skip the conditional expression and just put a fixed cost on each VM if you want; see the overhead example later on. But humor me here in this lab.)
Select the Tier column to mark where we want to insert the new columns.
Select the Toolbox tab at the lower right corner and double-click the Query Calculation icon.
In the Create Calculation dialog box, name the column VM Service Level, select Other Expression, and click OK.
In the Data Item Expression dialog box, copy and paste the following VM Service Level conditional expression into the Expression Definition box and select OK. (Note: if you are remoted into the OnCommand Insight server, you may have to create a text document on the OnCommand server desktop to paste this into before pasting it into the Expression box.)
Below is an example of the conditional expression that gives you the if-else logic for VM Service Level:
IF ([Processors] = 2 AND [Memory] &lt; 2049)
THEN ('Bronze')
ELSE (IF ([Processors] = 2 AND [Memory] &lt; 4097)
THEN ('Bronze_Platinum')
ELSE IF ([Processors] = 4 AND [Memory] &lt; 8193)
THEN ('Silver')
ELSE IF ([Processors] = 4 AND [Memory] &gt; 8193)
THEN ('Silver_Platinum')
ELSE IF ([Processors] = 6 AND [Memory] &gt; 8191)
THEN ('Gold')
ELSE IF ([Processors] = 8 AND [Memory] &gt; 16383)
THEN ('Gold_Platinum')
ELSE ('tbd'))
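To make the branch structure easier to follow, here is the same logic rendered in Python, with the boundary values copied directly from the expression above (including its quirks, such as the uncovered boundary at exactly 8193 MB for 4-CPU VMs, which falls through to "tbd"):

```python
# Python rendering of the VM Service Level conditional expression above.
# Boundary values are copied verbatim from the Cognos expression.
def vm_service_level(processors, memory_mb):
    if processors == 2 and memory_mb < 2049:
        return "Bronze"
    elif processors == 2 and memory_mb < 4097:
        return "Bronze_Platinum"
    elif processors == 4 and memory_mb < 8193:
        return "Silver"
    elif processors == 4 and memory_mb > 8193:
        return "Silver_Platinum"
    elif processors == 6 and memory_mb > 8191:
        return "Gold"
    elif processors == 8 and memory_mb > 16383:
        return "Gold_Platinum"
    return "tbd"  # any configuration not matched above

print(vm_service_level(4, 4096))  # → Silver
```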
Business Insight Advanced will validate the conditional expression (nice to know if you got it right), create the column called VM Service Level, and populate it based on the query. (If you get an error, your conditional expression probably has a syntax or other mistake.)
You will see a new column added called VM Service Level, with the service level for each VM based on the number of CPUs and the memory it has. (At this point there may be duplicates in the list, but not to worry; we are not finished formatting or grouping the report.)
Next, let's add a column that calculates the cost per VM based on the service levels we just established.
Select the Toolbox tab at the lower right corner and double-click the Query Calculation icon.
In the Create Calculation dialog box, name the column Cost Per VM, select Other Expression, and click OK.
In the Data Item Expression dialog box, paste the conditional expression for Cost per VM (below) into the Expression Definition box and select OK.
Example of the conditional expression for Cost per VM:
IF ([VM Service Level] = 'Bronze') THEN (10)
ELSE (IF ([VM Service Level] = 'Bronze_Platinum') THEN (15)
ELSE IF ([VM Service Level] = 'Silver') THEN (20)
ELSE IF ([VM Service Level] = 'Silver_Platinum') THEN (25)
ELSE IF ([VM Service Level] = 'Gold') THEN (40)
ELSE IF ([VM Service Level] = 'Gold_Platinum') THEN (55)
ELSE (30))
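Because this expression only maps a label to a rate, it reads naturally as a lookup table. A Python sketch with the rates copied from the expression (the fallback of 30 corresponds to the ELSE branch covering the "tbd" level):

```python
# The Cost Per VM expression above, restated as a lookup table.
# Rates are copied from the conditional expression.
COST_PER_VM = {
    "Bronze": 10, "Bronze_Platinum": 15,
    "Silver": 20, "Silver_Platinum": 25,
    "Gold": 40,   "Gold_Platinum": 55,
}

def cost_per_vm(service_level):
    return COST_PER_VM.get(service_level, 30)  # 30 is the ELSE fallback

print(cost_per_vm("Silver"), cost_per_vm("tbd"))  # → 20 30
```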
You will see a new column added called Cost Per VM, with a variable cost for each VM based on its service level.
Next, format the data in the Cost Per VM column as USD currency, as you did above.
ADDING FIXED OVERHEAD COSTS TO YOUR CHARGEBACK REPORT
Let's say the customer has determined that the total cost of overhead (including items like heat/AC, floor space, power, rent, operations personnel, helpdesk, and so on) is $24 per VM. Let's create a column called Cost of Overhead and apply this fixed cost. (Note: you can use this approach for any fixed cost as well.)
Select the Toolbox tab at the lower right corner and double-click the Query Calculation icon, as above.
In the Create Calculation dialog box, name the column Cost of Overhead, select Other Expression, and click OK.
In the Data Item Expression dialog box, enter a cost of 24 in the Expression Definition box and select OK.
You will see a new column added called Cost of Overhead, with 24 for each VM. (Note: at this point there may be duplicates in the list, but not to worry; we are not finished formatting or grouping the report.)
Next, format the data in the Cost of Overhead column as USD currency, as you did above. Then drag the column header and drop it to the right of the Storage Cost column as shown below.
Subtotaling, naming, and saving the report
Now that we have a cost per VM, a cost of overhead, and a cost of storage usage by tenant, application, and VM, let's sum the total costs and finish formatting the report by tenant and application.
Hold the Control key down and select a numeric cell in each of the Cost Per VM, Storage Cost, and Cost of Overhead columns. Right-click one of the numeric cells, select Calculate, and choose the add function for the three columns.
This creates a new column called "Cost Per VM + Storage Cost + Cost of Overhead" and calculates each row.
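The new column is just the row-wise sum of the three cost columns. A minimal sketch of that per-row total (all figures below are illustrative sample data, not values from the demo database):

```python
# Total Cost of Services = Cost Per VM + Storage Cost + Cost of Overhead.
# All per-row figures are illustrative sample data.
rows = [
    {"vm": "vm-app01", "cost_per_vm": 20, "storage_cost": 500.0, "overhead": 24},
    {"vm": "vm-db01",  "cost_per_vm": 40, "storage_cost": 625.0, "overhead": 24},
]
for row in rows:
    row["total_cost"] = row["cost_per_vm"] + row["storage_cost"] + row["overhead"]

print([r["total_cost"] for r in rows])  # → [544.0, 689.0]
```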
Now format the column as USD currency and retitle it "Total Cost of Services".
Name the report "Total Storage, VM and Overhead Cost by Tenant and Application Chargeback (Showback)" by double-clicking the title area.
Now save it to the Customer Report folder using the same name
FORMATTING AND GROUPING THE REPORT BY APPLICATION AND TENANT
We are not done yet. Now we need to format the report by grouping, subtotaling, and totaling by tenant and application.
Hold the Control key down and select the Cost Per VM, Provisioned Capacity, Storage Cost, Cost of Overhead, and Total Cost of Services columns.
Select the Total icon from the Summary dropdown.
If you page down to the bottom of the report, you will see total columns. We'll clean up the summary rows in a minute.
Let's group the report by tenant and application.
Hold the Control key down and select the Tenant and Application columns.
Select the Grouping icon from the top toolbar.
CLEANING UP THE REPORT AND RUNNING IT
To clean up the report, right-click and delete the summary ROWS (not columns).
Then go to the bottom of the report, hold the Control key and select both summary rows, then right-click and delete them. (Leave the TOTAL rows.)
Save the report.
Now let's run the report to see how it looks.
Select the Run icon from the toolbar and run the report as HTML. (Note the other formats you can run it in if you want...)
The report will look like this in its final format. I've paged down in the report below to show you the subtotals; you can page to the bottom to see the totals by company and the total of all resources charged.
These reports are extremely flexible. Notice the drill-down link in the Tenant column (pictured above in the red circle). If you click the link, you will drill down from Tenant to Line of Business, then to Business Unit, and so on. If you right-click the link, you can drill up as well.
You can now schedule this report to run and be distributed in various formats, like any other OnCommand Insight Data Warehouse report.
Remember, now that you have created this report, every time you run it, it will provide the latest usage information. You can automate it by scheduling it to run and emailing it to recipients. Lots of flexibility...
72 OTHER OPTIONS FOR AD-HOC REPORTS USING QUERY STUDIO
You can also create simple ad hoc reports using Query Studio. A very simple example is shown here.
Log onto the data warehouse using Admin/admin123 (you must be logged on as Admin to use Query Studio).
From Public Folders, select the Chargeback Datamart.
Select the Launch menu in the upper right corner of the view and select Query Studio.
The Datamart is divided into a "simple Datamart" and an "advanced Datamart". The simple DM contains the elements that most users use for reports; the advanced DM contains all the facts and dimensions for every element. Here we'll create the report using the simple DM to show you how easy it is.
Expand the Simple DM and do the following:
Click and drag Business Unit to the palette.
Click and drag the Application element to the palette. You see the applications line up with their proper business units.
Click and drag Tier over to the palette to organize the storage usage by tier.
Click and drag "Provisioned Raw by GB". (You can select megabytes or terabytes as well as gigabytes; I've selected GB because this is from a volume and application perspective.)
To calculate cost, we need to add the "Tier Cost" to the report.
Click and drag the "Tier Cost" element over and place it between the Provisioned Raw and Tier columns.
To filter out any storage without an associated tier cost, right-click the heading of the Tier Cost column and select Filter (see below for reference):
o Select ldquoShow only followingrdquo (default)
o Select ldquoMissing valuesrdquo to expand it
o Select ldquoLeave out missing valuesrdquo
o Select OK
See the results below.
Now let's calculate the total cost of usage by GB per application.
Hold the Control key and highlight the "Provisioned Capacity" and Tier Cost columns until they show yellow.
Select the green Calculation icon at the top of the edit icons, or right-click the columns and select "Calculate".
In the calculation window, select multiplication, title the new column "Cost for Storage", and click Insert. It creates a new column and completes the calculation.
To format the column, right-click the new column and select Format Data.
Select currency, the number of decimal places (usually 0), and the 1000s separator, and click OK. See how the column is formatted now.
Double-click the "Title" on the report and re-title the report "Chargeback by Application and BU".
You don't really need the Tier Cost column now, so you can delete it by right-clicking the column and selecting Delete.
This is a good raw report, but let's make it more useful.
To group storage cost by business unit and application:
Select the Business Unit column (it turns yellow) and select the Group By icon on the top line.
You see the report reformat itself into cost by application by business unit.
Click the "Save As" icon and save the report to the public folders.
Further Editing
You can go back and further edit the report like this.
Let's filter out all the N/A values in the BU and Application columns. You have to do this one column at a time.
Right-click the BU column and select Filter.
In the filter dialog window, select "Do not show the following (NOT)" from the "Condition" dropdown.
Select N/A and click OK.
Do the same for the Application column.
Then save the report again.
As you see, you now have a better-quality report.
To exit Query Studio, click the "Return" icon at the top right corner of the screen.
8 SCHEDULING REPORTS FOR DISTRIBUTION
OK, now that the report is saved, let's schedule it for running and distribution. You can schedule any of the built-in reports in OnCommand Insight.
Go to the chargeback report we just created (you should be looking at where you saved it...).
Select the Schedule icon on the right-hand side, where you can set the properties.
As you see on the right, you can schedule start and finish dates.
You can also send this report one time only by clicking Disable.
Set the schedule options for weekly, daily, monthly, and so on. Schedule this report to run and send it to yourself at 3 p.m. every Tuesday until Feb 1, 2012. As you can see, you can schedule biweekly, several times a week, or several times a day, or set it up by month, by year, and even by trigger. Lots of options.
There are a lot of options for report format. The default is HTML, but we can override it by clicking and choosing from PDF, Excel, XML, CSV, and so on.
For delivery, we can email it, save it, or print the report to a specific printer. You can send the report via e-mail to users, distribution lists, and so on, and include a link to the report or attach it directly to the email. NOTE: Recipients must be able to log into the OnCommand DWH to access the link.
When you are done click OK and the schedule is set
9 ENDING COMMENTS AND FEEDBACK
I hope this lab was of value to you. Your feedback is important to the quality of this lab document. Please provide feedback to Dave Collins at davec@netapp.com.
NetApp provides no representations or warranties regarding the accuracy, reliability, or serviceability of any information or recommendations provided in this publication, or with respect to any results that may be obtained by the use of the information or observance of any recommendations provided herein. The information in this document is distributed AS IS, and the use of this information or the implementation of any recommendations or techniques herein is a customer's responsibility and depends on the customer's ability to evaluate and integrate them into the customer's operational environment. This document and the information contained herein may be used solely in connection with the NetApp products discussed in this document.
© 2012 NetApp, Inc. All rights reserved. No portions of this document may be reproduced without prior written consent of NetApp, Inc. Specifications are subject to change without notice. NetApp, the NetApp logo, Go further, faster, xxx, and xxx are trademarks or registered trademarks of NetApp, Inc. in the United States and/or other countries. All other brands or products are trademarks or registered trademarks of their respective holders and should be treated as such. TR-XXX-XX
Select the Show Violations Impact icon to view all the violations in context by business entity, application, host, virtual machine, datacenter, and so on.
Expand the Impacted Business Entity explorer and drill down to Earth Thermal Tracking
Expand and select Disk Utilization
Sort the Description column descending. Now you can see Element, Description, Severity, and Violation Type.
Select the top violation element, called Disk DISK-14 of Storage Virtualizer.
In the Impact Details microview, toggle the Host, Virtual Machines, Applications and Business Entities, and Storage icons to view the details of the impact of the violations. Here we see the impact on one application, called City Limits, owned by one business entity (Green Corp > Alternate Energy > Geothermal > Earth Thermal Tracking), on one host. However, 10 virtual machines are affected by this violation from one array.
The chart in the Violation Event microview shows the history and trending of the utilization on this one disk over time. From here we can analyze the performance details, as we'll see later in this demo.
REVIEW
What are the categories that show the impact of violations?
What business entity is impacted by these violations?
What is the utilization of this disk?
Which hosts are being affected by this violation?
Which VMs are being affected by the violation?
We'll do some troubleshooting using these violations and Analyze Performance later.
34 PORT BALANCE VIOLATIONS
Let's take a look at port balance violations. These are violations showing imbalance in SAN traffic across hosts, arrays, and switches; they are not performance-related violations.
Using either the Navigation pane or the dropdown menu, open Assurance.
Select Port Balance Violations
Group by Type, then Device.
Expand Hosts and sort the Device column ascending
Select the device Host nj_exch002. Note that this host has a balance index of 81; that reflects the difference in the distribution of traffic (the load) between the HBAs on this host. Any index over 50 indicates significantly unbalanced ports on a device.
Select the Switch Port Performance microview. Note that over 88% of the traffic is going across one HBA and only 11% across the other. A failure on the heavily used HBA could choke that application. This could indicate that port-balancing software is misconfigured on this server, or not installed at all.
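The balance-index idea, how far the traffic distribution deviates from an even split across a device's ports, can be sketched as follows. The formula below is an assumption for illustration only, not OnCommand Insight's documented calculation, so its numbers will not match the index the product reports.

```python
# Illustrative balance index: spread between the busiest and quietest
# port's share of total traffic, in percentage points. 0 = even split;
# large values indicate a significantly unbalanced device.
# NOT OnCommand Insight's actual formula; an assumption for illustration.
def balance_index(port_traffic):
    total = sum(port_traffic)
    if total == 0 or len(port_traffic) < 2:
        return 0.0
    shares = [100.0 * t / total for t in port_traffic]
    return max(shares) - min(shares)

# Two HBAs carrying roughly 88% and 11% of the traffic, as in the example host
print(round(balance_index([88, 11]), 1))  # → 77.8
```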
Now collapse the Hosts and expand the Storage devices.
Select various storage devices and view the traffic distribution in the Switch Port Performance microview to understand the balance across the storage ports.
These port balance violations provide valuable data on how your environment is configured and optimized. They allow you to quickly determine where you need to optimize your configurations based on actual usage. Note that these are balance violations within each device, not necessarily traffic-related performance violations; we'll look at performance a bit later.
35 DISK UTILIZATION VIOLATIONS
We looked Disk Utilization Violations in the Violations Browser a few minutes ago because we got alerted to a violation But if you didnt look at the error you can go directly to disk utilization violations here and troubleshoot your issues the similar to how we did in the Violations Browser The difference is the Violations Browser breaks down the violations by how it impacts your Business Entities Applications Data Centers etc so you can troubleshoot by business priorities while Disk Utilization Violations lets you easily see your most critical utilization issues and troubleshoot from the disk utilization violation back to the hosts You can also add columns to show applications and Business Entities if you want
Let's take a look at how you can use Disk Utilization Violations to quickly identify and drill down to where your issue is
Select Disk Utilization Violations from the Assurance Menu
Sort the Utilization column descending to bring your heaviest utilization to the top
Here you see the utilization of each disk that exceeded the Disk Utilization threshold we set earlier in our Global Policy. For each violation you see the disk, the array, the hosts that access the disk, the date and time the violation occurred, and the percentage of utilization, as well as IOPS and throughput.
Select the disk with the highest utilization
Now select the Volume Usage of Disk microview to get details on volume usage and performance
Sort the Disk IOPS descending and select the top Volume Usage of Disk. Here I see the volume with the highest usage, along with the disk throughput and percentage info by volume and host.
(Screenshot callouts: Exceeded Threshold; Host with highest IOPS; Switch Traffic OK)
Select the Switch Port Performance microview. I see that my load appears to be balanced (Distribution column) across the storage ports, so that's most likely not a SAN or network configuration issue.
Since this disk did cause a utilization violation, I can identify possible host candidates that may be causing the high utilization on the disk, OR I can see that the disk may have too many volumes carved from it and I may need to spread that load out across more disks.
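The drill-down we just walked through (violations sorted by utilization, then the Volume Usage of Disk microview) can be sketched in Python. The record layout and field names here are invented for illustration, not OCI's actual schema:

```python
# Hypothetical disk-utilization violation records
violations = [
    {"disk": "d1", "utilization": 92.0},
    {"disk": "d2", "utilization": 71.5},
]

# Hypothetical "Volume Usage of Disk" data: (volume, host, IOPS) per disk
volume_usage = {
    "d1": [("vol_a", "host1", 4200), ("vol_b", "host2", 900)],
    "d2": [("vol_c", "host3", 1500)],
}

# Sorting the Utilization column descending = picking the worst disk
worst = max(violations, key=lambda v: v["utilization"])

# List the volumes carved from that disk, busiest first, to find the
# host candidates that may be driving the utilization
for vol, host, iops in sorted(volume_usage[worst["disk"]],
                              key=lambda r: r[2], reverse=True):
    print(vol, host, iops)
```

The point of the sketch: the GUI workflow is just a sort plus a per-disk lookup, which is why it narrows the problem so quickly.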
4 PERFORMANCE
OnCommand Insight provides performance information from end to end. This is different from the violations we discussed above in that it provides pure performance information for Volumes, Internal Volumes (FlexVol volumes), Storage Pools (aggregates), and Disks. Performance also covers VMs, Switches, ESX, Hyper-V, VMDKs, and Datastores. From here we can troubleshoot congestion, contention, and bottlenecks; identify heavily used storage pools, volumes, disks, and SAN ports; find possible candidates for physical-to-virtual host virtualization; and optimize your storage and tiering.
From the Navigation Pane or the dropdown menu expand the Performance menu
Here we see that OnCommand Insight collects and shows you storage performance, switch performance, datastore performance, VM performance, and even application performance as it relates to storage performance
4.1 STORAGE PERFORMANCE
From the Performance menu select Storage Performance
Sort Top Volume IOPS column descending (if not already done)
Select the array called Sym-0000500743… in the main view (should be near the top)
Use Scroll Bars to see more performance info in all windows
Use the horizontal slide bar in the main view to see the volume response times and IOPS, as well as the disk utilization and IOPS columns (far right). Notice there is no Internal Volume performance information, because the EMC Symmetrix does not contain any. We'll look at a NetApp array shortly to see Internal Volume (FlexVol) performance.
Now in the Main View, notice the column called Volume Partial RW. This indicates there are volumes on that array that are misaligned (we'll see more detail later).
Select the microviews at the bottom to show details of disk performance and volume performance. Which microviews did you open (hint: view below)? Notice this provides detailed throughput, IOPS, and response times at the volume and disk level.
Close the Disk Performance microview
Select the column customize icon in the header of the Volume Performance microview
Use the vertical scroll bar to view all the columns that can be added or removed from this report
Select the Partial RW and Storage columns and click OK. This adds columns to the Volume Performance report showing you each volume on each array that is misaligned. (Note: you can get a complete list of all your misaligned volumes by selecting all the arrays in the main view above.) Additionally, you can group the volumes by storage to make it easier to view all the misaligned volumes across your entire enterprise by array (see figure below).
Now select the Symmetrix-FAST array and toggle the Chart microview on. Here you see OnCommand Insight showing EMC FAST auto-tiering. You can also see the NetApp Hybrid Aggregates by selecting the NetApp array. We can also chart this performance over time.
(Screenshot callouts: Partial Read/Write indicates volume misalignment; notice FAST-T volume performance)
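The Partial R/W drill-down above amounts to a simple filter-and-group over volume records: any volume with partial (block-boundary-crossing) reads/writes is a misalignment suspect. A hedged Python sketch with invented data (the field names are not OCI's real schema):

```python
from collections import defaultdict

# Hypothetical volume records with a Partial R/W counter
volumes = [
    {"array": "Sym-1", "volume": "v001", "partial_rw": 0},
    {"array": "Sym-1", "volume": "v002", "partial_rw": 1250},
    {"array": "FAS-2", "volume": "v101", "partial_rw": 87},
]

# Nonzero Partial R/W -> flag the volume as potentially misaligned
misaligned = [v for v in volumes if v["partial_rw"] > 0]

# Grouping by storage array mirrors the "group volumes by storage"
# step, giving an enterprise-wide misalignment view per array
by_array = defaultdict(list)
for v in misaligned:
    by_array[v["array"]].append(v["volume"])
print(dict(by_array))
```

Selecting all arrays in the main view corresponds to running this over the whole inventory rather than one array at a time.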
OnCommand Insight provides complete end-to-end performance views through virtualized storage. Let's take a look.
Select the storage array called Virtualizer from the main view (Note: this is a V-Series machine, but OCI provides the same visibility through other virtualizers as well)
Toggle on the Virtual Machine Performance microview
Toggle on the Backend Volume Performance and Datastore Performance microviews
Use the slide bars at the bottom of the microviews to see more of the performance columns in each view (Whoops… there is a red mark in the Latency column. We will analyze this later.)
I know this is a bit busy, but I wanted to demonstrate that you can have deep performance visibility from the VM, through the Datastore, to the frontend virtualizer array, and through the backend volumes. You can also drill performance down to the disks on that backend array, and you can select the Switch Port Performance microview to visualize the performance on the SAN. So you can see very deep performance information from end to end. We will "analyze" this performance end to end a bit later.
4.2 SAN PERFORMANCE (SWITCH PORT PERFORMANCE)
Switch performance is the actual performance on the SAN at the switch
Select Switch Port Performance from the Performance menu. OnCommand Insight knows whether the switches are connected to arrays or hosts, so it shows you the performance in context of the host or array instead of from the switch perspective.
Using the dropdown at the top of the table, group the main view by "Connected Device Type then Name"
Using the dropdown next to it, set the timeframe to "Last Week" and hit the refresh icon to the right
Sort the Distribution column Descending (arrow pointing down)
Expand Hosts
Expand hosts ny_ora1 and exchange_ny1
If you look at the Value and Distribution columns, you can see how HBAs are balanced on these hosts. On host ny_ora1 you see three HBAs that are balanced very well. But looking at host exchange_ny1, you see that one of your 2 HBAs has over 95% of the traffic load on it, while the other one has less than 5% of the traffic. So you can see an imbalance of the load across your HBAs. Perhaps the multipath software is not configured correctly, is not working, or is installed but not turned on. However, also look at the fact that one HBA is 4Gb and the other is 2Gb. The admins may have purposely configured this host's traffic to compensate for the slower HBA…
Select the Port Performance Distribution and Port Performance microviews to view this analysis over time
Select exchange_ny1 from the main view above. View the performance and distribution of both HBAs. If you select one or the other, the performance and distribution charts change to show you the details of what you've selected. This behavior is the same throughout the OnCommand Insight GUI.
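The 4Gb-vs-2Gb observation above suggests a useful check: compare each HBA's actual share of traffic to the share you would expect if load were proportional to link speed. A small illustrative sketch (the HBA names and numbers are invented):

```python
# Hypothetical host with one 4Gb and one 2Gb HBA; "traffic" is each
# HBA's percentage of the host's total SAN traffic
hbas = [
    {"name": "hba0", "speed_gb": 4, "traffic": 95.0},
    {"name": "hba1", "speed_gb": 2, "traffic": 5.0},
]

total_speed = sum(h["speed_gb"] for h in hbas)
for h in hbas:
    # Share you'd expect if load simply followed link speed (4:2 -> 66.7/33.3)
    expected = 100.0 * h["speed_gb"] / total_speed
    h["skew"] = h["traffic"] - expected
    print(h["name"], "expected", round(expected, 1), "actual", h["traffic"])
```

Here hba0 carries roughly 28 percentage points more than its speed alone would explain, so a 95/5 split is not just the 4Gb-vs-2Gb difference; something in the multipath configuration is worth investigating.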
4.3 CANDIDATES FOR HOST VIRTUALIZATION BASED ON ACTUAL PERFORMANCE
This performance view from the host SAN perspective shows you which are the busiest servers and which are candidates for virtualization
Toggle off the 2 performance charts
Collapse the expanded columns using the Collapse All Groups icon on top of the main view
Expand Hosts again Notice your busiest servers are at the top of the list
Use the vertical slide bar to go to the bottom of the host list to see your least busy hosts. As you see here, there are many hosts near the bottom that have hardly any traffic. Note: if you have a virtualization project going on, you can very quickly isolate which physical hosts don't have much traffic to their applications and conduct your due diligence on those applications for possible relocation to a VM environment.
You can also use the same information here to choose which ESX hosts are good candidates to move those applications based on how much traffic they are generating on the SAN
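The triage described in this section, quiet physical hosts as P2V candidates and lightly loaded ESX hosts as targets, can be sketched as follows. The host names, traffic numbers, and the "quiet" cutoff are all invented for illustration:

```python
# Hypothetical host list: (name, type, SAN traffic in IOPS)
hosts = [
    ("ny_ora1", "physical", 9800),
    ("nj_file1", "physical", 120),
    ("dc_web3", "physical", 45),
    ("esx1", "esx", 7200),
    ("esx2", "esx", 800),
]

QUIET = 500  # hypothetical "hardly any traffic" threshold

# Physical hosts below the cutoff are P2V candidates, quietest first
candidates = sorted((h for h in hosts if h[1] == "physical" and h[2] < QUIET),
                    key=lambda h: h[2])

# ESX hosts sorted by load: the least-loaded is the natural target
targets = sorted((h for h in hosts if h[1] == "esx"), key=lambda h: h[2])

print([h[0] for h in candidates])
print("suggested target:", targets[0][0])
```

This is only a first cut: as the guide says, each candidate still needs due diligence on the application before relocation.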
4.4 STORAGE ARRAY PERFORMANCE BASED ON SAN TRAFFIC
We use the same logic and methods to optimize the traffic across the storage ports of the arrays
Collapse the Hosts section and expand the Storage section
You can see the busiest arrays at the top
Expand storage array XP 1024 to see the traffic flow through the storage ports. In this case over 80% of the traffic is going across two of the six ports on the storage array. Not very well balanced. You can rebalance this traffic, OR, using this information, you can select a lesser-used storage port to provision your NEXT Tier 1 application to. This helps you intelligently provision and optimize your environment using real traffic analysis.
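The provisioning decision described above is essentially "pick the least-used port". A minimal sketch with invented port names and statistics:

```python
# Hypothetical per-port share of array traffic (percent)
ports = {"CL1-A": 41.0, "CL1-B": 39.5, "CL2-A": 8.0,
         "CL2-B": 6.5, "CL3-A": 3.0, "CL3-B": 2.0}

# Intelligent provisioning: place the next workload on the quietest port
least_used = min(ports, key=ports.get)
print("provision next application on port", least_used)

# Quick check for the "80% across two ports" situation from the lab
top_two = sum(sorted(ports.values(), reverse=True)[:2])
print("top two ports carry", top_two, "% of traffic")
```

The same two-line analysis works for rebalancing: if the top two ports carry most of the traffic, some volumes should be re-homed to the quiet ports.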
4.5 STORAGE TIERING AND ANALYSIS
Just as you did with the hosts, you can see the storage arrays that are NOT so busy
Scroll to the bottom of the Storage Array list
There are several expensive tier 1 Symmetrix and other arrays at the bottom of this list that have very little traffic accessing them. These arrays may have lots of data on them, but nobody's using it. Armed with this information, you could take a look at the application data on these expensive tier 1 arrays and move the applications to less expensive tier 2 or tier 3 arrays, OR archive it. Then you can decommission or repurpose these expensive arrays (LOTS of ROI potential here).
4.6 SWITCH ISL TRAFFIC VISIBILITY AND OPTIMIZATION
OnCommand Insight shows you only the ISLs (Inter-Switch Links) under the switches category
Collapse the Storage category in the main view and expand the Switch category
Expand Switch 78 and hcis300
As we saw with hosts and arrays, we can see exactly how well balanced the traffic is across the ISLs. Switch hcis300 is well balanced, but on Switch 78 we see that 90% of the traffic is going across one switch link and only 9% of the traffic across the other. If this is a trunk, it is severely out of balance.
We also see which are the busiest and least busy switches This allows us to balance out (optimize) our environment as well as weed out the least busy switches
4.7 VIRTUAL MACHINE AND DATA STORE PERFORMANCE: TROUBLESHOOTING END-TO-END PERFORMANCE ISSUES USING "ANALYZE PERFORMANCE"
Let's put all this performance information to good use
USE CASE: I may have gotten a call from a user complaining that the application on VM-70 is running slow, or I may have received an alert from a threshold being breached. Let's troubleshoot the problem.
Select Virtual Machine Performance
Select Custom from the "Timeframe" dropdown menu next to the grouping menu, and enter the dates January 1, 2012 through now
Then hit the green recycle button next to the dropdown
Sort the VM Disk Top Latency column descending to get the longest latency at the top
Here we see that VM-70 does not, in fact, appear to have any performance issues, but we do see very high CPU, Memory, and Data Store Latency on VM-60 and VM-61
Look at column 2. The common factor between VM-70 (the user complaint) and VM-60 is DS-30.
Open the Datastore Performance microview to validate the high latency time
Right click on VM-60 and select Analyze Performance
This opens an analysis of everything associated with VM-60 and DS-30
See the tabs across the top of the window Each of these tabs provides in-depth visibility into performance within each category
Selecting the Disk tab, I see that although I have a few high "top" utilizations, overall utilization and IOPS are relatively low, so I can rule out a hot disk issue.
Select the Volumes tab and the Internal Volumes tab. I see there are some relatively high Top Response times but still very low IOPS, which tells me other factors are affecting response time and the slowness of the application on VM-70.
Select Backend Volumes. We see the storage is virtualized, and we can see the performance on the backend volumes here. I see some possibly higher IOPS, but still no glaring issues in performance.
To make sure I don't have a SAN problem, I select the Switch Performance tab. It shows an imbalance between the 2Gb HBAs on ESX1 (where VM-60 and VM-70 are) and a potential optimization or outage issue, but no gridlock.
Select the Hosts tab. This tab shows me that host ESX1 is the same host that holds VM-60 and VM-70. VM-60 appears to be causing very high CPU and memory usage, which is causing contention with time sharing during disk access, thus creating high Disk Latency. But the Disk IOPS are still very low.
Deduce that VM-60 is probably not sized right for the application that is driving it hard. This is probably what's causing the disk latency issue, so the chances of a disk issue are slim.
4.8 VM PERFORMANCE
VM Performance helps you troubleshoot the same scenarios. Here you can understand what's going on in the whole environment.
Select Virtual Machine Performance
Sort the column Top Disk Latency in descending order so the largest latency rises to the top. In this case VM-61 is chewing up a lot of memory and a lot of CPU time but driving low Disk IOPS. The VM appears to be causing the latency issues.
Select VM-61 You can open a microview and see the VMDK performance as well
Add chart microview
You can also break it out by volume performance and datastore performance, giving you a more holistic picture of the environment and helping you troubleshoot to resolution.
The takeaway is you can troubleshoot performance issues from many different angles and go in many different directions to quickly narrow down the problem
4.9 APPLICATION AND HOST PERFORMANCE
You can add your applications and hosts to any of these performance views to help you understand how your performance is affecting your applications. That is important to the business customer. You can drill down and understand where the performance issue is through visibility from the application all the way to the disks.
Scroll down to ESX1
Use the horizontal slide bars in the main and microviews to see performance info
OnCommand Insight shows you performance from the host perspective all the way back to the storage, but remember that it does not have agents on the hosts, so it cannot show you the details of the performance on the host itself
Review questions:
What is the value of Analyze Performance?
In which areas can we view performance metrics under Analyze Performance?
5 PLANNING TOOLS
5.1 TASK AND ACTION PLANNING AND VALIDATION
OnCommand Insight has 2 planning tools to help you plan, validate, and monitor changes in your environment. One is a change management tool and the other is a migration tool for switches only.
The change management tool (or What-IF) helps you to create tasks and actions within those tasks using a wizard It helps you logically configure changes that you need to make test and validate those changes before you make them and monitor the progress of changes as you make them This significantly reduces your risk when making changes because you can pretest them before you make any actual changes in your environment
NOTE Remember OnCommand Insight is a READ ONLY tool so it does not perform any active tasks Use it in the planning validating and execution monitoring of your change management
Select Planning Menu
Select Plans to access the tool
Select the task ID oadmin 01082007 – Replace HBA Clearcase1
Notice the Actions list for the task. These are generated by you to help you logically and accurately list out the tasks.
To add more actions, simply right-click in the action area and select "Add Action"
In the new action window, scroll down and select the action you want to perform. You can add a description and other parameters, then select OK.
Then you can pre-validate the actions to ensure you know the results of each action BEFORE you actually perform the task. To do this, right-click the task and select Validate task.
As you see below, OnCommand Insight validates each action against the current configuration in your environment to show what has been completed correctly (GREEN CHECKMARK), what is not completed (BLANK BOX), and what is not completed correctly (RED X).
When you build the action list, OnCommand Insight automatically compares your planned changes to your existing environment and anticipates any future violations that could occur if you made these changes without correcting planned actions, OR violations that already exist in your environment.
Once you complete creating your list of action items, you can right-click and validate the actions as many times as you want until completed. OnCommand Insight validates every one of these actions. It shows you whether the actions are complete, done incorrectly, or not completed at all. It gives you a preview into potential issues before you make the changes, thus lowering your risk.
5.2 SWITCH MIGRATION TOOL
The migration tool provides you with instantaneous visibility into all of the environment and business entities that will be affected by a migration to new or updated switches. Say you want to just update the firmware on a switch. What if it goes down in the middle of the upgrade? What does it affect in your environment? Knowing this ahead of time can reduce your risk by giving you the complete picture of who and what will be affected by the interruption.
The Migration tool allows you to tell OnCommand Insight which switches you want to upgrade or replace. Because OnCommand Insight knows all the hosts, storage arrays, volumes, business units, and applications that are affected by this change, it can provide you with the current violations as well as the future violations that will occur when the switches are pulled out. This enables you to validate the total impact of the changes you want to make BEFORE you make them, so you can reduce your risk by fixing issues before they occur.
NOTE Remember OnCommand Insight is a READ ONLY tool so it does not perform any migration tasks Use it in the planning and execution monitoring of your migration
Under the Planning menu, select Migrations. This shows you the migration tasks already created and the existing impact of proposed changes on your business entities.
To add a new task right click on the task area and select Add Task
Complete the task details above and click Next to select the switch(es) to migrate
Select the switches to be updated or replaced and click Finish
Select the new task in the main screen and use the microviews to provide you with affected paths, impact, and quality assurance views
Using this information you can speed up switch migrations, because it cuts the due diligence time and lowers your risk: you know the impacts before you take any actions.
6 DATA WAREHOUSE
6.1 INTRODUCTION AND OVERVIEW
Let's introduce you to the data warehouse. We'll talk about the Datamarts and navigation, then we'll go into the reports, and we'll finish by showing you how to create ad-hoc reports using Query Studio.
The data warehouse is made up of several Datamarts. Datamarts are sets of data that relate to each other.
Open a browser and go to http://localhost:8080/reporting
Log on using admin / admin123
If you receive this page, uncheck "show this page…" and select My Home
Data warehouse (DWH) Home Page Public Folders
The data warehouse has several built-in Datamarts throughout the data warehouse (DWH). Above you see the 3 primary Datamarts, called the Chargeback Datamart, Inventory Datamart, and Storage Efficiency Datamart. Additionally, we have two folders which contain other Datamarts for Capacity and Performance.
Select the Capacity 6.3 folder
As you can see, there are other Capacity-related Datamarts, including the Internal Volume, Volume, Storage and Storage Pool, and VM Capacity Datamarts. Datamarts provide you with easy-to-use data elements related to those specific categories, making it easier for you to use the existing reports and, more importantly, helping you create your own custom reports using the drag-and-drop technology we'll show later in this lab.
Select the Storage Capacity Datamart
(Screenshot callouts: Datamarts and folders; Public Folders)
There are 4 folders located within EVERY Datamart. Most built-in reports are in the Reports folder. Any custom reports you create MUST be saved in the Customer Report or Customer Dashboard folders in order to preserve them during upgrades.
Select Dashboards (notice the BREADCRUMBS to help you navigate)
Which dashboards are located in the folder
6.2 PLAN - CAPACITY FORECAST DASHBOARD
The data warehouse has over 200 built-in dashboards and reports. Let's take a look at a few.
The capacity forecast dashboard provides a history of how storage has been used, as well as trends and forecasts into the future. It shows this by data center and by tier.
Select the Capacity Dashboard. This may take a bit of time to paint, so be patient.
The capacity forecast dashboard provides you with trending and forecasting of your capacity across your entire environment. NOTE: your data in the picture may vary depending on the demo DB you are using and the date (because it's a trending chart).
While we are at it, let's also stage the Tiering Dashboard in a new window by holding the Shift key and selecting the Tiering Dashboard, so we can discuss it as well in a few minutes.
(Screenshot callouts: Folders; Dashboard; Reports; Navigation; Breadcrumbs)
When it first opens, you see in the upper left the Capacity Consumption Forecast report by Datacenter and Tiers. The initial view shows how much storage is left in each datacenter, by tier, before it reaches 80% of capacity (adjustable by the user). The graph on the right depicts the usage trending and forecasting over time. The "Reset Selection" button resets the graphic to show storage trending across the entire enterprise.
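Conceptually, the forecast chart fits a trend to historical usage and projects when it crosses the warning line. Here is a minimal Python sketch of that idea using a least-squares linear trend; the sample data is invented, and OCI's actual forecasting model may differ:

```python
def months_until_threshold(samples_tb, total_tb, threshold=0.80):
    """Fit a linear trend to monthly used-capacity samples (TB) and
    return how many months from the last sample until usage crosses
    threshold * total_tb, or None if usage is flat or shrinking."""
    n = len(samples_tb)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(samples_tb) / n
    # Least-squares slope in TB per month
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, samples_tb))
             / sum((x - mean_x) ** 2 for x in xs))
    if slope <= 0:
        return None
    intercept = mean_y - slope * mean_x
    limit = threshold * total_tb
    return (limit - intercept) / slope - (n - 1)  # months from "now"

# 6 months of growth toward a 100 TB tier: 5 TB/month, 80 TB line
print(months_until_threshold([50, 55, 60, 65, 70, 75], total_tb=100))  # 1.0
```

With 75 TB used and growing 5 TB/month, the 80% line is one month away, which is exactly the kind of number the dashboard surfaces per datacenter and tier.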
Select the Tokyo Gold-Fast block on the matrix. Notice the graph at the right changes to reflect the storage consumption trending and forecasting for that tier at that datacenter.
Reset the Storage Capacity Trend chart by clicking Reset Selection next to the matrix chart. The chart on the right will show the trending and forecasting for the entire enterprise.
Scroll down the dashboard to view the list of reports on the right side. Each of the dashboards has a list of related reports on the lower right-hand side. You can select from any number of different reports to provide the detailed information that you need.
The dashboard also contains some dial graphics showing you storage consumption and capacity in your enterprise and each datacenter
Continuing down the left side of the dashboard, these charts show you business-level storage consumption by business entities. Here we can drill down to see usage by Tenant, Line of Business, Business Unit, and Project.
Right-click in this graphic and you can drill down to view storage usage by line of business, drill again to business unit, and then by project
As you can see, you get really detailed information on consumption by your business entities, from Tenant, LOB, Business Unit, and Project to Application, in a very quick form
6.3 TIER DASHBOARD
Let's take a look at the Tier Dashboard that we opened up a few minutes ago, by selecting it from the tabs at the bottom of your Windows screen
Note Your data may vary depending on the database used for this demo
This dashboard gives us a different perspective on how storage is growing and how it is being used. As you see, it looks like the gold tier has remained relatively stable over the past few months, while gold-fast storage, which is more expensive storage, has grown considerably over the past couple of months. This tells you how your tiering initiatives are progressing. Bronze, which has hardly grown at all, could be an indication that we're spending too much money on storage. You might want to review your storage usage using OnCommand Insight to see how the storage is being consumed and by whom.
Scroll down. Let's look a little closer. OnCommand Insight shows storage usage by business units, applications, and tier. This enables you to understand how storage is being used. You can also view it by data center, tier, and business entity.
As we did in the last report, you can right-click and drill down to look at consumption by tenant, line of business, business unit, project, and application. You can understand how your data is being consumed at multiple levels and from multiple aspects.
Select the "Return" icon at the top right of the Tier Dashboard to return to the folder
There is a new Storage Tier Report located in the Storage and Storage Pool Datamart. Let's take a quick look at it.
Use the breadcrumbs to navigate back to the Capacity 6.3 folder
Then select the Storage and Storage Pool Capacity Datamart and the Reports folder
Next select the Storage Capacity By Tier Report to view the report below. This report shows your capacity by tier and how it trends over time. It also provides detail and summary sections at the bottom showing each array, its tiers, how much capacity is used, and the percentage (lots of information in a single report).
6.4 ACCOUNTABILITY AND COST AWARENESS
The standard data warehouse chargeback reports are more about accountability than about chargeback. We'll show you this now. We'll also show you how to create your own powerful "custom" chargeback/showback reports using Business Insight Advanced later in this lab.
Select Public folders in the breadcrumbs on top left of the Data warehouse window
Select Chargeback Datamart
In the Chargeback Datamart, select the Reports folder to access various reports that show capacity and accountability
Select Capacity Accountability by Business Entity and Service Level Detail. Here you have the option to customize this report to your needs by selecting service levels, resource types, applications, and host and storage names. You also have the option of selecting the business entity, by using the dropdown to select any or all of the business entities and projects.
Select all in each category to give you a good representation of the in-depth reporting, then click Finish
The report provides a very detailed view of capacity utilization: the business entity, the application, the host it's running on, the storage array, the volume, and the actual provisioned and used storage. The report is grouped by business unit as well as application; this gives you a good representation of who's using what storage.
Note the scroll bar for scrolling on page 1; you can also use the Page Up/Page Down links at the bottom to go to page 2, etc.
Select the Return Icon in the upper right to return to the folder of reports
6.5 UNCHARGED STORAGE
You can also generate reports that help you understand what storage is NOT being accounted for
Select "Capacity Accountability by Uncharged Capacity per Internal Volume". This provides you with a complete listing, by array and volume, of how much storage is not being charged or accounted for.
You get FULL accountability of which storage is being accounted for and which storage is NOT being accounted for across the entire enterprise, regardless of storage vendor
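The uncharged-capacity idea reduces to summing any provisioned capacity that has no business entity assigned. A hedged sketch with invented records (the field layout is illustrative, not the Datamart's real schema):

```python
# Hypothetical internal-volume records: array, volume, GB, business entity
# (None means the capacity is not mapped to anyone, i.e. uncharged)
internal_volumes = [
    ("array1", "iv1", 500, "Finance"),
    ("array1", "iv2", 250, None),
    ("array2", "iv3", 800, "Engineering"),
    ("array2", "iv4", 1200, None),
]

# Total the unaccounted capacity per array
uncharged = {}
for array, vol, gb, entity in internal_volumes:
    if entity is None:
        uncharged[array] = uncharged.get(array, 0) + gb
print(uncharged)
```

Running this over the whole estate is what gives the vendor-independent "uncharged" view the report provides.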
6.6 IOPS VS CAPACITY REPORTING IN THE DATA WAREHOUSE
Let's look at performance versus capacity and orphaned storage by last access. This adds another dimension to how your storage is being used.
Open the Performance Datamart (hint: use the breadcrumbs to select Public Folders and then select the Performance Datamart)
Select the Internal Volume Daily Performance folder. This provides a really good pictorial view of how your storage is being used.
Select Reports and select Allocated used internal volume Count by IOPS Ranges. This provides a capacity-versus-IOPS report, which is very interesting.
Select Last Year time period
Select All Storage models and Tiers and Click Finish
Select all arrays and all tiers to give you a full view of how your storage is being used (or not being used…)
Looking at the results, remember this is storage accessed over the past year. The resulting report shows you all the storage that has (or has not) been accessed over the past year.
As you see from the first bar, there are over 7,300 volumes that have not been accessed in the past year. If we look at it in terms of size, over 3.4 PB has had zero access in the past year. Note: this is actual customer data, but the names have been sanitized.
You can see how impactful this is. There is over 3.4 PB of storage that has had zero use for a year. This information enables you to start making business decisions on the storage and to better understand how it's being used, so you can reclaim and repurpose some of that storage (talk about ROI!).
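The "Count by IOPS Ranges" view is essentially a histogram: bucket each volume by its IOPS over the period and total the capacity per bucket, so the zero-IOPS bar (reclaimable storage) stands out. An illustrative sketch with invented data and bucket boundaries:

```python
# Hypothetical volumes: (name, max IOPS over the period, capacity in TB)
volumes = [("v1", 0, 2.0), ("v2", 0, 1.4), ("v3", 12, 0.5),
           ("v4", 480, 0.8), ("v5", 5200, 1.1)]

# Bucket upper bounds and labels (invented ranges for illustration)
buckets = [(0, "0"), (100, "1-100"), (1000, "101-1000"), (float("inf"), ">1000")]

def bucket_label(iops):
    for upper, label in buckets:
        if iops <= upper:
            return label

# Per bucket: (volume count, total TB)
summary = {}
for name, iops, tb in volumes:
    label = bucket_label(iops)
    count, cap = summary.get(label, (0, 0.0))
    summary[label] = (count + 1, cap + tb)
print(summary)
```

The "0" bucket in the summary is the report's headline number: how many volumes, and how much capacity, saw no I/O at all.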
6.7 DIGGING INTO THE DETAILS
These charts are really nice, but you need the details to effectively work on identification and recovery. OK, let's go look at the underlying details.
Go back to the Volume Daily Performance 6.3 folder and drill down to Reports (hint: it's in the Performance Datamart)
Select the Array Performance report. This gives you a complete breakdown of the performance for all storage, from the arrays all the way down to the volumes.
Select one year and set the IOPS parameter you want to filter on (I usually start at the default)
This report starts with the Orphan Summary
Select Page down to view the Storage array summary
(Screenshot callouts: over 7,300 volumes; over 3.4 PB; storage access for a year)
As you see, this is pretty high level. It shows the total amount of raw and allocated capacity in each storage device versus the total IOPS and the max IOPS actually used over the past year. This tells a very compelling story, but it's still high level.
Page down a few pages to reach the bottom of this section. You see a glossary of terms explaining the column headings.
Now continue to page down to the Host tables. These show you the hostname, the raw and allocated capacity by host, and the IOPS accessed over the past year. This is more detail than the Storage tables above.
Page down past the host tables and you are looking at the orphaned-volumes perspective. Here is a great deal of detail that you can use. These are all the volumes that have not been accessed in a full year. It shows you the array name, volume capacities, and hostname, as well as the applications and tiers, for everything that has not been accessed in the last year.
Page down to the "Volume by IOPS" tables (may be several pages down). These show you the storage array, volume, capacity, host, application, tier, and the max and total IOPS. So we can say it's a pretty well-rounded report that shows you actual usage (or lack thereof), so you can go reclaim the storage that is not used.
6.8 VM CAPACITY REPORTING
There are several different reports in the VM capacity Datamart
Navigate to the VM Capacity 6.3 Datamart
As you see we have several reports built-in here already
Select VM Capacity 63 and then navigate into the Reports folder
Select VM Capacity Summary
Select all so we see the VM capacity across the entire enterprise (spanning multiple vCenters).
The results show all the VMs, their capacity, the datastore, the actual capacity, the VM names, the provisioned storage, and the commit ratio of each VM across your entire environment. NOTE: I paged down to the bottom so you can see the total storage and commitment across your whole enterprise, plus a glossary of terms.
Select the "return" button in the upper right corner of the report (looks like a left-turn arrow).
Next, select the Inactive VMs report to show you VMs that have not been accessed in a defined period of time (default 60 days).
Set this time threshold and click Finish.
This is an excellent report showing you which VMs are powered off, how long they have been powered off, and how much capacity each one of those is holding that nobody else can use. It gives you all the details, including the datacenter, VM OS, ESX host, cluster, VMDK, and how long it's been powered off. Armed with this information, you can go recover these VMs and/or reclaim the storage.
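Conceptually, the filter this report applies is simple; here is a small Python sketch of it (the VM records and field names are invented for illustration — the real report pulls this data from the Data Warehouse):

```python
from datetime import date, timedelta

def inactive_vms(vms, today, threshold_days=60):
    """Keep VMs that have been powered off at least threshold_days (default 60)."""
    cutoff = today - timedelta(days=threshold_days)
    return [vm for vm in vms if vm["powered_off_since"] <= cutoff]

# Invented sample data:
vms = [
    {"name": "vm-old", "powered_off_since": date(2012, 1, 1)},
    {"name": "vm-recent", "powered_off_since": date(2012, 7, 20)},
]
stale = inactive_vms(vms, today=date(2012, 8, 1))  # only "vm-old" qualifies
```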
7 CREATE AD-HOC REPORT
Let's show you how easy it is to create custom reports in the data warehouse.
7.1 HOW TO CREATE A CUSTOM SHOWBACK/CHARGEBACK REPORT USING
BUSINESS INSIGHT ADVANCED
Below is a great example of the custom Chargeback or Showback report that you will create. It shows usage by Business Entity and Application, including the variable cost of VMs based on configuration, fixed overhead, and storage usage.
STEPS TO CREATE THIS REPORT
Watch a video on how to create this report. Note: You need a user name and password for this community. To obtain them, click the Become a Member link.
The OnCommand Insight Reporting Portal is accessed through http://<reporting-server>:8080/reporting
Enter User name and Password credentials.
From the Welcome page, select My home.
From the Launch menu (at the top right corner of the OnCommand Insight Reporting portal), select Business Insight Advanced.
From the list of all packages that appears, click on the Capacity <version> folder and then click on VM Capacity <version>.
Create a new report by selecting New from the dropdown in the upper left corner, or Create New if you are on the Business Insight Advanced landing page.
From the pre-defined report layouts in the New pop-up, choose List and click OK.
In the lower right pane, select the Source tab and expand Advanced Data Mart from the VM Capacity package.
From the Advanced Data Mart, expand Business Entity Hierarchy and Business Entity, and drag Tenant onto the report work area.
Collapse Advanced Data Mart and expand Simple Data Mart.
From Simple Data Mart, drag Application and place it on the report work area to the right of the Tenant column. (TIP: Make sure you place it on the blinking gray BAR on the right of the previous column or it will give you an error.)
Now we are going to drag multiple columns to the palette to save time building the report.
We will be reporting on the total number of processors (cores) and the memory that is configured for each VM. So let's grab the following elements from the VM Dimension under the Advanced Data Mart.
From Advanced Data Mart, expand VM Dimension.
Select the next columns IN THE FOLLOWING ORDER:
From Advanced Data Mart > VM Dimension, hold the control key and select the following columns (in order):
o VM Name
o Processors
o Memory
Click and drag VM Name and place it on the report work area to the right of the Application column. NOTE: All the columns should follow in the order you selected them, similar to the screenshot below (your data will differ, but the columns will be the same).
Now let's bring capacity information onto the report.
From Simple Data Mart, hold the control key and select the following columns (in order):
o Tier
o Tier Cost
o Provisioned Capacity (GB)
Click and drag the Tier column and place it on the report work area to the right of the Application column. NOTE: All the columns should follow in the order you selected them, similar to the screenshot below (your data will differ, but the columns will be the same).
To create a summary of cost per GB, hold the control key and select Tier Cost and Provisioned Capacity (GB).
Then right-click the Provisioned Capacity column, select Calculate, and select the multiplication calculation.
Business Insight Advanced has created a new column for you, completed the calculations, and put it in the report.
Next, let's format and re-title the column.
Right-click on the new column header and select Show Properties.
In the lower right corner, scroll down to the bottom of the properties box and select the ellipsis on the Data item name box. Change the name to Storage Cost and click OK.
Note the column heading is now Storage Cost.
Now select one of the numeric values in that column and select the Data Format ellipsis from the properties box in the lower right corner.
From the Data Format dialog box, select currency from the Format type dropdown.
As you see from the Properties dialog box, there are lots of options you can set to format the currency numbers in this column. The default is USD, so let's just click OK to set the default. You will see the column reformat to USD.
Here is our current report. Let's filter out storage that is NOT being charged.
Select any BLANK cell in the Tier Cost column and click on the filter icon in the top toolbar.
Select Exclude Null.
Here is our current report. Notice all the rows that had NO cost associated with those tiers are deleted, leaving you with only the storage that has charges associated with it. (TIP: in another report you can actually reverse the logic and show only storage that is NOT being charged as well...)
You can also format the Tier Cost column with USD currency if you want.
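The two steps above (the multiplication calculation and the Exclude Null filter) amount to the following, sketched here in Python with made-up rows purely for illustration:

```python
# Made-up sample rows; in the report these come from the Simple Data Mart.
rows = [
    {"vm": "vm01", "tier": "Tier 1", "tier_cost": 5.0, "provisioned_gb": 100},
    {"vm": "vm02", "tier": "Tier 2", "tier_cost": 2.0, "provisioned_gb": 250},
    {"vm": "vm03", "tier": "Uncharged", "tier_cost": None, "provisioned_gb": 50},
]

# "Exclude Null": keep only rows whose tier actually has a cost.
charged = [r for r in rows if r["tier_cost"] is not None]

# Storage Cost = Tier Cost ($/GB) x Provisioned Capacity (GB)
for r in charged:
    r["storage_cost"] = r["tier_cost"] * r["provisioned_gb"]
```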
OK, that was easy, but not complete. Let's add other cost factors into your chargeback report: the cost of VM service levels by configuration, and fixed overhead costs used by each application.
ADDING VARIABLE COSTS PER VM TO YOUR CHARGEBACK REPORT
Let's say the customer wants to charge per VM based on the number of CPUs and memory it's configured with. To do that, we first need to create a VM Service Level composed of the number of CPUs and memory configured for each VM, then allocate a cost per service level.
To create a VM Service Level, we are going to drop in a small conditional expression to build the service levels per VM. This is an easy example of the flexibility of Business Insight Advanced in creating reports. (DON'T panic: you can skip the conditional expression and just put a fixed cost on each VM if you want. See the Overhead example later on... but humor me here in this lab.)
Select the Tier column to mark where we want to insert the new columns.
Select the Toolbox tab at the lower right corner and double-click the Query Calculation icon.
In the Create Calculation dialog box, name the column VM Service Level, select Other Expression, and click OK.
In the Data Item Expression dialog box, copy and paste the following VM Service Level conditional expression into the Expression Definition box and select OK. (Note: if you are remoted into the OnCommand Insight server, you may have to create a text document on the OnCommand server desktop to paste this into prior to pasting it into the Expression box.)
Below is an example of the conditional expression that gives you the if-else condition for VM Service Level:
IF ([Processors] = 2 AND [Memory] < 2049)
THEN ('Bronze')
ELSE (IF ([Processors] = 2 AND [Memory] < 4097)
THEN ('Bronze_Platinum')
ELSE IF ([Processors] = 4 AND [Memory] < 8193)
THEN ('Silver')
ELSE IF ([Processors] = 4 AND [Memory] > 8193)
THEN ('Silver_Platinum')
ELSE IF ([Processors] = 6 AND [Memory] > 8191)
THEN ('Gold')
ELSE IF ([Processors] = 8 AND [Memory] > 16383)
THEN ('Gold_Platinum')
ELSE ('tbd'))
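If it helps to read the logic outside of Cognos expression syntax, here is the same classification sketched in Python (thresholds in MB, taken from the expression above; this is only an illustration, not a lab step):

```python
def vm_service_level(processors, memory_mb):
    """Mirror of the VM Service Level conditional expression above."""
    if processors == 2 and memory_mb < 2049:
        return "Bronze"
    elif processors == 2 and memory_mb < 4097:
        return "Bronze_Platinum"
    elif processors == 4 and memory_mb < 8193:
        return "Silver"
    elif processors == 4 and memory_mb > 8193:
        return "Silver_Platinum"
    elif processors == 6 and memory_mb > 8191:
        return "Gold"
    elif processors == 8 and memory_mb > 16383:
        return "Gold_Platinum"
    return "tbd"  # anything that matches no rule
```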
Business Insight Advanced will validate the conditional expression (nice to know if you got it right), create the column called VM Service Level, and populate it based on the query. (If you get an error, your conditional expression probably has a syntax or other error.)
You will see a new column added called VM Service Level, with the various service levels for each VM based on the number of CPUs and the memory each has. (At this point there may be duplicates in the list, but not to worry; we are not finished formatting or grouping the report.)
Next, let's add a column that calculates the cost per VM based on the service levels we just established.
Select the Toolbox tab at the lower right corner and double-click the Query Calculation icon.
In the Create Calculation dialog box, name the column Cost Per VM, select Other Expression, and click OK.
In the Data Item Expression dialog box, paste the conditional expression for Cost of VM (below) into the Expression Definition box and select OK.
Example of the conditional expression for Cost per VM:
IF ([VM Service Level] = 'Bronze') THEN (10)
ELSE (IF ([VM Service Level] = 'Bronze_Platinum') THEN (15)
ELSE IF ([VM Service Level] = 'Silver') THEN (20)
ELSE IF ([VM Service Level] = 'Silver_Platinum') THEN (25)
ELSE IF ([VM Service Level] = 'Gold') THEN (40)
ELSE IF ([VM Service Level] = 'Gold_Platinum') THEN (55)
ELSE (30))
You will see a new column added called Cost Per VM, with variable costs for each VM based on its service level.
Next, format the data in the Cost Per VM column to USD currency as you did above.
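The Cost Per VM expression is just a lookup with a default; in Python it would look like this (costs taken from the expression above, purely as an illustration):

```python
# Cost per VM by service level, mirroring the conditional expression above.
COST_PER_VM = {
    "Bronze": 10,
    "Bronze_Platinum": 15,
    "Silver": 20,
    "Silver_Platinum": 25,
    "Gold": 40,
    "Gold_Platinum": 55,
}

def cost_per_vm(service_level):
    # The trailing ELSE (30) catches any unmatched level, e.g. "tbd".
    return COST_PER_VM.get(service_level, 30)
```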
ADDING FIXED OVERHEAD COSTS TO YOUR CHARGEBACK REPORT
Let's say the customer has determined that the total cost for overhead (including items like heat/AC, floor space, power, rent, operations personnel, helpdesk, etc.) is $24 per VM. Let's create a column called Cost of Overhead and apply this fixed cost. (Note: you can do this for any fixed costs as well, rather than use SQL.)
Select the Toolbox tab at the lower right corner and double-click the Query Calculation icon.
In the Create Calculation dialog box, name the column Cost of Overhead, select Other Expression, and click OK.
In the Data Item Expression dialog box, enter a cost of 24 in the Expression Definition box and select OK.
You will see a new column added called Cost of Overhead, with 24 for each VM. (Note: at this point there may be duplicates in the list, but not to worry; we are not finished formatting or grouping the report.)
Next, format the data in the Cost of Overhead column to USD currency as you did above. Then drag the column header and drop it to the right of the Storage Cost column as shown below.
Subtotaling, naming, and saving the report
Now that we have a cost per VM, overhead, and the cost of storage usage by Tenant, Application, and VM, let's sum the total costs and finish formatting the report by Tenant and Application.
Hold the control key down and select a numeric cell in each of the Cost Per VM, Storage Cost, and Cost of Overhead columns. Right-click one of the numeric cells, select Calculate, and choose the add function for the three columns.
This will create a new column called "Cost Per VM + Storage Cost + Cost of Overhead" and calculate each row.
Now format the column for USD currency and retitle the column to "Total Cost of Services".
Name the report "Total Storage, VM, and Overhead Cost by Tenant and Application Chargeback (Showback)" by double-clicking the title area.
Now save it to the Customer Report folder using the same name.
FORMATTING AND GROUPING THE REPORT BY APPLICATION AND TENANT
We are not done yet. Now we need to format the report by grouping, subtotaling, and totaling by Tenant and Application.
Hold the control key down and select the Cost Per VM, Provisioned Capacity, Storage Cost, Cost of Overhead, and Total Cost of Services columns.
Select the Total icon from the Summary dropdown icon.
If you page down to the bottom of the report, you will see total columns. We'll clean up the summary rows in a minute.
Let's group the report by Tenant and Application:
Hold the control key down and select the Tenant and Application columns.
Select the Grouping icon from the top toolbar.
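Conceptually, the grouping and subtotaling you just configured is a group-by aggregation. A minimal Python sketch of the idea (rows are invented for illustration):

```python
from collections import defaultdict

# Invented rows standing in for the report's detail lines.
rows = [
    {"tenant": "Acme", "app": "Exchange", "total_cost": 120.0},
    {"tenant": "Acme", "app": "Exchange", "total_cost": 80.0},
    {"tenant": "Acme", "app": "Oracle", "total_cost": 200.0},
]

# Subtotal Total Cost of Services by (Tenant, Application).
subtotals = defaultdict(float)
for r in rows:
    subtotals[(r["tenant"], r["app"])] += r["total_cost"]

grand_total = sum(subtotals.values())
```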
CLEANING UP THE REPORT AND RUNNING IT
To clean up the report, right-click and delete the summary ROWS (not columns).
Then go to the bottom of the report, hold the control key, select both summary rows, and right-click and delete them. (Leave the TOTAL rows.)
Save the report.
Now let's run the report to see how it looks.
Select the Run icon from the toolbar and run the report as HTML. (Note the other formats you can run it in, if you want...)
The report will show like this in its final format. I've paged down in the report below to show you subtotals, and you can page to the bottom to see the totals by company and the total of all resources charged.
These reports are extremely flexible to do what you need. Notice the drill-down link in the Tenant column (pictured above in the red circle). If you click on the LINK, you will drill down from Tenant to Line of Business, then to Business Unit, etc. If you right-click on the link, you can drill up as well.
You can now schedule this report to run and distribute in various formats, like any other OnCommand Insight Data Warehouse report.
Remember, now that you have created this report, every time you run it, it will provide the latest usage information. You can automate this report by scheduling it to run and email it to recipients, etc. Lots of flexibility...
7.2 OTHER OPTIONS FOR AD-HOC REPORTS USING QUERY STUDIO
You can also create simple ad-hoc reports by using Query Studio. A very simple example is shown here.
Log onto the data warehouse using Admin/admin123 (you must be logged on as Admin to use Query Studio).
From Public Folders, select the Chargeback Datamart.
Select the Launch menu in the upper right corner of the view and select Query Studio.
The Datamart is set up as a "simple Datamart" and an "advanced Datamart". The simple DM contains the elements that most users use for reports. The advanced DM contains all the Facts and Dimensions for all the elements. At this point, we'll create this report using the simple DM to show you how easy it is.
Expand the Simple DM and do the following:
Click and drag Business Unit to the palette.
Click and drag the Application element to the palette. You see the applications line up with their proper Business Units.
Click and drag Tier over to the palette to organize the storage usage by tier.
Click and drag "Provisioned raw by GB". (You can select megabytes or terabytes as well as gigabytes; I've selected GB because this is from a volume perspective and an application perspective.)
To calculate cost, we need to add the "Tier cost" to the report.
Click and drag the "Tier cost" element over and place it between the Provisioned Raw and the Tier column.
To filter out any storage without a tier cost associated, right-click the heading of the Tier Cost column and select Filter (see below for reference):
o Select "Show only the following" (default)
o Select "Missing values" to expand it
o Select "Leave out missing values"
o Select OK
See Results below
Now let's calculate the total cost of usage by GB per application.
Hold the control key and highlight the "Provisioned Capacity" and Tier Cost columns until they show yellow.
Select the green Calculation icon from the edit icons at the top, or right-click on the columns and select "Calculate".
In the calculation window, select multiplication, title the new column "Cost for Storage", and click Insert. It creates a new column and completes the calculation.
To format the column, right-click on the new column and select Format Data.
Select currency, the number of decimal places (usually 0), and the 1000s separator, and click OK. See how the column is formatted now.
Double-click the title on the report and re-title the report "Chargeback by Application and BU".
Now you don't really need the Tier Cost column, so you can delete it by right-clicking on the column and selecting Delete.
This is a good raw report, but now let's make it more useful.
To group storage cost by Business Unit and Application:
Select the Business Unit column (it turns yellow) and select the Group By icon on the top line.
You see the report reformats itself into cost by application by business unit.
Click the "Save As" icon and save the report to the public folders.
Further editing
You can go back and further edit the report like this:
Let's filter out all the N/A entries in the BU and Application columns. You have to do this one column at a time.
Right-click the BU column and select Filter.
In the filter dialog window, select "Do not show the following (NOT)" from the "Condition" dropdown.
Select N/A and click OK.
Do the same for the Application column.
Then save the report again.
As you see, you now have a better quality report.
To exit Query Studio, click the "Return" icon at the top right corner of the screen.
8 SCHEDULING REPORTS FOR DISTRIBUTION
OK, now that the report is saved, let's schedule it for running and distribution. You can schedule all the built-in reports in OnCommand Insight.
Navigate to the chargeback report we just created (you should be looking at where you saved it...).
Select the schedule icon on the right-hand side, where you can set the properties.
As you see on the right, you can schedule the start and finish date.
You can just send this report one time by clicking Disable.
Set the schedule options for weekly, daily, monthly, etc. Schedule this report to run and send it to yourself at 3 p.m. every Tuesday until Feb 1, 2012. As you can see, you can schedule biweekly, several times a week, or several times a day, or you can also set up by month, year, and even by trigger. As you see, lots of options.
There are a lot of options for report format. The default format is HTML, but we can override that default format by clicking and choosing from PDF, Excel, XML, CSV, etc.
For delivery, we can email it, save it, or print the report to a specific printer. You can send the report via e-mail to users, distribution lists, etc. We can include a link to the report
or attach it directly to the email as well. NOTE: Recipients must be able to log into the OnCommand DWH to access the link.
When you are done, click OK, and the schedule is set.
9 ENDING COMMENTS AND FEEDBACK
I hope this lab was of value to you. Your feedback is important to the quality of this lab document. Please provide feedback to Dave Collins at davec@netapp.com.
NetApp provides no representations or warranties regarding the accuracy, reliability, or serviceability of any information or recommendations provided in this publication, or with respect to any results that may be obtained by the use of the information or observance of any recommendations provided herein. The information in this document is distributed AS IS, and the use of this information or the implementation of any recommendations or techniques herein is a customer's responsibility and depends on the customer's ability to evaluate and integrate them into the customer's operational environment. This document and the information contained herein may be used solely in connection with the NetApp products discussed in this document.
© 2012 NetApp, Inc. All rights reserved. No portions of this document may be reproduced without prior written consent of NetApp, Inc. Specifications are subject to change without notice. NetApp, the NetApp logo, Go further, faster, xxx and xxx are trademarks or registered trademarks of NetApp, Inc. in the United States and/or other countries. All other brands or products are trademarks or registered trademarks of their respective holders and should be treated as such. TR-XXX-XX
3.4 PORT BALANCE VIOLATIONS
Let's take a look at port balance violations. These are violations showing imbalance in SAN traffic from hosts, arrays, and switches. These are not performance-related violations.
Using either the Navigation Pane or the dropdown menu, open Assurance.
Select Port Balance Violations.
Group by Type, then Device.
Expand Hosts and sort the Device column ascending.
Select the device (host) nj_exch002. Note that this host has a balance index of 81. That is a measure of the difference in distribution of traffic (the load) between the HBAs on this host. Any index over 50 indicates significantly unbalanced ports on a device.
Select the Switch Port Performance microview. Note that over 88% of the traffic is going across one HBA and only 11% of the traffic across the other. A failure on the heavily used HBA could choke that application. This could indicate that port balancing software is not configured correctly on this server, not configured at all, or not installed.
Now collapse the Hosts and expand the Storage devices.
Select various storage devices and view the traffic distribution in the Switch Port Performance microview to understand the balance across the storage ports.
These port balance violations provide valuable data on how your environment is configured and optimized. They allow you to quickly determine where you need to optimize your configurations based on actual usage. These are balance violations within each device, not necessarily traffic-related performance violations. We'll look at performance a bit later.
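This lab does not spell out the exact balance-index formula OnCommand Insight uses, but one plausible reading of "difference in distribution of traffic between the HBAs" can be sketched in Python as the spread between the busiest and least busy port's share of total traffic (an assumption for illustration only):

```python
# Assumed formula: spread between the busiest and least busy port's
# percentage share of total traffic. Not the documented OCI formula.
def balance_index(port_traffic):
    total = sum(port_traffic)
    if total == 0 or len(port_traffic) < 2:
        return 0.0
    shares = [100.0 * t / total for t in port_traffic]
    return max(shares) - min(shares)

# Two HBAs carrying 90% / 10% of the load: index 80, well over the
# "significantly unbalanced" threshold of 50 mentioned above.
idx = balance_index([90, 10])
```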
3.5 DISK UTILIZATION VIOLATIONS
We looked at Disk Utilization Violations in the Violations Browser a few minutes ago because we got alerted to a violation. But if you didn't look at the error, you can go directly to Disk Utilization Violations here and troubleshoot your issues similarly to how we did in the Violations Browser. The difference is that the Violations Browser breaks down the violations by how they impact your Business Entities, Applications, Data Centers, etc., so you can troubleshoot by business priorities, while Disk Utilization Violations lets you easily see your most critical utilization issues and troubleshoot from the disk utilization violation back to the hosts. You can also add columns to show Applications and Business Entities if you want.
Let's take a look at how you can use Disk Utilization Violations to quickly identify and drill down to where your issue is.
Select Disk Utilization Violations from the Assurance menu.
Sort the Utilization column descending to bring your heaviest utilization to the top.
Here you see the utilization of each disk that exceeded the Disk Utilization threshold we set earlier in our Global Policy. In relation to each violation, you see the disk, the array, the hosts that access this disk, the date and time the violation occurred, and the percentage of utilization, as well as IOPS and throughput.
Select the disk with the highest utilization.
Now select the Volume Usage of Disk microview to get details on volume usage and performance.
Sort the Disk IOPS descending and select the top Volume Usage of Disk. Here I see the volume with the highest usage, along with the disk throughput and percentage info by volume and host.
[Screenshot callouts: Exceeded Threshold; Host with highest IOPS; Switch Traffic OK]
Select the Switch Port Performance microview. I see that my load appears to be balanced (Distribution column) across the storage ports, so that's most likely not a SAN or network configuration issue.
Since this disk did cause a utilization violation, I can identify possible host candidates that could be causing the high utilization on the disk, OR I can see that the disk may have too many volumes carved from it and I may need to spread that load out across more disks.
4 PERFORMANCE
OnCommand Insight provides performance information from end to end. This is different from the violations we discussed above in that it provides pure performance information for Volumes, Internal Volumes (FlexVol volumes), Storage Pools (aggregates), and Disks. Performance also covers VMs, Switches, ESX, Hyper-V, VMDKs, and Datastores. From here we can use this to troubleshoot congestion, contention, and bottlenecks; identify heavily used storage pools, volumes, disks, and SAN ports; find possible candidates for physical-to-virtual host virtualization; and optimize your storage and tiering.
From the Navigation Pane or the dropdown menu, expand the Performance menu.
Here we see that OnCommand Insight collects and shows you storage performance, switch performance, datastore performance, VM performance, and even application performance as it relates to storage performance.
4.1 STORAGE PERFORMANCE
From the Performance menu, select Storage Performance.
Sort the Top Volume IOPS column descending (if not already done).
Select the array called Sym-0000500743… in the main view (it should be near the top).
Use the scroll bars to see more performance info in all windows.
Use the horizontal slide bar in the main view to see the volume response times and IOPS, as well as the disk utilization and IOPS columns (far right). Notice there is no Internal Volume performance information, because the EMC Symmetrix does not contain any. We'll look at a NetApp array shortly to see Internal Volume (FlexVol) performance.
Now, in the main view, notice the column called Volume Partial R/W. This indicates there are volumes on that array that are misaligned (we'll see more detail later).
Select the microviews at the bottom to show you details of disk performance and volume performance. Which microviews did you open? (Hint: view below.) Notice this provides detailed throughput, IOPS, and response times at the volume and disk level.
Close the Disk Performance microview.
Select the column customize icon in the header of the Volume Performance microview.
Use the vertical scroll bar to view all the columns that can be added or removed from this report.
Select Partial R/W and the Storage columns and click OK. This adds columns to the Volume Performance report showing you each volume on each array that is misaligned. (Note: you can get a complete list of all your misaligned volumes by selecting all the arrays in the main view above.) Additionally, you can group the volumes by storage to make it easier to view all the misaligned volumes across your entire enterprise by array. (See the figure below.)
Now select the Symmetrix-FAST array and toggle the Chart microview on. Here you see OnCommand Insight showing EMC FAST auto-tiering. You can also see the NetApp Hybrid Aggregates. We can also chart this performance over time.
[Screenshot callouts: Partial Read/Write indicates volume misalignment; notice FAST-T volume performance]
OnCommand Insight provides complete end-to-end performance views through virtualized storage. Let's take a look.
Select the storage array called Virtualizer from the main view. (Note: This is a V-Series machine, but OCI provides the same visibility through other virtualizers as well.)
Toggle on the Virtual Machine Performance microview.
Toggle the Backend Volume Performance and Datastore Performance microviews on.
Use the slide bars at the bottom of the microviews to see more of the performance columns in each view. (Whoops... there is a red mark in the Latency column. We will analyze this later.)
I know this is a bit busy, but I wanted to demonstrate that you can have deep performance visibility from the VM through the Datastore to the frontend virtualizer array and through the backend volumes. You can also drill performance down to the disks on that backend array, and you can select the Switch Port Performance microview to visualize the performance on the SAN, so you can see very deep performance information from end to end. We will "analyze" these performances from end to end a bit later.
4.2 SAN PERFORMANCE (SWITCH PORT PERFORMANCE)
Switch performance is the actual performance on the SAN at the switch.
Select Switch Port Performance from the Performance menu. OnCommand Insight knows whether the switches are connected to arrays or hosts, so it shows you the performance in the context of the host or array instead of from the switch perspective.
Using the dropdown at the top of the table, group the main view by "Connected Device Type then Name".
Using the dropdown next to it, set the timeframe to "Last Week" and hit the refresh icon to the right.
Sort the Distribution column descending (arrow pointing down).
Expand Hosts.
Expand hosts ny_ora1 and exchange_ny1.
If you look at the Value and Distribution columns, you can see how the HBAs are balanced on these hosts. On host ny_ora1, you see three HBAs that are balanced very well. But looking at host exchange_ny1, you see that one of your two HBAs has over 95% of the traffic load on it, while the other one has less than 5% of the traffic. So you can see an imbalance of the load across your HBAs. Perhaps the multipath software is not configured correctly, doesn't work, or is installed but not turned on. However, also look at the fact that one HBA is 4Gb and the other is 2Gb. The admins may have purposely configured this host's traffic to compensate for the slower HBA...
Select the Port Performance Distribution and Port Performance microviews to view this analysis over time.
Select exchange_ny1 from the main view above. View the performance and distribution of both HBAs. If you select one or the other, the performance and distribution charts change to show you the details of what you've selected. This function is the same throughout the OnCommand Insight GUI.
4.3 CANDIDATES FOR HOST VIRTUALIZATION BASED ON ACTUAL PERFORMANCE
This performance view, from the host SAN perspective, shows you which are the busiest servers and which are candidates for virtualization.
Toggle off the two performance charts.
Collapse the expanded columns using the Collapse All Groups icon on top of the main view.
Expand Hosts again. Notice your busiest servers are at the top of the list.
Use the vertical slide bar to go to the bottom of the host list to see your least busy hosts. As you see here, there are many hosts down near the bottom that have hardly any traffic. Note: If you have a virtualization project going on, you can very quickly isolate which physical hosts don't have much traffic to the applications and conduct your due diligence on those applications for possible relocation to the VM environment.
You can also use the same information here to choose which ESX hosts are good candidates to move those applications to, based on how much traffic they are generating on the SAN.
4.4 STORAGE ARRAY PERFORMANCE BASED ON SAN TRAFFIC
We use the same logic and methods to optimize the traffic across the storage ports of the arrays.
Collapse the Hosts section and expand the Storage section.
You can see the busiest arrays at the top.
Expand storage array XP 1024 to see the traffic flow through the storage ports. In this case, over 80% of the traffic is going across two of the six ports on the storage array. Not very well balanced. You can rebalance this traffic, OR, using this information, you can select a lesser-used storage port to provision your NEXT Tier 1 application to. This helps you intelligently provision and optimize your environment using real traffic analysis.
4.5 STORAGE TIERING AND ANALYSIS
Similar to what you did with the hosts, you can see the storage arrays that are NOT so busy.
Scroll to the bottom of the Storage array list.
There are several expensive tier 1 Symmetrix and other arrays at the bottom of this list that have very little traffic accessing them. These arrays may have lots of data on them, but nobody's using it. Armed with this information, you could take a look at the application data on these expensive tier 1 arrays and move the applications to less expensive tier 2 or tier 3 arrays, OR archive the data. Then you can decommission or repurpose these expensive arrays (LOTS of ROI potential here).
4.6 SWITCH ISL TRAFFIC VISIBILITY AND OPTIMIZATION
OnCommand Insight shows you only the ISLs (Inter-Switch Links) under the Switch category.
Collapse the Storage category in the main view and expand the Switch category.
Expand Switch 78 and hcis300.
As we saw with hosts and arrays, we can see exactly how well balanced the traffic is across the ISLs. Switch hcis300 is well balanced, but on Switch 78 we see that 90% of the traffic is going across one switch link and only 9% across the other. If this is a trunk, it is severely out of balance.
We also see which are the busiest and least busy switches. This allows us to balance out (optimize) our environment as well as weed out the least busy switches.
4.7 VIRTUAL MACHINE AND DATA STORE PERFORMANCE: TROUBLESHOOTING END-TO-END PERFORMANCE ISSUES USING "ANALYZE PERFORMANCE"
Let's put all this performance information to good use.
USE CASE: I may have gotten a call from a user complaining that the application on VM-70 is running slow, or I may have received an alert from a threshold being breached. Let's troubleshoot the problem.
Select Virtual Machine Performance.
Select Custom from the "Timeframe" dropdown menu next to the grouping menu. Enter the dates January 1, 2012 through now.
Then hit the green recycle button next to the dropdown.
Sort the VM Disk Top Latency column descending to get the longest latency at the top.
Here we see that VM-70 does not, in fact, appear to have any performance issues, but we do see very high CPU, Memory, and Data Store Latency on VM-60 and VM-61.
Look at column 2. The common factor between VM-70 (the user complaint) and VM-60 is DS-30.
Open the Datastore Performance microview to validate the high latency time.
Right click on VM-60 and select Analyze Performance.
This opens an analysis of everything associated with VM-60 and DS-30.
See the tabs across the top of the window. Each of these tabs provides in-depth visibility into performance within each category.
Selecting the Disk tab, I see that although I have a few high "top" utilization values, overall utilization and IOPS are relatively low, so I can rule out a hot-disk issue.
Select the Volumes tab and the Internal Volumes tab. I see there are some relatively high Top Response times but still very low IOPS, which tells me other factors are affecting response time and the slowness of the application on VM-70.
Select Backend Volumes; we see the storage is virtualized, and we can see the performance on the backend volumes here. I see some possibly higher IOPS but still no glaring issues in performance.
To make sure I don't have a SAN problem, I select the Switch Performance tab. It shows an imbalance between the 2Gb HBAs on ESX1 (where VM-60 and VM-70 are) and a potential optimization or outage issue, but no gridlock.
Select the Hosts tab. This tab shows me that host ESX1 is the same host that holds VM-60 and VM-70. VM-60 appears to be causing very high CPU and Memory usage, which is causing contention with time sharing during disk access, thus creating high Disk Latency. But the Disk IOPS are still very low.
We can deduce that VM-60 is probably not sized right for the application that is driving it so hard. This is probably what's causing the disk latency issue, so the chances of a disk problem are slim.
4.8 VM PERFORMANCE
VM Performance helps you troubleshoot the same scenarios. Here you can understand what's going on in the whole environment.
Select Virtual Machine Performance.
Sort the Top Disk Latency column in descending order so the largest latency rises to the top. In this case, VM-61 is chewing up a lot of memory and a lot of CPU time but using low Disk IOPS. The VM appears to be causing the latency issues.
Select VM-61. You can open a micro view and see the VMDK performance as well.
Add a chart microview.
You can also break it out by volume performance and data store performance, giving you a more holistic picture of the environment and helping you troubleshoot to resolution.
The takeaway is that you can troubleshoot performance issues from many different angles and go in many different directions to quickly narrow down the problem.
4.9 APPLICATION AND HOST PERFORMANCE
You can add your applications and hosts to any of these performance views to help you understand how your performance is affecting your applications. That is important to the business customer. You can drill down and understand where the performance issue is, with visibility from the application all the way to the disks.
Scroll down to ESX1.
Use the horizontal slide bars in the main and microviews to see performance info.
OnCommand Insight shows you performance from the host perspective all the way back to the storage, but remember it does not have agents on the hosts, so it cannot show you the details of the performance on the host itself.
Review questions:
What is the value of Analyze Performance?
What are the areas where we can view performance metrics under Analyze Performance?
5 PLANNING TOOLS
5.1 TASK AND ACTION PLANNING AND VALIDATION
OnCommand Insight has two planning tools to help you plan, validate, and monitor changes in your environment. One is a change management tool and the other is a migration tool for switches only.
The change management tool (or "What-If") helps you create tasks and actions within those tasks using a wizard. It helps you logically configure the changes that you need to make, test and validate those changes before you make them, and monitor the progress of changes as you make them. This significantly reduces your risk when making changes because you can pretest them before making any actual changes in your environment.
NOTE: Remember, OnCommand Insight is a READ-ONLY tool, so it does not perform any active tasks. Use it in the planning, validating, and execution monitoring of your change management.
Select the Planning menu.
Select Plans to access the tool.
Select the task ID oadmin 01082007 – Replace HBA Clearcase1.
Notice the Actions list for the task. These are generated by you to help you logically and accurately list out the tasks.
To add more actions, simply right click in the action area and select "Add Action".
In the new action window, scroll down and select the action you want to perform. You can add a description and other parameters, then select OK.
Then you can pre-validate the actions to ensure you know the results of each action BEFORE you actually perform the task. To do this, right click the task and select Validate task.
As you see below, OnCommand Insight validates each action against the current configuration in your environment to show what has been completed correctly (GREEN CHECKMARK), what is not completed (BLANK BOX), and what is not completed correctly (RED X).
When you build the action list, OnCommand Insight automatically compares your planned changes to your existing environment and anticipates any future violations that could occur if you made these changes without correcting the planned actions, OR violations that already exist in your environment.
Once you finish creating your list of action items, you can right click and validate the actions as many times as you want until they are completed. OnCommand Insight validates every one of these actions. It will show you whether the actions are completed, done incorrectly, or not completed at all. It gives you a preview of potential issues before you make the changes, thus lowering your risk.
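The validation loop described above, planned actions checked against the live configuration, can be sketched roughly as follows. The action names, configuration snapshot, and status labels are all invented stand-ins; OnCommand Insight does this against its discovered environment:

```python
# Hypothetical snapshot of the current environment, as a simple dict.
current_config = {
    "clearcase1_hba": "replaced",       # this action has been carried out
    "zone_clearcase1": None,            # not started yet
    "masking_clearcase1": "wrong_lun",  # done, but incorrectly
}

# Planned actions and the state each one should produce when done.
planned_actions = [
    ("clearcase1_hba", "replaced"),
    ("zone_clearcase1", "zoned"),
    ("masking_clearcase1", "masked"),
]

def validate(actions, config):
    """Return a status per action: done / not_done / incorrect."""
    results = {}
    for key, expected in actions:
        actual = config.get(key)
        if actual == expected:
            results[key] = "done"        # green checkmark
        elif actual is None:
            results[key] = "not_done"    # blank box
        else:
            results[key] = "incorrect"   # red X
    return results

status = validate(planned_actions, current_config)
```

Re-running `validate` after each real-world change is the equivalent of right-clicking the task and selecting Validate task again.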
5.2 SWITCH MIGRATION TOOL
The migration tool provides you with instantaneous visibility into all of the environment and business entities that will be affected by a migration to new or updated switches. Say you just want to update the firmware on a switch. What if it goes down in the middle of the upgrade? What does it affect in your environment? Knowing this ahead of time can reduce your risk by giving you the complete picture of who and what will be affected by the interruption.
The migration tool allows you to tell OnCommand Insight which switches you want to upgrade or replace. Because OnCommand Insight knows all the hosts, storage arrays, volumes, business units, and applications that are affected by this change, it can provide you with the current violations as well as the future violations that will occur when the switches are pulled out. This enables you to validate the total impact of the changes you want to make BEFORE you make them, so you can reduce your risk by fixing issues before they occur.
NOTE: Remember, OnCommand Insight is a READ-ONLY tool, so it does not perform any migration tasks. Use it in the planning and execution monitoring of your migration.
Under the Planning menu, select Migrations. This shows you the migration tasks already created and the existing impact of the proposed changes on your business entities.
To add a new task, right click on the task area and select Add Task.
Complete the task details above and click Next to select the switch(es) to migrate.
Select the switches to be updated or replaced and click Finish.
Select the new task in the main screen and use the microviews to see the affected paths, impact, and quality assurance views.
Using this information you can speed up switch migrations, because it cuts the due diligence time and lowers your risk: you know the impacts before you take any actions.
6 DATA WAREHOUSE
6.1 INTRODUCTION AND OVERVIEW
Let's introduce you to the data warehouse. We'll talk about the Datamarts and navigation, then we'll go into the reports, and we'll finish by showing you how to create ad-hoc reports using Query Studio.
The data warehouse is made up of several Datamarts. Datamarts are sets of data that relate to each other.
Open a browser and go to http://localhost:8080/reporting.
Log on using admin/admin123.
If you receive this page, uncheck "show this page…" and select My Home.
Data warehouse (DWH) Home Page Public Folders
The data warehouse has several built-in Datamarts. Above you see the 3 primary ones, called the Chargeback Datamart, the Inventory Datamart, and the Storage Efficiency Datamart. Additionally, we have two folders which contain other Datamarts for Capacity and Performance.
Select the Capacity 6.3 folder.
As you can see, there are other capacity-related Datamarts, including the Internal Volume, Volume, Storage and Storage Pool, and VM Capacity Datamarts. Datamarts provide you with easy-to-use data elements related to those specific categories, making it easier for you to use the existing reports and, more importantly, helping you create your own custom reports using the drag-and-drop technology we'll show later in this lab.
Select the Storage Capacity Datamart.
There are 4 folders located within EVERY Datamart. Most built-in reports are in the Reports folder. Any custom reports you create MUST be saved in the Customer Report or Customer Dashboard folders in order to survive upgrades.
Select Dashboards (notice the BREADCRUMBS to help you navigate).
Which dashboards are located in the folder?
6.2 PLAN - CAPACITY FORECAST DASHBOARD
The data warehouse has over 200 built-in dashboards and reports. Let's take a look at a few.
The capacity forecast dashboard provides a history of how storage has been used, as well as trends and a forecast into the future. It shows by data center and by tier.
Select the Capacity Dashboard. This may take a bit of time to paint, so be patient.
The capacity forecast dashboard provides you with trending and forecasting of your capacity across your entire environment. NOTE: your data in the picture may vary depending on the demo DB you are using and the date (because it's a trending chart).
While we are at it, let's also stage the tiering dashboard in a new window by holding the Shift key and selecting the Tiering Dashboard, so we can discuss it as well in a few minutes.
When it first opens, you see in the upper left the Capacity Consumption Forecast report by datacenter and tier. The initial view shows how much storage is left in each datacenter, by tier, before it reaches 80% of capacity (adjustable by the user). The graph on the right depicts the usage trending and forecasting over time. The "Reset Selection" button resets the graphic to show storage trending across the entire enterprise.
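The trend-and-forecast idea behind this dashboard can be sketched with a simple linear fit. The monthly usage numbers, the 800 TB capacity, and the 80% threshold below are all hypothetical; the dashboard computes its own forecast from your history:

```python
# Hypothetical monthly used capacity (TB) for one datacenter/tier.
history = [500.0, 520.0, 545.0, 565.0, 590.0, 610.0]
capacity_tb = 800.0
threshold = 0.80 * capacity_tb  # dashboard default of 80%, user adjustable

n = len(history)
xs = range(n)

# Least-squares slope/intercept for a straight-line trend.
mean_x = sum(xs) / n
mean_y = sum(history) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, history)) / \
        sum((x - mean_x) ** 2 for x in xs)
intercept = mean_y - slope * mean_x

# Months from now until the trend line crosses the 80% threshold.
months_to_threshold = (threshold - intercept) / slope - (n - 1)
print(f"~{months_to_threshold:.1f} months of headroom left")
```

This is the "how much storage is left before it reaches 80%" question answered per datacenter and tier; with these sample numbers the answer is a little over one month.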
Select the TokyoGold-Fast block on the matrix. Notice the graph at the right changes to reflect the storage consumption trending and forecasting for that tier at that datacenter.
Reset the Storage Capacity Trend chart by clicking Reset Selection next to the matrix chart. The chart on the right will show the trending and forecasting for the entire enterprise.
Scroll down the dashboard to view the list of reports on the right side. Each of the dashboards has a list of related reports on the lower right-hand side. You can select from any number of different reports to get the detailed information that you need.
The dashboard also contains some dial graphics showing you storage consumption and capacity in your enterprise and in each datacenter.
Continuing down the left side of the dashboard, these charts show you business-level storage consumption by business entities. Here we can drill down to see usage by Tenant, Line of Business, Business Unit, and Project.
Right click in this graphic and you can drill down to view storage usage by line of business, drill again to business unit, and then by project.
As you can see, you get really detailed information on consumption by your business entities, from Tenant, LOB, Business Unit, and Project down to Application, in a very quick form.
6.3 TIER DASHBOARD
Let's take a look at the Tier dashboard that we opened up a few minutes ago by selecting it from the tabs at the bottom of your Windows screen.
Note: Your data may vary depending on the database used for this demo.
This dashboard gives us a different perspective on how storage is growing and how it is being used. As you see, it looks like the Gold tier has remained relatively stable over the past few months, while Gold-Fast storage, which is more expensive, has grown considerably over the past couple of months. This tells you how your tiering initiatives are progressing. Bronze, which has hardly grown at all, could be an indication that we're spending too much money on storage. You might want to review your storage usage using OnCommand Insight to see how the storage is being consumed and by whom.
Scroll down. Let's look a little closer. OnCommand Insight shows storage usage by business unit, application, and tier. This enables you to understand how storage is being used. You can also view it by data center, tier, and business entity.
As we did in the last report, you can right click and drill down to look at consumption by tenant, line of business, business unit, project, and application. You can understand how your data is being consumed at multiple levels and from multiple aspects.
Select the "Return" icon at the top right of the Tier Dashboard to return to the folder.
There is a new Storage Tier report located in the Storage and Storage Pool Datamart. Let's take a quick look at it.
Use the breadcrumbs to navigate back to the Capacity 6.3 folder.
Then select the Storage and Storage Pool Capacity Datamart and the Reports folder.
Next, select the Storage Capacity By Tier report to view the report below. This report shows your capacity by tier and how it trends over time. It also provides detail and summary sections at the bottom showing each array's tiers, how much capacity is used, and the percentages (lots of information in a single report).
6.4 ACCOUNTABILITY AND COST AWARENESS
The standard data warehouse chargeback reports are more about accountability than full chargeback. We'll show you this now. We'll also show you how to create your own powerful "custom" chargeback/showback reports using Business Insight Advanced later in this lab.
Select Public Folders in the breadcrumbs at the top left of the data warehouse window.
Select the Chargeback Datamart.
In the Chargeback Datamart, select the Reports folder to access various reports that show capacity and accountability.
Select Capacity Accountability by Business Entity and Service Level Detail. Here you have the option to customize this report to your needs by selecting service levels, resource types, applications, and host and storage names. You also have the option of selecting the business entity by using the dropdown to select any or all of the business entities and projects.
Select all in each category to get a good representation of the in-depth reporting.
Then click Finish.
The report provides a very detailed view of capacity utilization by business entity and application, including the host it's running on, the storage array, the volume, and the actual provisioned and used storage. This report is grouped by business unit as well as application, which gives you a good representation of who's using what storage.
Note the scroll bar for scrolling on page 1; you can also use the Page Up/Page Down links at the bottom to go to page 2, etc.
Select the Return icon in the upper right to return to the folder of reports.
6.5 UNCHARGED STORAGE
You can also generate reports that help you understand what storage is NOT being accounted for.
Select "Capacity Accountability by Uncharged Capacity per Internal Volume". This provides you with a complete listing, by array and volume, of how much storage is not being charged or accounted for.
You get FULL accountability of which storage is being accounted for and which storage is NOT, across the entire enterprise, regardless of storage vendor.
6.6 IOPS VS CAPACITY REPORTING IN THE DATA WAREHOUSE
Let's look at performance versus capacity, and at orphaned storage by last access. This adds another dimension to how your storage is being used.
Open the Performance Datamart (hint: use the breadcrumbs to select Public Folders, then select the Performance Datamart).
Select the Internal Volume Daily Performance folder. This provides a really good pictorial view of how your storage is being used.
Select Reports and select Allocated used internal volume Count by IOPS Ranges. This provides a capacity-versus-IOPS report, which is very interesting.
Select the Last Year time period.
Select all storage models and tiers and click Finish.
Selecting all arrays and all tiers gives you a full view of how your storage is being used (or not being used…).
Looking at the results, remember this is storage accessed over the past year. The resulting report shows you all the storage that has (or has not) been accessed over the past year.
As you see from the first bar, there are over 7300 volumes that have not been accessed in the past year. In terms of size, over 3.4 PB has had zero access in the past year. Note: this is actual customer data, but the names have been sanitized.
You can see how impactful this is. There is over 3.4 PB of storage that has had zero use for a year. This information enables you to start making business decisions about the storage and to better understand how it's being used, so you can reclaim and repurpose some of that storage. (Talk about ROI!)
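The bucketing behind this capacity-vs-IOPS chart can be sketched as follows. The volume records and the IOPS range edges below are hypothetical; the report's own ranges may differ:

```python
from bisect import bisect_right

# Hypothetical (volume_name, max_iops_last_year, capacity_gb) records.
volumes = [
    ("vol001", 0, 500), ("vol002", 0, 1200), ("vol003", 3, 250),
    ("vol004", 80, 400), ("vol005", 1500, 800), ("vol006", 0, 2048),
]

# Bucket edges: exactly 0, 1-10, 11-100, above 100 IOPS.
edges = [1, 11, 101]
labels = ["0 IOPS", "1-10", "11-100", "100+"]

counts = {label: 0 for label in labels}
capacity = {label: 0 for label in labels}
for _, iops, gb in volumes:
    label = labels[bisect_right(edges, iops)]  # pick the bucket for this volume
    counts[label] += 1
    capacity[label] += gb

# Volumes with zero access over the year are reclamation candidates.
print(counts["0 IOPS"], "volumes,", capacity["0 IOPS"], "GB untouched")
```

Summing the zero-IOPS bucket by capacity is exactly the "over 3.4 PB with zero access" observation, just at toy scale.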
6.7 DIGGING INTO THE DETAILS
These charts are really nice, but you need the details to effectively work on identification and recovery. OK, let's go look at the underlying details.
Go back to the Volume Daily Performance 6.3 folder and drill down to Reports (hint: it's in the Performance Datamart).
Select the Array Performance report. This gives you a complete breakdown of the performance for all storage, from the arrays all the way down to the volumes.
Select one year and set the IOPS parameter you want to filter on (I usually start at the default).
This report starts with the Orphan Summary.
Select Page Down to view the storage array summary.
As you see, this is pretty high level. It shows the total amount of raw and allocated capacity in each storage device versus the total IOPS and the max IOPS actually used over the past year. This tells a very compelling story, but it's still high level.
Page down a few pages to reach the bottom of this section. You see a glossary of terms explaining the column headings.
Now continue to page down to the Host tables. These show you the hostname, the raw and allocated capacity by host, and the IOPS accessed over the past year. This is more detail than the Storage tables above.
Page down past the host tables and you get the orphaned-volumes perspective. Here is a great deal of detail that you can use. These are all the volumes that have not been accessed in a full year. It shows you the array name, volume capacities, and hostname, as well as the applications and tiers, for everything that has not been accessed in the last year.
Page down to the "Volume by IOPS" tables (maybe several pages down). These show you the storage array, volume, capacity, host, application, tier, and the max and total amount of IOPS. So we can say it's a pretty well-rounded report that shows you actual usage (or lack thereof), so you can go reclaim the storage that is not used.
6.8 VM CAPACITY REPORTING
There are several different reports in the VM Capacity Datamart.
Navigate to the VM Capacity 6.3 Datamart.
As you see, we have several reports built in here already.
Select VM Capacity 6.3 and then navigate into the Reports folder.
Select VM Capacity Summary.
Select All so we see the VM capacity across the entire enterprise (spanning multiple vCenters).
The results show all the VMs, their capacity, the data store, the actual capacity, the VM names, the provisioned storage, and the commit ratio of each VM across your entire environment. NOTE: I paged down to the bottom so you can see the total storage and commitment across your whole enterprise, plus a glossary of terms.
Select the "return" button in the upper right corner of the report (it looks like a left-turn arrow).
Next, select the Inactive VMs report to show VMs that have not been accessed in a defined period of time (default 60 days).
Set this time threshold and click Finish.
This is an excellent report showing you which VMs are powered off and how long they have been powered off, as well as how much capacity each one is holding that nobody else can use. It gives you all the details, including the datacenter, VM OS, ESX host, cluster, VMDK, and how long it's been powered off. Armed with this information, you can go recover these VMs and reclaim the storage.
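The filtering this report performs can be sketched as follows. The VM records, the fixed "today" date, and the capacities are hypothetical stand-ins; only the 60-day default comes from the report:

```python
from datetime import date, timedelta

TODAY = date(2012, 8, 1)          # fixed "today" so the example is repeatable
threshold = timedelta(days=60)    # the report's default inactivity window

# Hypothetical (vm, powered_off_since, capacity_gb); None = still running.
vms = [
    ("vm-web01", None, 40),
    ("vm-build7", date(2012, 1, 15), 200),
    ("vm-test03", date(2012, 7, 20), 80),
    ("vm-old-db", date(2011, 11, 2), 500),
]

# Keep only VMs powered off at least 60 days, with days-off and capacity.
inactive = [
    (name, (TODAY - off).days, gb)
    for name, off, gb in vms
    if off is not None and TODAY - off >= threshold
]
reclaimable_gb = sum(gb for _, _, gb in inactive)
```

Here two of the four VMs qualify, holding 700 GB between them: capacity that nobody else can use until the VMs are recovered or deleted.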
7 CREATE AD-HOC REPORT
Let's show you how easy it is to create custom reports in the data warehouse.
7.1 HOW TO CREATE A CUSTOM SHOWBACK/CHARGEBACK REPORT USING BUSINESS INSIGHT ADVANCED
Below is a great example of the custom chargeback or showback report that you will create. It shows usage by Business Entity and Application, including the variable cost of each VM based on configuration, fixed overhead, and storage usage.
STEPS TO CREATE THIS REPORT
Watch a video on how to create this report. Note: You need a user name and password for this community. To obtain them, click the Become a Member link.
The OnCommand Insight Reporting Portal is accessed through http://<reporting-server>:8080/reporting.
Enter your user name and password credentials.
From the Welcome page, select My home.
From the Launch menu (at the top right corner of the OnCommand Insight Reporting portal), select Business Insight Advanced.
From the list of all packages that appears, click on the Capacity <version> folder and then click on VM Capacity <version>.
Create a new report by selecting New from the dropdown in the upper left corner, or Create New if you are on the Business Insight Advanced landing page.
From the pre-defined report layouts in the New pop-up, choose List and click OK.
In the lower right pane, select the Source tab and expand Advanced Data Mart from the VM Capacity package.
From the Advanced Data Mart, expand Business Entity Hierarchy and Business Entity, and drag Tenant onto the report work area.
Collapse Advanced Data Mart and expand Simple Data Mart.
From Simple Data Mart, drag Application onto the report work area to the right of the Tenant column. (TIP: Make sure you place it on the blinking gray BAR to the right of the previous column, or it will give you an error.)
Now we are going to drag multiple columns onto the palette to save time building the report.
We will be reporting on the number of processors (cores) and the memory that is configured for each VM. So let's grab the following elements from the VM Dimension under the Advanced Data Mart.
From Advanced Data Mart, expand VM Dimension.
Select the next columns IN THE FOLLOWING ORDER.
From Advanced Data Mart > VM Dimension, hold the Control key and select the following columns (in order):
o VM Name
o Processors
o Memory
Click and drag VM Name and place it on the report work area to the right of the Application column. NOTE: All the columns should follow in the order you selected them, similar to the screenshot below (your data will differ, but the columns will be the same).
Now let's bring capacity information onto the report.
From Simple Data Mart, hold the Control key and select the following columns (in order):
o Tier
o Tier Cost
o Provisioned Capacity (GB)
Click and drag the Tier column and place it on the report work area to the right of the Application column. NOTE: All the columns should follow in the order you selected them, similar to the screenshot below (your data will differ, but the columns will be the same).
To create a summary of cost per GB, hold the Control key and select Tier Cost and Provisioned Capacity (GB).
Then right click the Provisioned Capacity column, select Calculate, and select the multiplication calculation.
Business Insight Advanced creates a new column for you, completes the calculation, and puts it in the report.
Next, let's format and re-title the column.
Right click on the new column header and select Show Properties.
In the lower right corner, scroll down to the bottom of the properties box and select the ellipsis on the Data Item Name box. Change the name to Storage Cost and click OK.
Note the column heading is now Storage Cost.
Now select one of the numeric values in that column and select the Data Format ellipsis from the properties box in the lower right corner.
From the Data Format dialog box, select Currency from the Format Type dropdown.
As you see from the Properties dialog box, there are lots of options you can set to format the currency numbers in this column. The default is USD, so let's just click OK to accept it. You will see the column reformat to USD.
Here is our current report. Let's filter out storage that is NOT being charged.
Select any BLANK cell in the Tier Cost column and click on the filter icon in the top toolbar.
Select Exclude Null.
Notice that all the rows that had NO cost associated with those tiers are removed, leaving you with only the storage that has charges associated with it. (TIP: in another report, you can actually reverse the logic and show only storage that is NOT being charged.)
You can also format the Tier Cost column as USD currency if you want.
OK, that was easy, but not complete. Let's add other cost factors into your chargeback report: the cost of VM service levels by configuration, and fixed overhead costs used by each application.
ADDING VARIABLE COSTS PER VM TO YOUR CHARGEBACK REPORT
Let's say the customer wants to charge per VM based on the number of CPUs and the memory it's configured with. To do that, we first need to create a VM Service Level comprised of the number of CPUs and the memory configured for each VM, then allocate a cost per service level.
To create a VM Service Level, we are going to drop in a small conditional expression to build the service levels per VM. This is an easy example of the flexibility of Business Insight Advanced in creating reports. (DON'T panic: you can skip the conditional expression and just put a fixed cost on each VM if you want; see the Overhead example later on… but humor me here in this lab.)
Select the Tier column to mark where we want to insert the new columns.
Select the Toolbox tab in the lower right corner and double click the Query Calculation icon.
In the Create Calculation dialog box, name the column VM Service Level, select Other Expression, and click OK.
In the Data Item Expression dialog box, copy and paste the following VM Service Level conditional expression into the Expression Definition box and select OK. (Note: if you are remoted into the OnCommand Insight server, you may have to create a text document on the OnCommand server desktop to paste this into before pasting it into the Expression box.)
Below is an example of the conditional expression that gives you the if-else logic for VM Service Level:
IF ([Processors] = 2 AND [Memory] < 2049)
THEN ('Bronze')
ELSE (IF ([Processors] = 2 AND [Memory] < 4097)
THEN ('Bronze_Platinum')
ELSE IF ([Processors] = 4 AND [Memory] < 8193)
THEN ('Silver')
ELSE IF ([Processors] = 4 AND [Memory] > 8193)
THEN ('Silver_Platinum')
ELSE IF ([Processors] = 6 AND [Memory] > 8191)
THEN ('Gold')
ELSE IF ([Processors] = 8 AND [Memory] > 16383)
THEN ('Gold_Platinum')
ELSE ('tbd'))
Business Insight Advanced will validate the conditional expression (nice to know if you got it right) and create the column called VM Service Level, populated based on the expression. (If you get an error, your conditional expression probably has a syntax or other error.)
You will see a new column added called VM Service Level, with the various service levels for each VM based on the number of CPUs and the memory each has. (At this point there may be duplicates in the list, but not to worry; we are not finished formatting or grouping the report.)
Next, let's add a column that calculates the cost per VM based on the service levels we just established.
Select the Toolbox tab in the lower right corner and double click the Query Calculation icon.
In the Create Calculation dialog box, name the column Cost Per VM, select Other Expression, and click OK.
In the Data Item Expression dialog box, paste the conditional expression for Cost per VM (below) into the Expression Definition box and select OK.
Example of the conditional expression for Cost per VM:
IF ([VM Service Level] = 'Bronze') THEN (10)
ELSE (IF ([VM Service Level] = 'Bronze_Platinum') THEN (15)
ELSE IF ([VM Service Level] = 'Silver') THEN (20)
ELSE IF ([VM Service Level] = 'Silver_Platinum') THEN (25)
ELSE IF ([VM Service Level] = 'Gold') THEN (40)
ELSE IF ([VM Service Level] = 'Gold_Platinum') THEN (55)
ELSE (30))
You will see a new column added called Cost Per VM, with the variable cost for each VM based on its service level.
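This Cost Per VM expression is just a lookup with a default rate. A sketch of the same logic in Python (the rates are taken from the example expression above; the names are illustrative):

```python
# Cost per VM by service level; 30 is the catch-all ELSE rate
VM_COST = {
    "Bronze": 10, "Bronze_Platinum": 15,
    "Silver": 20, "Silver_Platinum": 25,
    "Gold": 40, "Gold_Platinum": 55,
}

def cost_per_vm(service_level: str) -> int:
    """Return the chargeback rate for a service level, defaulting to 30."""
    return VM_COST.get(service_level, 30)

print(cost_per_vm("Gold"))  # 40
print(cost_per_vm("tbd"))   # 30
```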
Next, format the data in the Cost Per VM column to USD currency as you did above.
ADDING FIXED OVERHEAD COSTS TO YOUR CHARGEBACK REPORT
Let's say the customer has determined that the total cost of overhead (including items like heat/AC, floor space, power, rent, operations personnel, helpdesk, etc.) is $24 per VM. Let's create a column called Cost of Overhead and apply this fixed cost. (Note: you can do this for any fixed cost, rather than using SQL, as well.)
Select the Toolbox tab at the lower right corner and double-click the Query Calculation icon as above.
In the Create Calculation dialog box, name the column Cost of Overhead, select Other Expression, and click OK.
In the Data Item Expression dialog box, enter a cost of 24 in the Expression Definition box and select OK.
You will see a new column added called Cost of Overhead, with 24 for each VM. (Note: at this point there may be duplicates in the list, but not to worry; we are not finished formatting or grouping the report.)
Next, format the data in the Cost of Overhead column to USD currency as you did above. Then drag the column header and drop it to the right of the Storage Cost column, as shown below.
Subtotaling, naming, and saving the report
Now that we have a cost per VM, an overhead cost, and the cost of storage usage by tenant, application, and VM, let's sum the total costs and finish formatting the report by tenant and application.
Hold the Control key down and select a numeric cell in each of the Cost Per VM, Storage Costs, and Cost of Overhead columns. Right-click one of the numeric cells, select Calculate, and then select the add function for the three columns.
This will create a new column called "Cost per VM + Storage Costs + Cost of Overhead" and calculate each row.
Now format the column for USD currency and retitle the column to "Total Cost of Services".
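The new column is a plain row-wise sum, and the USD formatting applied in these steps is easy to reproduce. A sketch (the sample numbers are invented):

```python
def total_cost_of_services(cost_per_vm: float, storage_cost: float,
                           overhead: float) -> float:
    # "Cost per VM + Storage Costs + Cost of Overhead", computed per row
    return cost_per_vm + storage_cost + overhead

def usd(amount: float) -> str:
    # USD currency with a thousands separator, as set via Format Data
    return f"${amount:,.2f}"

total = total_cost_of_services(40, 1250.75, 24)
print(usd(total))  # $1,314.75
```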
Name the report "Total Storage, VM, and Overhead Cost by Tenant and Application Chargeback (Showback)" by double-clicking the title area.
Now save it to the Customer Report folder using the same name.
FORMATTING AND GROUPING THE REPORT BY APPLICATION AND TENANT
We are not done yet. Now we need to format the report by grouping, subtotaling, and totaling by tenant and application.
Hold the Control key down and select the Cost Per VM, Provisioned Capacity, Storage Costs, Cost of Overhead, and Total Cost of Services columns.
Select the Total icon from the Summary dropdown icon.
If you page down to the bottom of the report, you will see total columns. We'll clean up the summary rows in a minute.
Let's group the report by Tenant and Application:
Hold the Control key down and select the Tenant and Application columns.
Select the Grouping icon from the top toolbar.
CLEANING UP THE REPORT AND RUNNING IT
To clean up the report, right-click and delete the summary ROWS (not columns).
Then go to the bottom of the report, hold the Control key, select both summary rows, and right-click and delete them. (Leave the TOTAL rows.)
Save the report.
Now let's run the report to see how it looks.
Select the Run icon from the toolbar and run the report as HTML. (Note the other formats you can run it in if you want.)
The report will show like this in its final format. I've paged down in the report below to show you the subtotals; you can page to the bottom and see the totals by company and the total of all resources charged.
These reports are extremely flexible to do what you need. Notice the drill-down link in the Tenant column (pictured above in the red circle). If you click on the link, you will drill down from Tenant to Line of Business, then to Business Unit, and so on. If you right-click on the link, you can drill up as well.
You can now schedule this report to run and distribute in various formats, like any other OnCommand Insight Data Warehouse report.
Remember, now that you have created this report, every time you run it, it will provide the latest usage information. You can automate this report by scheduling it to run and emailing it to recipients. Lots of flexibility.
7.2 OTHER OPTIONS FOR AD-HOC REPORTS USING QUERY STUDIO
You can also create simple ad-hoc reports by using Query Studio. A very simple example is shown here.
Log on to the data warehouse using Admin/admin123 (you must be logged on as Admin to use Query Studio).
From Public Folders, select the Chargeback Datamart.
Select the Launch menu in the upper right corner of the view and select Query Studio.
The Datamart is split into a "simple Datamart" and an "advanced Datamart". The simple DM contains the elements that most users use for reports. The advanced DM contains all the facts and dimensions for all the elements. At this point we'll create this report using the simple DM to show you how easy it is.
Expand the Simple DM and do the following:
Click and drag Business Unit to the palette.
Click and drag the Application element to the palette. You see the applications line up with their proper business units.
Click and drag Tier over to the palette to organize the storage usage by tier.
Click and drag "Provisioned Raw by GB". (You can select megabytes or terabytes as well as gigabytes; I've selected GB because this is from a volume and application perspective.)
To calculate cost, we need to add the "Tier cost" to the report:
Click and drag the "Tier cost" element over and place it between the Provisioned Raw and Tier columns.
To filter out any storage without an associated tier cost, right-click the heading of the Tier Cost column and select Filter (see below for reference):
o Select "Show only the following" (default)
o Select "Missing values" to expand it
o Select "Leave out missing values"
o Select OK
See the results below.
Now let's calculate the total cost of usage by GB per application:
Hold the Control key and highlight the "Provisioned Capacity" and Tier Cost columns until they show yellow.
Select the green Calculation icon at the top of the edit icons above, or right-click on the columns and select "Calculate".
In the calculation window, select multiplication, title the new column "Cost for Storage", and click Insert. It creates a new column and completes the calculation.
To format the column, right-click on the new column and select Format Data.
Select currency, the number of decimal points (usually 0), and the 1000s separator, then click OK. See how the column is formatted now.
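Behind these clicks, Query Studio is dropping the rows with no tier cost and then computing a row-wise product. A sketch of the same two steps; the sample rows are invented:

```python
# (application, provisioned_raw_gb, tier_cost_per_gb); None = missing tier cost
rows = [
    ("Exchange", 500, 2.0),
    ("ClearCase", 200, None),   # no tier cost -> filtered out
    ("Oracle", 1000, 3.5),
]

# "Leave out missing values", then Cost for Storage = Provisioned Raw * Tier cost
cost_for_storage = {
    app: gb * cost for app, gb, cost in rows if cost is not None
}
print(cost_for_storage)  # {'Exchange': 1000.0, 'Oracle': 3500.0}
```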
Double-click the "Title" on the report and re-title the report "Chargeback by Application and BU".
Now you don't really need the Tier Cost column, so you can delete it by right-clicking on the column and selecting Delete.
This is a good raw report, but now let's make it more useful.
To group storage cost by Business Unit and Application:
Select the Business Unit column (it turns yellow) and select the Group By icon on the top line.
You see the report reformat itself into cost by application by business unit.
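The Group By step is equivalent to summing the cost column within each Business Unit. A quick sketch of that aggregation (the sample data is invented):

```python
from collections import defaultdict

# (business_unit, application, cost_for_storage)
rows = [
    ("Finance", "Oracle", 3500.0),
    ("Finance", "SAP", 1200.0),
    ("Engineering", "ClearCase", 800.0),
]

# Group By Business Unit: accumulate each BU's storage cost
cost_by_bu = defaultdict(float)
for bu, app, cost in rows:
    cost_by_bu[bu] += cost

print(dict(cost_by_bu))  # {'Finance': 4700.0, 'Engineering': 800.0}
```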
Click the "Save As" icon and save the report to the Public Folders.
Further editing
You can go back and further edit the report like this.
Let's filter out all the N/A values in the BU and Application columns. You have to do this one column at a time:
Right-click the BU column and select Filter.
In the filter dialog window, select "Do not show the following (NOT)" from the "Condition" dropdown.
Select N/A and click OK.
Do the same for the Application column.
Then save the report again.
As you can see, you now have a better quality report.
To exit Query Studio, click the "Return" icon at the top right corner of the screen.
8 SCHEDULING REPORTS FOR DISTRIBUTION
OK, now that the report is saved, let's schedule it for running and distribution. You can schedule all the built-in reports in OnCommand Insight.
Go to the chargeback report we just created (you should be looking at the folder where you saved it).
Select the Schedule icon on the right-hand side, where you can set the properties.
As you see on the right, you can schedule the start and finish dates.
You can also send this report just one time by clicking Disable.
Set the schedule options for weekly, daily, monthly, and so on. Schedule this report to run and send it to yourself at 3 p.m. every Tuesday until Feb 1, 2012. As you can see, you can schedule biweekly, several times a week, or several times a day, or you can also set it up by month, by year, and even by trigger. As you see, lots of options.
There are a lot of options for report format. The default format is HTML, but we can override that default by clicking and choosing from PDF, Excel, XML, CSV, etc.
For delivery, we can email it, save it, or print the report to a specific printer. You can send the report via email to users, distribution lists, etc. We can include a link to the report
or attach it directly to the email as well. NOTE: recipients must be able to log into the OnCommand DWH to access the link.
When you are done, click OK, and the schedule is set.
9 ENDING COMMENTS AND FEEDBACK
I hope this lab was of value to you. Your feedback is important to the quality of this lab document. Please provide feedback to Dave Collins at davec@netapp.com.
NetApp provides no representations or warranties regarding the accuracy, reliability, or serviceability of any information or recommendations provided in this publication, or with respect to any results that may be obtained by the use of the information or observance of any recommendations provided herein. The information in this document is distributed AS IS, and the use of this information or the implementation of any recommendations or techniques herein is a customer's responsibility and depends on the customer's ability to evaluate and integrate them into the customer's operational environment. This document and the information contained herein may be used solely in connection with the NetApp products discussed in this document.
© 2012 NetApp, Inc. All rights reserved. No portions of this document may be reproduced without prior written consent of NetApp, Inc. Specifications are subject to change without notice. NetApp, the NetApp logo, Go further, faster, xxx, and xxx are trademarks or registered trademarks of NetApp, Inc. in the United States and/or other countries. All other brands or products are trademarks or registered trademarks of their respective holders and should be treated as such. TR-XXX-XX
3.5 DISK UTILIZATION VIOLATIONS
We looked at Disk Utilization Violations in the Violations Browser a few minutes ago because we were alerted to a violation. But if you didn't look at the error there, you can go directly to Disk Utilization Violations here and troubleshoot your issues similarly to how we did in the Violations Browser. The difference is that the Violations Browser breaks down the violations by how they impact your business entities, applications, data centers, and so on, so you can troubleshoot by business priorities, while Disk Utilization Violations lets you easily see your most critical utilization issues and troubleshoot from the disk utilization violation back to the hosts. You can also add columns to show applications and business entities if you want.
Let's take a look at how you can use Disk Utilization Violations to quickly identify and drill down to where your issue is.
Select Disk Utilization Violations from the Assurance menu.
Sort the Utilization column descending to bring your heaviest utilization to the top.
Here you see the utilization of each disk that exceeded the Disk Utilization threshold we set earlier in our global policy. For each violation you see the disk array, the hosts that access this disk, the date and time the violation occurred, and the percentage of utilization, as well as IOPS and throughput.
Select the disk with the highest utilization.
Now select the Volume Usage of Disk microview to get details on volume usage and performance.
Sort the Disk IOPS column descending and select the top Volume Usage of Disk. Here I see the volume with the highest usage, along with the disk throughput and percentage info by volume and host.
Select the Switch Port Performance microview. I see that my load appears to be balanced (Distribution column) across the storage ports, so this is most likely not a SAN or network configuration issue.
Since this disk did cause a utilization violation, I can identify host candidates that may be causing the high utilization on the disk, OR I can see that the disk may have too many volumes carved from it, in which case I may need to spread that load out across more disks.
4 PERFORMANCE
OnCommand Insight provides performance information from end to end. This is different from the violations we discussed above, in that it provides pure performance information for Volumes, Internal Volumes (FlexVol volumes), Storage Pools (aggregates), and Disks. Performance also covers VMs, switches, ESX, Hyper-V, VMDKs, and datastores. From here we can troubleshoot congestion, contention, and bottlenecks; identify heavily used storage pools, volumes, disks, and SAN ports; find candidates for physical-to-virtual host virtualization; and optimize your storage and tiering.
From the Navigation pane or the dropdown menu, expand the Performance menu.
Here we see that OnCommand Insight collects and shows you storage performance, switch performance, datastore performance, VM performance, and even application performance as it relates to storage performance.
4.1 STORAGE PERFORMANCE
From the Performance menu, select Storage Performance.
Sort the Top Volume IOPS column descending (if not already done).
Select the array called Sym-0000500743… in the main view (it should be near the top).
Use the scroll bars to see more performance info in all windows.
Use the horizontal slide bar in the main view to see the volume response time and IOPS columns, as well as the disk utilization and IOPS columns (far right). Notice there is no internal volume performance information, because the EMC Symmetrix does not contain any. We'll look at a NetApp array shortly to see internal volume (FlexVol) performance.
Now, in the main view, notice the column called Volume Partial R/W. This indicates there are volumes on that array that are misaligned (we'll see more detail later).
Select the microviews at the bottom to show the details of disk performance and volume performance. Which microviews did you open? (Hint: view below.) Notice this provides detailed throughput, IOPS, and response times at the volume and disk level.
Close the Disk Performance microview.
Select the column customize icon in the header of the Volume Performance microview.
Use the vertical scroll bar to view all the columns that can be added to or removed from this report.
Select the Partial R/W and Storage columns and click OK. This adds columns to the Volume Performance report showing you each volume on each array that is misaligned. (Note: you can get a complete list of all your misaligned volumes by selecting all the arrays in the main view above.) Additionally, you can group the volumes by storage to make it easier to view all the misaligned volumes across your entire enterprise by array. (See figure below.)
Now select the Symmetrix-FAST array and toggle the Chart microview on. Here you see OnCommand Insight showing EMC FAST auto-tiering. You can also see NetApp Hybrid Aggregates by selecting a NetApp array. We can also chart this performance over time.
OnCommand Insight provides complete end-to-end performance views through virtualized storage. Let's take a look.
Select the storage array called Virtualizer from the main view. (Note: this is a V-Series machine, but OCI provides the same visibility through other virtualizers as well.)
Toggle on the Virtual Machine Performance microview.
Toggle on the Backend Volume Performance and Datastore Performance microviews.
Use the slide bars at the bottom of the microviews to see more of the performance columns in each view. (Whoops, there is a red mark in the Latency column. We will analyze this later.)
I know this is a bit busy, but I wanted to demonstrate that you can have deep performance visibility from the VM, through the datastore, to the frontend virtualizer array, and through to the backend volumes. You can also drill performance down to the disks on that backend array, and you can select the Switch Port Performance microview to visualize the performance on the SAN. So you can see very deep performance information from end to end. We will "analyze" these performance views from end to end a bit later.
4.2 SAN PERFORMANCE (SWITCH PORT PERFORMANCE)
Switch performance is the actual performance on the SAN at the switch.
Select Switch Port Performance from the Performance menu. OnCommand Insight knows whether the switches are connected to arrays or hosts, so it shows you the performance in the context of the host or array instead of from the switch perspective.
Using the dropdown at the top of the table, group the main view by "Connected Device Type, then Name".
Using the dropdown next to it, set the timeframe to "Last Week" and hit the refresh icon to the right.
Sort the Distribution column descending (arrow pointing down).
Expand Hosts.
Expand hosts ny_ora1 and exchange_ny1.
If you look at the Value and Distribution columns, you can see how the HBAs are balanced on these hosts. On host ny_ora1 you see three HBAs that are balanced very well. But looking at host exchange_ny1, you see that one of your two HBAs has over 95% of the traffic load on it, while the other one has less than 5% of the traffic. So you can see an imbalance of the load across your HBAs. Perhaps the multipath software is not configured correctly, does not work, or is installed but not turned on. However, also note that one HBA is 4Gb and the other is 2Gb. The admins may have purposely configured this host's traffic to compensate for the slower HBA.
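The Distribution column is just each port's share of the total traffic. A quick sketch of that calculation (the HBA throughput numbers are invented):

```python
def distribution(values):
    """Percentage share of total traffic carried by each port/HBA."""
    total = sum(values)
    return [round(100 * v / total, 1) for v in values]

# e.g. exchange_ny1's two HBAs: one carries almost all of the load
print(distribution([380.0, 17.0]))  # [95.7, 4.3]
```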
Select the Port Performance Distribution and Port Performance microviews to view this analysis over time.
Select exchange_ny1 from the main view above. View the performance and distribution of both HBAs. If you select one or the other, the performance and distribution charts change to show you the details of what you've selected. This behavior is the same throughout the OnCommand Insight GUI.
4.3 CANDIDATES FOR HOST VIRTUALIZATION BASED ON ACTUAL PERFORMANCE
This performance view, from the host SAN perspective, shows you which are your busiest servers and which are candidates for virtualization.
Toggle off the two performance charts.
Collapse the expanded columns using the Collapse All Groups icon on top of the main view.
Expand Hosts again. Notice your busiest servers are at the top of the list.
Use the vertical slide bar to go to the bottom of the host list to see your least busy hosts. As you see here, there are many hosts down near the bottom that have hardly any traffic. Note: if you have a virtualization project going on, you can very quickly isolate which physical hosts don't have much traffic to their applications and conduct your due diligence on those applications for possible relocation to a VM environment.
You can also use the same information here to choose which ESX hosts are good candidates to receive those applications, based on how much traffic they are generating on the SAN.
4.4 STORAGE ARRAY PERFORMANCE BASED ON SAN TRAFFIC
We use the same logic and methods to optimize the traffic across the storage ports of the arrays.
Collapse the Hosts section and expand the Storage section.
You can see the busiest arrays at the top.
Expand storage array XP 1024 to see the traffic flow through the storage ports. In this case, over 80% of the traffic is going across two of the six ports on the storage array. Not very well balanced. You can rebalance this traffic, OR, using this information, you can select a lesser-used storage port to provision your NEXT tier 1 application to. This helps you intelligently provision and optimize your environment using real traffic analysis.
4.5 STORAGE TIERING AND ANALYSIS
Just as you did with the hosts, you can see the storage arrays that are NOT so busy.
Scroll to the bottom of the storage array list.
There are several expensive tier 1 Symmetrix and other arrays at the bottom of this list that have very little traffic accessing them. These arrays may have lots of data on them, but nobody is using it. Armed with this information, you could take a look at the application data on these expensive tier 1 arrays and move the applications to less expensive tier 2 or tier 3 arrays, OR archive the data. Then you can decommission or repurpose these expensive arrays. (LOTS of ROI potential here.)
4.6 SWITCH ISL TRAFFIC VISIBILITY AND OPTIMIZATION
OnCommand Insight shows you only the ISLs (Inter-Switch Links) under the Switches category.
Collapse the Storage category in the main view and expand the Switch category.
Expand Switch 78 and hcis300.
As we saw with hosts and arrays, we can see exactly how well balanced the traffic is across the ISLs. Switch hcis300 is well balanced, but on Switch 78 we see that 90% of the traffic is going across one switch link and only 9% of the traffic across the other. If this is a trunk, it is severely out of balance.
We also see which are the busiest and least busy switches. This allows us to balance out (optimize) our environment as well as weed out the least busy switches.
4.7 VIRTUAL MACHINE AND DATA STORE PERFORMANCE: TROUBLESHOOTING END-TO-END PERFORMANCE ISSUES USING "ANALYZE PERFORMANCE"
Let's put all this performance information to good use.
USE CASE: I may have gotten a call from a user complaining that the application on VM-70 is running slow, or I may have received an alert from a threshold being breached. Let's troubleshoot the problem.
Select Virtual Machine Performance.
Select Custom from the "Timeframe" dropdown menu next to the grouping menu. Enter the dates January 1, 2012 through now.
Then hit the green refresh button next to the dropdown.
Sort the VM Disk Top Latency column descending to get the longest latency at the top.
Here we see that, in fact, VM-70 does not appear to have any performance issues, but we do see very high CPU, memory, and datastore latency on VM-60 and VM-61.
Look at column 2. The common factor between VM-70 (the user complaint) and VM-60 is DS-30.
Open the Datastore Performance microview to validate the high latency time.
Right-click on VM-60 and select Analyze Performance.
This opens an analysis of everything associated with VM-60 and DS-30.
See the tabs across the top of the window. Each of these tabs provides in-depth visibility into performance within that category.
Selecting the Disk tab, I see that although I have a few high "top" utilization values, overall utilization and IOPS are relatively low, so I can rule out a hot-disk issue.
Select the Volumes tab and the Internal Volumes tab. I see there are some relatively high top response times but still very low IOPS, which tells me other factors are affecting response time and the slowness of the application on VM-70.
Selecting Backend Volumes, we see the storage is virtualized and we can see the performance on the backend volumes here. I see some possibly higher IOPS, but still no glaring issues in performance.
To make sure I don't have a SAN problem, I select the Switch Performance tab. It shows an imbalance between the 2Gb HBAs on ESX1 (where VM-60 and VM-70 live) and a potential optimization or outage issue, but no gridlock.
Select the Hosts tab. This tab shows me that host ESX1 is the same host that holds VM-60 and VM-70. VM-60 appears to be causing very high CPU and memory usage, which is causing contention with time sharing during disk access, thus creating high disk latency. But the disk IOPS are still very low.
We can deduce that VM-60 is probably not sized right for the application that is driving it so hard. This is probably what's causing the disk latency issue, so the chances of an actual disk issue are slim.
4.8 VM PERFORMANCE
VM Performance helps you troubleshoot the same scenarios. Here you can understand what's going on in the whole environment.
Select Virtual Machine Performance.
Sort the Top Disk Latency column in descending order so the largest latency rises to the top. In this case, VM-61 is chewing up a lot of memory and a lot of CPU time but using low disk IOPS. The VM itself appears to be causing the latency issues.
Select VM-61. You can open a microview and see the VMDK performance as well.
Add the Chart microview.
You can also break it out by volume performance and datastore performance, giving you a more holistic picture of the environment and helping you troubleshoot to resolution.
The takeaway is that you can troubleshoot performance issues from many different angles and go in many different directions to quickly narrow down the problem.
4.9 APPLICATION AND HOST PERFORMANCE
You can add your applications and hosts to any of these performance views to help you understand how your performance is affecting your applications. That is important to the business customer. You can drill down and understand where the performance issue is, with visibility from the application all the way to the disks.
Scroll down to ESX1.
Use the horizontal slide bars in the main view and microviews to see the performance info.
OnCommand Insight shows you performance from the host perspective all the way back to the storage, but remember that it does not have agents on the hosts, so it cannot show you the details of the performance on the host itself.
Review questions:
What is the value of Analyze Performance?
In what areas can we view performance metrics under Analyze Performance?
5 PLANNING TOOLS
5.1 TASK AND ACTION PLANNING AND VALIDATION
OnCommand Insight has two planning tools to help you plan, validate, and monitor changes in your environment. One is a change management tool, and the other is a migration tool for switches only.
The change management tool (or "what-if" tool) helps you create tasks, and actions within those tasks, using a wizard. It helps you logically lay out the changes that you need to make, test and validate those changes before you make them, and monitor the progress of the changes as you make them. This significantly reduces your risk when making changes, because you can pretest them before you make any actual changes in your environment.
NOTE: Remember, OnCommand Insight is a READ-ONLY tool, so it does not perform any active tasks. Use it in the planning, validation, and execution monitoring of your change management.
Select the Planning menu.
Select Plans to access the tool.
Select the task ID oadmin 01/08/2007 - Replace HBA Clearcase1.
Notice the Actions list for the task. These actions are generated by you to help you logically and accurately list out the tasks.
To add more actions, simply right-click in the action area and select "Add Action".
In the new action window, scroll down and select the action you want to perform. You can add a description and other parameters, then select OK.
Then you can pre-validate the actions to ensure you know the results of each action BEFORE you actually perform the task. To do this, right-click the task and select Validate Task.
As you see below, OnCommand Insight validates each action against the current configuration in your environment to show what has been completed correctly (GREEN CHECKMARK), what is not completed (BLANK BOX), and what was not completed correctly (RED X).
When you build the action list, OnCommand Insight automatically compares your planned changes to your existing environment and anticipates any future violations that could occur if you made these changes without correcting the planned actions, OR violations that already exist in your environment.
Once you complete your list of action items, you can right-click and validate the actions as many times as you want until they are completed. OnCommand Insight validates every one of these actions. It will show you whether the actions are complete, done incorrectly, or not completed at all. It gives you a preview of potential issues before you make the changes, thus lowering your risk.
5.2 SWITCH MIGRATION TOOL
The migration tool provides you with instantaneous visibility into all of the environment and business entities that will be affected by a migration to new or updated switches. Say you want to just update the firmware on a switch. What if it goes down in the middle of the upgrade? What does it affect in your environment? Knowing this ahead of time can reduce your risk by giving you the complete picture of who and what will be affected by the interruption.
The migration tool allows you to tell OnCommand Insight which switches you want to upgrade or replace. Because OnCommand Insight knows all the hosts, storage arrays, volumes, business units, and applications that are affected by this change, it can provide you with the current violations as well as the future violations that will occur when the switches are pulled out. This enables you to validate the total impact of the changes you want to make BEFORE you make them, so you can reduce your risk by fixing issues before they occur.
NOTE: Remember, OnCommand Insight is a READ-ONLY tool, so it does not perform any migration tasks. Use it in the planning and execution monitoring of your migration.
Under the Planning menu, select Migrations. This shows you the migration tasks already created and the existing impact of the proposed changes on your business entities.
To add a new task, right-click in the task area and select Add Task.
Complete the task details above and click Next to select the switch(es) to migrate.
Select the switches to be updated or replaced and click Finish.
Select the new task in the main screen and use the microviews to see the affected paths, impact, and quality assurance views.
Using this information, you can speed up switch migrations, because it cuts the due diligence time, and you lower your risk because you know the impacts before you take any actions.
6 DATA WAREHOUSE
6.1 INTRODUCTION AND OVERVIEW
Let's introduce you to the data warehouse. We'll talk about the Datamarts and navigation, then we'll go into the reports, and we'll finish by showing you how to create ad-hoc reports using Query Studio.
The data warehouse is made up of several Datamarts. Datamarts are sets of data that relate to each other.
Open a browser and go to httplocalhost8080reporting
Log on using admin/admin123
If you receive this page, uncheck "Show this page…" and select My Home.
Data warehouse (DWH) Home Page Public Folders
The data warehouse has several built-in datamarts throughout. Above you see the three primary datamarts: the Chargeback Datamart, the Inventory Datamart, and the Storage Efficiency Datamart. Additionally, there are two folders which contain other datamarts for Capacity and Performance.
Select the Capacity 6.3 folder.
As you can see, there are other capacity-related datamarts, including the Internal Volume, Volume, Storage and Storage Pool, and VM Capacity datamarts. Datamarts provide you with easy-to-use data elements related to those specific categories, making it easier for you to use the existing reports and, more importantly, to create your own custom reports using the drag-and-drop approach we'll show later in this lab.
Select the Storage Capacity Datamart
DataMart and folders
Public Folders
There are 4 folders located within EVERY datamart. Most built-in reports are in the Reports folder. Any custom reports you create MUST be saved in the Customer Report or Customer Dashboard folders so that they are preserved during upgrades.
Select Dashboards (notice the BREADCRUMBS to help you navigate)
Notice which dashboards are located in the folder.
6.2 PLAN - CAPACITY FORECAST DASHBOARD
The data warehouse has over 200 built-in dashboards and reports Letrsquos take a look at a few
The capacity forecast dashboard provides a history of how storage has been used, as well as trends and forecasts into the future, broken out by data center and by tier.
Select the Capacity Dashboard. This may take a bit of time to paint, so be patient.
The capacity forecast dashboard provides you with trending and forecasting of your capacity across your entire environment. NOTE: your data may vary from the picture depending on the demo DB you are using and the date (because it's a trending chart).
While we are at it, let's also stage the Tiering Dashboard in a new window by holding the Shift key and selecting the Tiering Dashboard, so we can discuss it as well in a few minutes.
Folders
Dashboard
Reports
Navigation
Bread crumbs
When it first opens, you see in the upper left the Capacity Consumption Forecast report by datacenter and tier. The initial view shows how much storage is left in each datacenter, by tier, before it reaches 80% of capacity (adjustable by the user). The graph on the right depicts the usage trending and forecasting over time. The "Reset Selection" button resets the graphic to show storage trending across the entire enterprise.
Select the Tokyo / Gold-Fast block on the matrix. Notice the graph at the right changes to reflect the storage consumption trending and forecasting for that tier at that datacenter.
Reset the Storage Capacity Trend chart by clicking Reset Selection next to the matrix chart. The chart on the right will show the trending and forecasting for the entire enterprise.
Scroll down the dashboard to view the list of reports on the right side. Each dashboard has a list of related reports in the lower right-hand corner. You can select from any number of different reports to get the detailed information that you need.
The dashboard also contains some dial graphics showing you storage consumption and capacity in your enterprise and each datacenter
Continuing down the left side of the dashboard, these charts show you business-level storage consumption by business entity. Here we can drill down to see usage by tenant, line of business, business unit, and project.
Right-click in this graphic and you can drill down to view storage usage by line of business; drill again to view it by business unit and by project.
As you can see, you very quickly get really detailed information on consumption by your business entities, from tenant, LOB, business unit, and project down to application.
6.3 TIER DASHBOARD
Let's take a look at the Tier Dashboard that we opened a few minutes ago by selecting it from the taskbar at the bottom of your Windows screen.
Note Your data may vary depending on the database used for this demo
This dashboard gives us a different perspective on how storage is growing and how it is being used. As you see, the gold tier has remained relatively stable over the past few months, while gold-fast storage, which is more expensive, has grown considerably over the past couple of months. This tells you how your tiering initiatives are progressing. Bronze, which has hardly grown at all, could be an indication that we're spending too much money on storage. You might want to review your storage usage using OnCommand Insight to see how the storage is being consumed, and by whom.
Scroll down. Let's look a little closer. OnCommand Insight shows storage usage by business unit, application, and tier. This enables you to understand how storage is being used. You can also view it by datacenter, tier, and business entity.
As we did in the last report, you can right-click and drill down to look at consumption by tenant, line of business, business unit, project, and application. You can understand how your data is being consumed at multiple levels and from multiple aspects.
Select the "Return" icon at the top right of the Tier Dashboard to return to the folder.
There is a new Storage Tier Report located in the Storage and Storage Pool Data Mart Letrsquos take a quick look at it
Use the breadcrumbs to navigate back to the Capacity 6.3 folder.
Then select Storage and Storage Pool Capacity DataMart and Reports folder
Next, select the Storage Capacity By Tier report to view the report below. This report shows your capacity by tier and how it trends over time. It also provides detailed and summary tables at the bottom showing each array's tiers, how much capacity is used, and the percentage (lots of information on a single report).
6.4 ACCOUNTABILITY AND COST AWARENESS
The standard data warehouse chargeback reports are more about accountability than about chargeback itself. We'll show you this now. We'll also show you how to create your own powerful "custom" chargeback/showback reports using Business Insight Advanced later in this lab.
Select Public folders in the breadcrumbs on top left of the Data warehouse window
Select Chargeback Datamart
In the Chargeback Datamart, select the Reports folder to access various reports that show capacity and accountability.
Select Capacity Accountability by Business Entity and Service Level Detail. Here you have the option to customize this report to your needs by selecting service levels, resource types, applications, and host and storage names. You also have the option of selecting the business entity, using the dropdown to select any or all of the business entities and projects.
Select All in each category to give you a good representation of the in-depth reporting. Then click Finish.
Chargeback DM
The report provides a very detailed view of capacity utilization: the business entity, the application and the host it's running on, the storage array, the volume, and the actual provisioned and used storage. The report is grouped by business unit as well as application, giving you a good representation of who's using what storage.
Note the scroll bar for scrolling on page 1; you can also use the Page Up/Page Down links at the bottom to go to page 2, etc.
Select the Return Icon in the upper right to return to the folder of reports
6.5 UNCHARGED STORAGE
You can also generate reports that help you understand what storage is NOT being accounted for.
Select "Capacity Accountability by Uncharged Capacity per Internal Volume". This provides you with a complete listing, by array and volume, of how much storage is not being charged or accounted for.
You get FULL accountability of which storage is being accounted for and which is NOT, across the entire enterprise, regardless of storage vendor.
6.6 IOPS VS CAPACITY REPORTING IN THE DATA WAREHOUSE
Let's look at performance versus capacity, and at orphaned storage by last access. This adds another dimension to how your storage is being used.
Open the Performance Datamart (hint: use the breadcrumbs to select Public Folders and then select the Performance Datamart).
Select the Internal Volume Daily Performance folder. This provides a really good pictorial view of how your storage is being used.
Select Reports and select Allocated used internal volume Count by IOPS Ranges. This provides a capacity-versus-IOPS report, which is very interesting.
Select the Last Year time period.
Select all storage models and tiers, and click Finish. Selecting all arrays and all tiers gives you a full view of how your storage is being used (or not being used…).
Looking at the results, remember this is storage accessed over the past year. The resulting report shows you all the storage that has (or has not) been accessed in that time.
As you see from the first bar, there are over 7,300 volumes that have not been accessed in the past year. If we look at it in terms of size, over 3.4 PB has had zero accesses in the past year. Note: this is actually real customer data, but the names have been sanitized.
You can see how impactful this is: over 3.4 PB of storage has had zero use for a year. This information enables you to start making business decisions about the storage and to better understand how it's being used, so you can reclaim and re-purpose some of that storage. (Talk about ROI!)
6.7 DIGGING INTO THE DETAILS
These charts are really nice, but you need the details to effectively work on identification and recovery. OK, let's go look at the underlying details.
Go back to the Volume Daily Performance 6.3 folder and drill down to Reports (hint: it's in the Performance Datamart).
Select the Array Performance report. This gives you a complete breakdown of the performance for all storage, from the arrays all the way down to the volumes.
Select one year and set the IOPS parameter you want to filter on (I usually start with the default).
This report starts with the Orphan Summary
Select Page down to view the Storage array summary
Over 7,300 vols; over 3.4 PB
Storage access for a year
As you see, this is pretty high-level. It shows the total amount of raw and allocated capacity in each storage device versus the total IOPS and the max IOPS actually used over the past year. This tells a very compelling story, but it's still high level.
Page down a few pages to reach the bottom of this section. You see a Glossary of terms explaining the column headings.
Now continue to page down to the Host tables. These show you the hostname, the raw and allocated capacity by host, and the IOPS accessed over the past year. This is more detail than the Storage tables above.
Page down past the host tables and you get the orphaned-volumes perspective. Here is a great deal of detail that you can use. These are all the volumes that have not been accessed in a full year. It shows you the array name, volume capacities, and hostname, as well as the applications and tiers, for everything that has not been accessed in the last year.
Page down to the "Volumes by IOPS" tables (may be several pages down). These show you the storage array, volume, capacity, host, application, tier, and the max and total IOPS. So we can say it's a pretty well-rounded report that shows you actual usage (or lack thereof), so you can go reclaim the storage that is not used.
6.8 VM CAPACITY REPORTING
There are several different reports in the VM capacity Datamart
Navigate to the VM Capacity 6.3 Datamart.
As you see we have several reports built-in here already
Select VM Capacity 6.3 and then navigate into the Reports folder.
Select VM Capacity Summary
Select All so we see the VM capacity across the entire enterprise (spanning multiple vCenters).
The results show all the VMs: the datastore, the actual capacity, the VM names, the provisioned storage, and the commit ratio of each VM across your entire environment. NOTE: I paged down to the bottom so you can see the total storage and commitment across your whole enterprise, plus a glossary of terms.
Select the "return" button in the upper right corner of the report (it looks like a left-turn arrow).
Next, select the Inactive VMs report to show VMs that have not been accessed in a defined period of time (default 60 days).
Set this time threshold and click Finish
This is an excellent report showing you which VMs are powered off, how long they have been powered off, and how much capacity each one of those is holding that nobody else can use. It gives you all the details, including the datacenter, VM OS, ESX host, cluster, VMDK, and how long it's been powered off. Armed with this information, you can go recover these VMs and reclaim the storage.
7 CREATE AD-HOC REPORT
Let's show you how easy it is to create custom reports in the data warehouse.
7.1 HOW TO CREATE A CUSTOM SHOWBACK/CHARGEBACK REPORT USING
BUSINESS INSIGHT ADVANCED
Below is a great example of the custom chargeback or showback report that you will create. It shows usage by business entity and application, including the variable cost of each VM based on its configuration, fixed overhead, and storage usage.
STEPS TO CREATE THIS REPORT
Watch a video on how to create this report. Note: You need a user name and password for this community. To obtain them, click the Become a Member link.
The OnCommand Insight Reporting Portal is accessed through http://<reporting-server>:8080/reporting
Enter User name and Password credentials
From the Welcome page select My home
From the Launch menu (at the top right corner of the OnCommand Insight Reporting
portal) select Business Insight Advanced
From the list of all packages that appears, click the Capacity <version> folder, and then click VM Capacity <version>.
Create a new report by selecting New from the dropdown in the upper left corner or
Create New if you are on the Business Insight Advanced landing page
From the pre-defined report layouts in the New pop-up, choose List and click OK.
In the lower right pane select the Source tab and expand Advanced Data Mart
from the VM Capacity package
From the Advanced Data Mart, expand Business Entity Hierarchy and Business Entity, and drag Tenant onto the report work area.
Collapse Advanced Data Mart and expand Simple Data Mart
From Simple Data Mart, drag Application onto the report work area to the right of the Tenant column. (TIP: Make sure you drop it on the blinking gray BAR to the right of the previous column, or you will get an error.)
Now we are going to drag multiple columns to the palette to save time building the report.
We will be reporting on the total number of processors (cores) and the memory configured for each VM. So let's grab the following elements from the VM Dimension under the Advanced Data Mart.
From Advanced Data Mart expand VM Dimension
Select the next columns IN THE FOLLOWING ORDER:
From Advanced Data Mart > VM Dimension, hold the Control key and select the following columns (in order):
o VM Name
o Processors
o Memory
Click and drag VM Name onto the report work area to the right of the Application column. NOTE: All the columns should follow in the order you selected them, similar to the screenshot below (your data will differ, but the columns will be the same).
Now let's bring capacity information onto the report.
From Simple Data Mart, hold the Control key and select the following columns (in order):
o Tier
o Tier Cost
o Provisioned Capacity (GB)
Click and drag the Tier column onto the report work area to the right of the Application column. NOTE: All the columns should follow in the order you selected them, similar to the screenshot below (your data will differ, but the columns will be the same).
To create a summary of cost per GB, hold the Control key and select the Tier Cost and Provisioned Capacity (GB) columns.
Then right-click the Provisioned Capacity column, select Calculate, and select the multiplication calculation.
Business Insight Advanced creates a new column for you, completes the calculation, and puts it in the report.
Next, let's format and re-title the column.
Right-click on the new column header and select Show Properties.
In the lower right corner, scroll down to the bottom of the Properties box and select the ellipsis on the Data Item Name box. Change the name to Storage Cost and click OK.
Note the column heading is now Storage Cost
Now select one of the numeric values in that column and select Data Format
ellipsis from the properties box in the lower right corner
From the Data Format dialog box select currency from the Format type dropdown
As you see from the Properties dialog box, there are lots of options you can set to format the currency numbers in this column. The default is USD, so let's just click OK to accept the default. You will see the column reformat to USD.
Here is our current report. Let's filter out storage that is NOT being charged.
Select any BLANK cell in the Tier Cost column and click the filter icon in the top toolbar.
Select Exclude Null
Here is our current report. Notice that all the rows that had NO cost associated with their tiers are removed, leaving you with only the storage that has charges associated with it. (TIP: in another report you can reverse the logic and show only storage that is NOT being charged.)
You can also format the Tier Cost column as USD currency if you want.
OK, that was easy, but it's not complete. Let's add other cost factors into your chargeback report: the cost of VM service levels by configuration, and fixed overhead costs used by each application.
ADDING VARIABLE COSTS PER VM TO YOUR CHARGEBACK REPORT
Let's say the customer wants to charge per VM based on the number of CPUs and memory it's configured with. To do that, we first need to create a VM service level comprised of the number of CPUs and the memory configured for each VM, then allocate a cost per service level.
To create a VM service level, we are going to drop in a small conditional expression to build the service levels per VM. This is an easy example of the flexibility of Business Insight Advanced in creating reports. (DON'T panic: you can skip the conditional expression and just put a fixed cost on each VM if you want; see the overhead example later on. But humor me here in this lab.)
Select the Tier column to mark where we want to insert the new columns.
Select the Toolbox tab in the lower right corner and double-click the Query Calculation icon.
In the Create Calculation dialog box, name the column VM Service Level, select Other Expression, and click OK.
In the Data Item Expression dialog box, copy and paste the following VM Service Level conditional expression into the Expression Definition box and select OK. (Note: if you are remoted into the OnCommand Insight server, you may have to create a text document on the OnCommand server desktop to paste this into before pasting it into the Expression box.)
Below is an example of the conditional expression that gives you the if-else condition for the VM service level:
IF ([Processors] = 2 AND [Memory] < 2049)
THEN ('Bronze')
ELSE (IF ([Processors] = 2 AND [Memory] < 4097)
THEN ('Bronze_Platinum')
ELSE IF ([Processors] = 4 AND [Memory] < 8193)
THEN ('Silver')
ELSE IF ([Processors] = 4 AND [Memory] > 8193)
THEN ('Silver_Platinum')
ELSE IF ([Processors] = 6 AND [Memory] > 8191)
THEN ('Gold')
ELSE IF ([Processors] = 8 AND [Memory] > 16383)
THEN ('Gold_Platinum')
ELSE ('tbd'))
Business Insight Advanced will validate the conditional expression (nice to know if you got it right), create the column called VM Service Level, and populate it based on the query. (If you get an error, your conditional expression probably has a syntax or other error.)
You will see a new column added called VM Service Level with the various service levels for each VM, based on the number of CPUs and the memory each one has. (At this point there may be duplicates in the list, but not to worry; we are not finished formatting or grouping the report.)
Next, let's add a column that calculates the cost per VM based on the service levels we just established.
Select the Toolbox tab in the lower right corner and double-click the Query Calculation icon.
In the Create Calculation dialog box, name the column Cost Per VM, select Other Expression, and click OK.
In the Data Item Expression dialog box, paste the conditional expression for Cost of VM (below) into the Expression Definition box and select OK.
Example of the conditional expression for Cost per VM:
IF ([VM Service Level] = 'Bronze') THEN (10) ELSE (IF ([VM Service Level] = 'Bronze_Platinum') THEN (15) ELSE IF ([VM Service Level] = 'Silver') THEN (20) ELSE IF ([VM Service Level] = 'Silver_Platinum') THEN (25) ELSE IF ([VM Service Level] = 'Gold') THEN (40) ELSE IF ([VM Service Level] = 'Gold_Platinum') THEN (55) ELSE (30))
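This mapping is just a lookup table with a catch-all default; a hypothetical Python sketch of it (the dollar amounts are the lab's example values, and 30 is the expression's ELSE branch):

```python
# Example cost per VM service level (lab's example values, not product defaults)
VM_COST_BY_LEVEL = {
    "Bronze": 10,
    "Bronze_Platinum": 15,
    "Silver": 20,
    "Silver_Platinum": 25,
    "Gold": 40,
    "Gold_Platinum": 55,
}

def cost_per_vm(service_level):
    # Any unrecognized level (e.g. 'tbd') gets the ELSE value of 30
    return VM_COST_BY_LEVEL.get(service_level, 30)
```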
You will see a new column added called Cost Per VM, with variable costs for each VM based on its service level.
Next, format the data in the Cost Per VM column as USD currency, as you did above.
ADDING FIXED OVERHEAD COSTS TO YOUR CHARGEBACK REPORT
Let's say the customer has determined that the total cost of overhead (including items like heat/AC, floor space, power, rent, operations personnel, helpdesk, etc.) is $24 per VM. Let's create a column called Cost of Overhead and apply this fixed cost. (Note: you can do this for any fixed costs, rather than use SQL, as well.)
Select the Toolbox tab in the lower right corner and double-click the Query Calculation icon, as above.
In the Create Calculation dialog box, name the column Cost of Overhead, select Other Expression, and click OK.
In the Data Item Expression dialog box enter a cost of 24 in the Expression
Definition box and select OK
You will see a new column added called Cost of Overhead, with 24 for each VM. (Note: at this point there may be duplicates in the list, but not to worry; we are not finished formatting or grouping the report.)
Next, format the data in the Cost of Overhead column as USD currency, as you did above. Then drag the column header and drop it to the right of the Storage Cost column, as shown below.
Subtotaling, naming, and saving the report
Now that we have the cost per VM, the overhead, and the cost of storage usage by tenant, application, and VM, let's sum the total costs and finish formatting the report by tenant and application.
Hold the Control key down and select a numeric cell in each of the Cost Per VM, Storage Cost, and Cost of Overhead columns. Right-click one of the numeric cells, select Calculate, and choose the add function for the three columns.
This will create a new column called "Cost Per VM + Storage Cost + Cost of Overhead" and calculate it for each row.
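Arithmetically, the new column is just the per-row sum of the three cost columns. A minimal sketch under the lab's assumptions (Storage Cost = Tier Cost × Provisioned GB, overhead fixed at $24 per VM; the function and parameter names are hypothetical):

```python
def total_cost_of_services(cost_per_vm, tier_cost_per_gb, provisioned_gb,
                           cost_of_overhead=24):
    """Per-row total, mirroring the report's 'Total Cost of Services' column."""
    storage_cost = tier_cost_per_gb * provisioned_gb  # the Storage Cost column
    return cost_per_vm + storage_cost + cost_of_overhead

# e.g. a Bronze VM (cost 10) on 100 GB of $0.50/GB storage:
# 10 + 50 + 24 = 84
```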
Now format the column as USD currency and retitle it "Total Cost of Services".
Name the report "Total Storage, VM and Overhead Cost by Tenant and Application Chargeback (Showback)" by double-clicking the title area.
Now save it to the Customer Report folder using the same name
FORMATTING AND GROUPING THE REPORT BY APPLICATION AND TENANT
We are not done yet. Now we need to format the report by grouping, subtotaling, and totaling by tenant and application.
Hold the Control key down and select the Cost Per VM, Provisioned Capacity, Storage Cost, Cost of Overhead, and Total Cost of Services columns.
Select the Total icon from the Summary dropdown.
If you page down to the bottom of the report, you will see total columns. We'll clean up the summary rows in a minute.
Let's group the report by Tenant and Application.
Hold the control key down and select Tenant and Application columns
Select the Grouping ICON from the top toolbar
CLEANING UP THE REPORT AND RUNNING IT
To clean up the report, right-click and delete the summary ROWS (not columns).
Then go to the bottom of the report, hold the Control key, select both summary rows, and right-click and delete them. (Leave the TOTAL rows.)
Save the report
Now let's run the report to see how it looks.
Select the Run icon from the toolbar and run the report as HTML. (Note the other formats you can run it in if you want.)
The report will appear in its final format, as shown here. I've paged down in the report below to show you subtotals; you can page to the bottom and see the totals by company and the total of all resources charged.
These reports are extremely flexible. Notice the drill-down link in the Tenant column (pictured above in the red circle). If you click the LINK, you will drill down from Tenant to Line of Business, then to Business Unit, etc. If you right-click on the link, you can drill up as well.
You can now schedule this report to run and distribute in various formats like any other OnCommand Insight Data Warehouse report
Remember, now that you have created this report, every time you run it, it will provide the latest usage information. You can automate this report by scheduling it to run and email it to recipients, etc. Lots of flexibility…
7.2 OTHER OPTIONS FOR AD-HOC REPORTS USING QUERY STUDIO
You can also create simple Ad-hoc reports by using Query Studio A very simple example is shown here
Log onto the data warehouse using admin/admin123 (you must be logged on as admin to use Query Studio).
From Public folders select the Chargeback Datamart
Select the Launch menu in the upper right corner of the view, and select Query Studio.
The datamart is split into a "simple datamart" and an "advanced datamart". The simple DM contains the elements that most users use for reports; the advanced DM contains all the facts and dimensions for all the elements. At this point we'll create this report using the simple DM to show you how easy it is.
Expand the Simple DM and do the following:
Click and drag Business Unit to the palette.
Click and drag the Application element to the palette. You see the applications line up with their proper business units.
Click and drag Tier over to the palette to organize the storage usage by tier.
Click and drag "Provisioned Raw by GB". (You can select megabytes or terabytes as well as gigabytes; I've selected GB because this is from a volume and application perspective.)
To calculate cost, we need to add the "Tier Cost" element to the report.
Click and drag the "Tier Cost" element over, and place it between the Provisioned Raw and Tier columns.
To filter out any storage without an associated tier cost, right-click the heading of the Tier Cost column and select Filter (see below for reference):
o Select "Show only following" (default)
o Select "Missing values" to expand it
o Select "Leave out missing values"
o Select OK
See Results below
Now let's calculate the total cost of usage by GB per application.
Hold the Control key and highlight the "Provisioned Capacity" and Tier Cost columns until they show yellow.
Select the green Calculation icon at the top of the edit icons, or right-click on the columns and select "Calculate".
In the calculation window, select multiplication, title the new column "Cost for Storage", and click Insert. It creates a new column and completes the calculation.
To format the column right click on the new column and select Format Data
Select currency, the number of decimal places (usually 0), and the 1000s separator, and click OK. See how the column is formatted now.
Double-click the title on the report and re-title it "Chargeback by Application and BU".
Now, you don't really need the Tier Cost column, so you can delete it by right-clicking on the column and selecting Delete.
This is a good raw report, but now let's make it more useful.
To group storage cost by Business Unit and Application:
Select the Business Unit column (it turns yellow) and select the Group By icon on the top line.
You see the report reformats itself into cost by application by business unit
Click the "Save As" icon and save the report to the public folders.
Further Editing
You can go back and further edit the report like this
Let's filter out all the N/A values in the BU and Application columns. You have to do this one column at a time.
Right click the BU column and select filter
In the filter dialog window, select "Do not show the following (NOT)" from the "Condition" dropdown.
Select N/A and click OK.
Do the same for the Application column
Then Save the report again
As you see, you now have a better-quality report.
To exit Query Studio, click the "Return" icon at the top right corner of the screen.
8 SCHEDULING REPORTS FOR DISTRIBUTION
OK, now that the report is saved, let's schedule it for running and distribution. You can schedule all the built-in reports in OnCommand Insight the same way.
Go to the chargeback report we just created (you should be looking at wherever you saved it…).
Select the Schedule icon on the right-hand side, where you can set the properties.
As you see on the right, you can schedule the start and finish dates.
To send this report only one time, click Disable.
Set the schedule options for weekly, daily, monthly, etc. Schedule this report to run and be sent to yourself at 3 p.m. every Tuesday until Feb 1, 2012. As you can see, you can schedule biweekly, several times a week, or several times a day, or you can set it up by month or year, and even by trigger. As you see, lots of options.
There are a lot of options for the report format. The default is HTML, but we can override that by choosing from PDF, Excel, XML, CSV, etc.
For delivery, we can email it, save it, or print the report to a specific printer. You can send the report via e-mail to users, distribution lists, etc. We can include a link to the report or attach it directly to the email. NOTE: recipients must be able to log into the OnCommand DWH to access the link.
When you are done click OK and the schedule is set
9 ENDING COMMENTS AND FEEDBACK
I hope this lab was of value to you. Your feedback is important to the quality of this lab document. Please provide feedback to Dave Collins at davec@netapp.com.
NetApp provides no representations or warranties regarding the accuracy, reliability, or serviceability of any information or recommendations provided in this publication, or with respect to any results that may be obtained by the use of the information or observance of any recommendations provided herein. The information in this document is distributed AS IS, and the use of this information or the implementation of any recommendations or techniques herein is a customer's responsibility and depends on the customer's ability to evaluate and integrate them into the customer's operational environment. This document and the information contained herein may be used solely in connection with the NetApp products discussed in this document.
© 2012 NetApp, Inc. All rights reserved. No portions of this document may be reproduced without prior written consent of NetApp, Inc. Specifications are subject to change without notice. NetApp, the NetApp logo, Go further, faster, xxx, and xxx are trademarks or registered trademarks of NetApp, Inc. in the United States and/or other countries. All other brands or products are trademarks or registered trademarks of their respective holders and should be treated as such. TR-XXX-XX
Select the Switch Port Performance microview. I see that my load appears to be balanced
(Distribution column) across the storage ports, so that's most likely not a SAN or network
configuration issue
Since this disk did cause a utilization violation, I can identify possible host candidates that may be causing the high utilization on the disk, OR I can see that the disk may have too many volumes carved from it and I may need to spread that load out across more disks
4 PERFORMANCE
OnCommand Insight provides performance information from end to end. This is different from the violations we discussed above in that it provides pure performance information for Volumes, Internal Volumes (FlexVol volumes), Storage Pools (aggregates), and Disks. Performance also covers VMs, Switches, ESX, Hyper-V, VMDKs, and Datastores. From here we can troubleshoot congestion, contention, and bottlenecks; identify heavily used storage pools, volumes, disks, and SAN ports; find possible candidates for physical-to-virtual host virtualization; and optimize your storage and tiering
From the Navigation Pane or the dropdown menu, expand the Performance menu
Here we see that OnCommand Insight collects and shows you storage performance,
switch performance, datastore performance, VM performance, and even application
performance as it relates to storage performance
41 STORAGE PERFORMANCE
From the Performance menu select Storage Performance
Sort the Top Volume IOPS column descending (if not already done)
Select the array called Sym-0000500743… in the main view (it should be near the top)
Use the scroll bars to see more performance info in all windows
Use the horizontal slide bar in the main view to see the volume response times and
IOPS, as well as the disk utilization and IOPS columns (far right). Notice there is no Internal
Volume performance information, because the EMC Symmetrix does not contain any.
We'll look at a NetApp array shortly to see Internal Volume (FlexVol) performance
Now, in the Main View, notice the column called Volume Partial R/W. This indicates
there are volumes on that array that are misaligned (we'll see more detail later)
Select the microviews at the bottom to show details of disk performance and
volume performance. Which microviews did you open? (Hint: view below.) Notice this
provides detailed throughput, IOPS, and response times at the volume and disk level
Close the Disk Performance microview
Select the column customize icon in the header of the Volume Performance
microview
Use the vertical scroll bar to view all the columns that can be added or removed from
this report
Select the Partial R/W and Storage columns and click OK. This adds columns to the
Volume Performance report showing you each volume on each array that is misaligned.
(Note: you can get a complete list of all your misaligned volumes by selecting all the
arrays in the main view above.) Additionally, you can group the volumes by storage to
make it easier to view all the misaligned volumes across your entire enterprise by array.
(See figure below)
Now select the Symmetrix-FAST array and toggle the Chart microview on. Here you
see OnCommand Insight showing EMC FAST auto-tiering. You can also see the
NetApp hybrid aggregates by selecting the corresponding NetApp array. We can also chart this performance over
time
(Figure callouts: Partial Read/Write indicates volume misalignment. Notice the FAST-T volume performance.)
OnCommand Insight provides complete end-to-end performance views through virtualized storage. Let's take a look
Select the storage array called Virtualizer from the main view. (Note: this is a V-Series
machine, but OCI provides the same visibility through other virtualizers as well)
Toggle on the Virtual Machine Performance microview
Toggle the Backend Volume Performance and Datastore Performance microviews on
Use the slide bars at the bottom of the microviews to see more of the performance
columns in each view. (Whoops… there is a red mark in the Latency column. We will
analyze this later)
I know this is a bit busy, but I wanted to demonstrate that you can have deep performance visibility from the VM, through the Datastore, to the frontend virtualizer array, and through the backend volumes. You can also drill performance down to the disks on that backend array, and you can also select the Switch Port Performance microview to visualize the performance on the SAN. So you can see very deep performance information from end to end. We will "analyze" these performance views from end to end a bit later
42 SAN PERFORMANCE (SWITCH PORT PERFORMANCE)
Switch performance is the actual performance on the SAN, measured at the switch
Select Switch Port Performance from the Performance menu. OnCommand Insight
knows whether the switches are connected to arrays or hosts, so it shows you the
performance in the context of the host or array instead of from the switch perspective
Using the dropdown at the top of the table, group the main view by "Connected Device
Type then Name"
Using the dropdown next to it, set the timeframe to "Last Week" and hit the refresh
icon to the right
Sort the Distribution column Descending (arrow pointing down)
Expand Hosts
Expand hosts ny_ora1 and exchange_ny1
If you look at the Value and Distribution columns, you can see how HBAs are balanced
on these hosts. On host ny_ora1 you see three HBAs that are balanced very well. But
looking at host exchange_ny1, you see that one of your two HBAs has over 95% of the
traffic load on it, while the other one has less than 5% of the traffic. So you can see an
imbalance of the load across your HBAs. Perhaps the multipath software is not
configured correctly, doesn't work, or is installed but not turned on. However, also note
that one HBA is 4Gb and the other is 2Gb. The admins may have purposely
configured this host's traffic to compensate for the slower HBA…
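The Distribution column is simply each port's share of the host's total traffic. A small sketch of that calculation, with hypothetical traffic numbers echoing the exchange_ny1 example (this is the idea behind the column, not OCI's implementation):

```python
def traffic_distribution(port_traffic):
    """Compute each port's share of total traffic, as a Distribution-style
    column does."""
    total = sum(port_traffic.values())
    return {port: traffic / total for port, traffic in port_traffic.items()}

# Hypothetical per-HBA traffic for a host like exchange_ny1.
exchange_ny1 = {"hba0_4gb": 9500, "hba1_2gb": 420}
dist = traffic_distribution(exchange_ny1)

# Flag a likely multipathing problem when one HBA carries almost everything.
imbalanced = max(dist.values()) > 0.90
```

The same calculation applies to storage ports and ISLs later in this section.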
Select the Port Performance Distribution and Port Performance microviews to view
this analysis over time
Select exchange_ny1 from the main menu above. View the performance and
distribution of both HBAs. If you select one or the other, the performance and
distribution charts change to show you the details of what you've selected. This function
is the same throughout the OnCommand Insight GUI
43 CANDIDATES FOR HOST VIRTUALIZATION BASED ON ACTUAL PERFORMANCE
This performance view, from the host SAN perspective, shows you which are the busiest servers and which are candidates for virtualization
Toggle off the 2 performance charts
Collapse the expanded columns using the Collapse All Groups icon on top of the
main view
Expand Hosts again. Notice your busiest servers are at the top of the list
Use the vertical slide bar to go to the bottom of the host list to see your least busy
hosts. As you see here, there are many hosts near the bottom that have hardly
any traffic. Note: if you have a virtualization project going on, you can very quickly isolate
which physical hosts don't have much traffic to the applications and conduct your due
diligence on those applications for possible relocation to the VM environment
You can also use the same information here to choose which ESX hosts are good candidates to move those applications to, based on how much traffic they are generating on the SAN
44 STORAGE ARRAY PERFORMANCE BASED ON SAN TRAFFIC
We use the same logic and methods to optimize the traffic across the storage ports of the arrays
Collapse the Hosts section and expand the Storage section
You can see the busiest arrays at the top
Expand storage array XP 1024 to see the traffic flow through the storage ports. In this
case, over 80% of the traffic is going across two of the six ports on the storage array.
Not very well balanced. You can rebalance this traffic, OR, using this information, you
can select a lesser-used storage port to provision your NEXT Tier 1 application to. This
helps you intelligently provision and optimize your environment using real traffic analysis
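Choosing where to provision the next application is just a matter of picking the least-loaded port from the observed traffic. A sketch of that selection, with made-up port names and traffic values:

```python
def least_used_port(port_traffic):
    """Pick the storage port with the lowest observed traffic -- a candidate
    target for provisioning the next Tier 1 application."""
    return min(port_traffic, key=port_traffic.get)

# Hypothetical traffic per storage port on an array like the XP 1024.
xp1024_ports = {"CL1-A": 4100, "CL1-B": 3900, "CL2-A": 700,
                "CL2-B": 650, "CL3-A": 300, "CL3-B": 350}
target = least_used_port(xp1024_ports)   # "CL3-A"
```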
45 STORAGE TIERING AND ANALYSIS
Just as you did with the hosts, you can see the storage arrays that are NOT so busy
Scroll to the bottom of the Storage Array list
There are several expensive Tier 1 Symmetrix and other arrays at the bottom of this list
that have very little traffic. These arrays may have lots of data on
them, but nobody's using them. Armed with this information, you could take a look at the
application data on these expensive Tier 1 arrays and move the applications to less
expensive Tier 2 or Tier 3 arrays, OR archive it. Then you can decommission or
repurpose these expensive arrays. (LOTS of ROI potential here)
46 SWITCH ISL TRAFFIC VISIBILITY AND OPTIMIZATION
OnCommand Insight shows the ISLs (Inter-Switch Links) only under the switches category
Collapse the Storage category in the main view and expand the Switch category
Expand Switch 78 and hcis300
As we saw with hosts and arrays, we can see exactly how well balanced the traffic is across the ISLs. Switch hcis300 is well balanced, but on Switch 78 we see that 90% of the traffic is going across one switch link and only 9% across the other. If this is a trunk, it is severely out of balance
We also see which are the busiest and least busy switches This allows us to balance out (optimize) our environment as well as weed out the least busy switches
47 VIRTUAL MACHINE AND DATA STORE PERFORMANCE TROUBLESHOOTING END-TO-END PERFORMANCE ISSUES USING "ANALYZE PERFORMANCE"
Let's put all this performance information to good use
USE CASE: I may have gotten a call from a user complaining that the application on VM 70 is running slow, or I may have received an alert from a threshold being breached. Let's troubleshoot the problem
Select Virtual Machine Performance
Select Custom from the "Timeframe" dropdown menu next to the grouping menu. Enter
the dates January 1, 2012 through now
Then hit the green refresh button next to the dropdown
Sort the VM Disk Top Latency column descending to get the longest latency at the top
Here we see that, in fact, VM-70 does not appear to have any performance issues, but we do see very high CPU, Memory, and Data Store Latency on VM-60 and VM-61
Look at column 2. The common factor between VM-70 (the user complaint) and VM-60 is DS-30
Open the Datastore Performance microview to validate the high latency time
Right click on VM-60 and select Analyze Performance
This opens an analysis of everything associated with VM-60 and DS-30
See the tabs across the top of the window Each of these tabs provides in-depth visibility into performance within each category
Selecting the Disk tab, I see that although there are a few high "top" utilization values, overall
utilization and IOPS are relatively low, so I can rule out a hot-disk issue
Select the Volumes tab and the Internal Volumes tab. I see there are some relatively high top
response times but still very low IOPS, which tells me other factors are affecting
response time and the slowness of the application on VM-70
Select Backend Volumes. We see the storage is virtualized, and we can see the
performance on the backend volumes here. I see some possibly higher IOPS, but
still no glaring performance issues
To make sure I don't have a SAN problem, I select the Switch Performance tab. It
shows an imbalance between the 2Gb HBAs on ESX1, where VM-60 and VM-70 are, and a
potential optimization or outage issue, but no gridlock
Select the Hosts tab. This tab shows me that host ESX 1 is the same host that holds VM-
60 and VM-70. VM-60 appears to be causing very high CPU and memory usage, which
is causing contention with time sharing during disk access, thus creating high disk
latency. But the disk IOPS are still very low
We can deduce that VM-60 is probably not sized right for the application that is driving it hard. This is probably what's causing the disk latency issue, so the chances of a disk issue are slim
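The reasoning chain above — high disk latency with low IOPS points away from a hot disk, and pairing that with high CPU/memory points to host contention — can be expressed as a simple triage function. The thresholds and field names here are illustrative assumptions, not OCI values:

```python
def likely_cause(vm):
    """Rough triage of a slow VM: high disk latency with low IOPS argues
    against a hot disk; add high CPU/memory and host contention is likely."""
    if vm["disk_latency_ms"] > 30 and vm["disk_iops"] < 200:
        if vm["cpu_pct"] > 85 or vm["mem_pct"] > 85:
            return "host contention (VM likely undersized)"
        return "storage path issue (check SAN/backend)"
    return "no obvious issue"

# Hypothetical readings for a VM in the state described for VM-60.
vm60 = {"disk_latency_ms": 48, "disk_iops": 90, "cpu_pct": 93, "mem_pct": 88}
```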
48 VM PERFORMANCE
VM Performance helps you troubleshoot the same scenarios. Here you can understand what's going on in the whole environment
Select Virtual Machine Performance
Sort the Top Disk Latency column in descending order so the largest latency rises to the
top. In this case, VM 61 is chewing up a lot of memory and a lot of CPU time but
using low disk IOPS. The VM appears to be causing the latency issues
Select VM-61. You can open a microview and see the VMDK performance as well
Add chart microview
You can also break it out by volume performance and datastore performance, giving
you a more holistic picture of the environment and helping you troubleshoot to
resolution
The takeaway is you can troubleshoot performance issues from many different angles and go in many different directions to quickly narrow down the problem
49 APPLICATION AND HOST PERFORMANCE
You can add your applications and hosts to any of these performance views to help you understand how your performance is affecting your applications. That is important to the business customer. You can drill down and understand where the performance issue is, with visibility from the application all the way to the disks
Scroll down to ESX1
Use the horizontal slide bars in the main and microviews to see performance info
OnCommand Insight shows you performance from the host perspective all the way back to the storage, but remember it does not have agents on the host, so it cannot show you the details of the performance on the host itself
Review questions
What is the value of Analyze Performance?
What are the areas where we can view performance metrics under Analyze Performance?
5 PLANNING TOOLS
51 TASK AND ACTION PLANNING AND VALIDATION
OnCommand Insight has two planning tools to help you plan, validate, and monitor changes in your environment. One is a change management tool, and the other is a migration tool for switches only
The change management tool (or What-If tool) helps you create tasks, and actions within those tasks, using a wizard. It helps you logically configure changes that you need to make, test and validate those changes before you make them, and monitor the progress of changes as you make them. This significantly reduces your risk when making changes, because you can pretest them before you make any actual changes in your environment
NOTE: Remember, OnCommand Insight is a READ-ONLY tool, so it does not perform any active tasks. Use it in the planning, validation, and execution monitoring of your change management
Select Planning Menu
Select Plans to access the tool
Select the task ID oadmin 01082007 – Replace HBA Clearcase1
Notice the Actions list for the task. These are generated by you to help you logically and
accurately list out the tasks
To add more actions, simply right-click in the action area and select "Add Action"
In the new action window, scroll down and select the action you want to perform. You
can add a description and other parameters, then select OK
Then you can pre-validate the actions to ensure you know the results of each action
BEFORE you actually perform the task. To do this, right-click the task and select
Validate Task
As you see below, OnCommand Insight validates each action against the current
configuration in your environment to show what has been completed correctly
(GREEN CHECKMARK), what is not completed (BLANK BOX), and what is not
completed correctly (RED X)
When you build the action list, OnCommand Insight automatically compares your
planned changes to your existing environment and anticipates any future violations that
could occur if you made these changes without correcting the planned actions, OR
violations that already exist in your environment
Once you finish creating your list of action items, you can right-click and validate the actions as many times as you want until they are completed. OnCommand Insight validates every one of these actions. It will show you whether the actions are complete, done incorrectly, or not completed at all. It gives you a preview of potential issues before you make the changes, thus lowering your risk
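The three validation states can be modeled as a comparison of each planned action against the current configuration. This is only a sketch of the idea — OCI performs the check internally against its discovered inventory, and the action names here are hypothetical:

```python
def validate_actions(planned, current_config):
    """Compare each planned action against the current configuration,
    mimicking the green-check / blank-box / red-X validation states."""
    results = {}
    for action, expected in planned.items():
        actual = current_config.get(action)
        if actual is None:
            results[action] = "not completed"          # blank box
        elif actual == expected:
            results[action] = "completed"              # green checkmark
        else:
            results[action] = "completed incorrectly"  # red X
    return results

# Hypothetical plan: one action done wrong, one not started yet.
planned = {"zone host clearcase1": "zone_A", "mask lun 42": "host clearcase1"}
current = {"zone host clearcase1": "zone_B"}   # zoned, but to the wrong zone
```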
52 SWITCH MIGRATION TOOL
The migration tool provides you with instantaneous visibility into all of the environment and business entities that will be affected by a migration to new or updated switches. Say you want to just update the firmware on a switch. What if it goes down in the middle of the upgrade? What does it affect in your environment? Knowing this ahead of time can reduce your risk by giving you the complete picture of who and what will be affected by the interruption
The Migration tool allows you to tell OnCommand Insight which switches you want to upgrade or replace. Because OnCommand Insight knows all the hosts, storage arrays, volumes, business units, and applications that are affected by this change, it can provide you with the current violations as well as future violations that will occur when the switches are pulled out. This enables you to validate the total impact of the changes you want to make BEFORE you make them, so you can reduce your risk by fixing issues before they occur
NOTE Remember OnCommand Insight is a READ ONLY tool so it does not perform any migration tasks Use it in the planning and execution monitoring of your migration
Under the Planning menu, select Migrations. This shows you the migration tasks already
created and the existing impact of proposed changes on your business entities
To add a new task, right-click in the task area and select Add Task
Complete the task details above and click Next to select the switch(es) to migrate
Select the switches to be updated or replaced and click Finish
Select the new task in the main screen and use the microviews to provide you with
affected paths impact and quality assurance views
Using this information, you can migrate switches faster, because it cuts the due diligence time and lowers your risk: you know the impacts before you take any actions
6 DATA WAREHOUSE
61 INTRODUCTION AND OVERVIEW
Let's introduce the data warehouse. We'll talk about the datamarts and navigation, then go into the reports, and finish by showing you how to create ad hoc reports using Query Studio
The data warehouse is made up of several datamarts. Datamarts are sets of data that relate to each other
Open a browser and go to http://localhost:8080/reporting
Log on using admin/admin123
If you receive this page, uncheck "show this page…" and select My Home
Data warehouse (DWH) Home Page Public Folders
The data warehouse has several built-in datamarts. Above you see the three primary datamarts: the Chargeback, Inventory, and Storage Efficiency datamarts. Additionally, we have two folders that contain other datamarts for Capacity and Performance
Select the Capacity 6.3 folder
As you can see, there are other capacity-related datamarts here, including the Internal Volume, Volume, Storage and Storage Pool, and VM Capacity datamarts. Datamarts provide you with easy-to-use data elements related to those specific categories, making it easier for you to use the existing reports and, more importantly, helping you create your own custom reports using the drag-and-drop technology we'll show later in this lab
Select the Storage Capacity Datamart
There are four folders located within EVERY datamart. Most built-in reports are in the Reports folder. Any custom reports you create MUST be saved in the Customer Report or Customer Dashboard folders in order to preserve them during upgrades
Select Dashboards (notice the BREADCRUMBS to help you navigate)
Which dashboards are located in the folder?
62 PLAN - CAPACITY FORECAST DASHBOARD
The data warehouse has over 200 built-in dashboards and reports. Let's take a look at a few
The capacity forecast dashboard provides a history of how storage has been used, as well as trends and forecasts into the future, by data center and by tier
Select the Capacity Dashboard. This may take a bit of time to render, so be patient
The capacity forecast dashboard provides you with trending and forecasting of your
capacity across your entire environment. NOTE: your data may vary from the picture
depending on the demo database you are using and the date (because it's a trending chart)
While we are at it, let's also stage the Tiering Dashboard in a new window by holding the
Shift key and selecting the Tiering Dashboard, so we can discuss it as well in a few
minutes
When it first opens, you see in the upper left the Capacity Consumption Forecast report by datacenter and tier. The initial view shows how much storage is left in each datacenter, by tier, before it reaches 80% of capacity (adjustable by the user). The graph on the right depicts the usage trending and forecasting over time. The "Reset Selection" button resets the graphic to show storage trending across the entire enterprise
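The forecast behind this dashboard is essentially a linear trend projected forward to a capacity threshold. Here is a minimal sketch using a least-squares fit over daily used-capacity samples; the numbers are illustrative, not from the demo database:

```python
def days_until_threshold(samples, capacity_tb, threshold=0.80):
    """Fit a straight line to daily used-capacity samples and project when
    usage crosses the threshold fraction of total capacity."""
    n = len(samples)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(samples) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, samples)) / \
            sum((x - mean_x) ** 2 for x in xs)
    if slope <= 0:
        return None                  # flat or shrinking: never crosses
    target = threshold * capacity_tb
    return max(0.0, (target - samples[-1]) / slope)

# A hypothetical 100 TB tier growing ~0.5 TB/day from 60 TB used.
used = [60 + 0.5 * d for d in range(30)]
remaining = days_until_threshold(used, 100)   # roughly 11 more days to 80 TB
```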
Select the Tokyo/Gold-Fast block on the matrix. Notice the graph at the right changes to
reflect the storage consumption trending and forecasting for that tier at that datacenter
Reset the Storage Capacity Trend chart by clicking Reset Selection next to the
matrix chart. The chart on the right will show the trending and forecasting for the entire
enterprise
Scroll down the dashboard to view the list of reports on the right side Each of the
dashboards has a list of related reports on the lower right hand side You can select
from any number of different reports to provide the detailed information that you need
The dashboard also contains some dial graphics showing you storage consumption and capacity in your enterprise and each datacenter
Continuing down the left side of the dashboard, these charts show you business-level
storage consumption by business entities. Here we can drill down to see usage by
Tenant, Line of Business, Business Unit, and Project
Right-click in this graphic and you can drill down to view storage usage by line of
business, drill again to business unit, and again by project
As you can see, you very quickly get really detailed information on consumption by your business entities, from Tenant to LOB, Business Unit, Project, and Application
63 TIER DASHBOARD
Let's take a look at the Tier dashboard that we opened a few minutes ago by selecting it from the tabs at the bottom of your Windows screen
Note Your data may vary depending on the database used for this demo
This dashboard gives us a different perspective on how storage is growing and how it is being used. As you see, it looks like the Gold tier has remained relatively stable over the past few months, while Gold-Fast storage, which is more expensive, has grown considerably over the past couple of months. This tells you how your tiering initiatives are progressing. Bronze, which has hardly grown at all, could be an indication that we're spending too much money on storage. You might want to review your storage usage using OnCommand Insight to see how the storage is being consumed and by whom
Scroll down. Let's look a little closer. OnCommand Insight shows storage usage by
business units, applications, and tier. This enables you to understand how storage is
being used. You can also view it by data center, tier, and business entity
As we did in the last report, you can right-click and drill down to look at consumption by
tenant, line of business, business unit, project, and application. You can understand how
your data is being consumed at multiple levels and from multiple aspects
Select the "Return" icon at the top right of the Tier Dashboard to return to the folder
There is a new Storage Tier report located in the Storage and Storage Pool datamart. Let's take a quick look at it
Use the breadcrumbs to navigate back to the Capacity 6.3 folder
Then select the Storage and Storage Pool Capacity datamart and the Reports folder
Next, select the Storage Capacity By Tier report to view the report below. This report shows your capacity by tier and how it trends over time. It also provides detailed and summary sections at the bottom, showing each array's tiers, how much capacity is used, and the percentage (lots of information in a single report)
64 ACCOUNTABILITY AND COST AWARENESS
The standard data warehouse chargeback reports are more about accountability than about chargeback. We'll show you this now. We'll also show you how to create your own powerful "custom" chargeback/showback reports using Business Insight Advanced later in this lab
Select Public Folders in the breadcrumbs at the top left of the data warehouse window
Select the Chargeback datamart
In the Chargeback datamart, select the Reports folder to access various reports that
show capacity and accountability
Select Capacity Accountability by Business Entity and Service Level Detail. Here
you have the option to customize this report to your needs by selecting service levels,
resource types, applications, and host and storage names. You also have the option of
selecting the business entity by using the dropdown to select any or all of the business
entities and projects
Select all in each category to give you a good representation of the in-depth reporting
Then click finish
The report provides a very detailed view of capacity utilization by business entity and application, including the host it's running on, the storage array, the volume, and the actual provisioned and used storage. The report is grouped by business unit as well as application, which gives you a good representation of who's using what storage
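A chargeback roll-up of this kind multiplies each row's used capacity by a per-tier rate and sums the costs per business entity. A sketch of that accounting, with hypothetical rates, entity names, and rows (not values from the demo database):

```python
# Hypothetical monthly rates per GB for each service tier.
TIER_RATE_PER_GB = {"Tier 1": 0.85, "Tier 2": 0.40, "Tier 3": 0.15}

def monthly_charge(rows):
    """Roll used capacity up to a monthly charge per business entity."""
    totals = {}
    for r in rows:
        cost = r["used_gb"] * TIER_RATE_PER_GB[r["tier"]]
        totals[r["business_entity"]] = totals.get(r["business_entity"], 0.0) + cost
    return totals

rows = [
    {"business_entity": "NY.Finance", "tier": "Tier 1", "used_gb": 2000},
    {"business_entity": "NY.Finance", "tier": "Tier 3", "used_gb": 5000},
    {"business_entity": "London.Trading", "tier": "Tier 1", "used_gb": 800},
]
```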
Note the scroll bar for scrolling on page 1; you can also use the Page Up/Page Down links at the bottom to go to page 2, and so on
Select the Return Icon in the upper right to return to the folder of reports
65 UNCHARGED STORAGE
You can also generate reports that help you understand what storage is NOT being accounted for
Select "Capacity Accountability by Uncharged Capacity per Internal Volume". This
provides you with a complete listing, by array and volume, of how much storage is not
being charged or accounted for
You get FULL accountability of which storage is being accounted for and which storage is NOT, across the entire enterprise, regardless of storage vendor
66 IOPS VS CAPACITY REPORTING IN THE DATA WAREHOUSE
Let's look at performance versus capacity, and orphaned storage by last access. This adds another dimension to how your storage is being used
Open the Performance Datamart (hint use breadcrumbs to select Public Folders and
then select the Performance Datamart)
Select the Internal Volume Daily Performance folder. This provides a really good
pictorial view of how your storage is being used
Select Reports and then select Allocated used internal volume Count by IOPS Ranges.
This provides a capacity-versus-IOPS report, which is very interesting
Select the Last Year time period
Select all storage models and tiers and click Finish
Selecting all arrays and all tiers gives you a full view of how your storage is being used
(or not being used…)
Looking at the results, remember this is storage accessed over the past year. The resulting report shows you all the storage that has (or has not) been accessed over the past year
As you see from the first bar, there are over 7,300 volumes that have not been accessed in the past year. If we look at it in terms of size, over 3.4 PB has had zero access in the past year. Note: this is actual customer data, but the names have been sanitized
You can see how impactful this is. There is over 3.4 PB of storage that has had zero use for a year. This information enables you to start making business decisions about that storage and to better understand how it's being used, so you can reclaim and repurpose some of it. (Talk about ROI!)
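The report's IOPS ranges are simple buckets over each volume's observed max IOPS, with capacity totaled per bucket — which is how the zero-access bar and its reclaimable petabytes fall out. A sketch with hypothetical volumes and bucket edges:

```python
def bucket_by_iops(volumes, edges=(0, 1, 100, 1000)):
    """Group volumes into IOPS ranges and total the capacity in each bucket."""
    buckets = {edge: {"count": 0, "tb": 0.0} for edge in edges}
    for v in volumes:
        # Assign the highest edge that does not exceed the volume's max IOPS.
        edge = max(e for e in edges if e <= v["max_iops"])
        buckets[edge]["count"] += 1
        buckets[edge]["tb"] += v["capacity_tb"]
    return buckets

# Three never-accessed volumes plus one moderately busy one (made-up data).
vols = [{"max_iops": 0, "capacity_tb": 0.5}] * 3 + [{"max_iops": 250, "capacity_tb": 1.2}]
b = bucket_by_iops(vols)
# Bucket 0 holds the never-accessed volumes: 3 volumes, 1.5 TB reclaimable.
```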
67 DIGGING INTO THE DETAILS
These charts are really nice, but you need the details to effectively work on identification and recovery. OK, let's go look at the underlying details
Go back to the Volume Daily Performance 6.3 folder and drill down to Reports (hint: it's
in the Performance datamart)
Select the Array Performance report. This gives you a complete breakdown of the
performance for all storage, from the arrays all the way down to the volumes
Select one year and set the IOPS parameter you want to filter on (I usually start at the
default)
This report starts with the Orphan Summary
Select Page down to view the Storage array summary
As you see, this is pretty high level. It shows the total amount of raw and allocated
capacity in each storage device versus the total IOPS and the max IOPS actually used over
the past year. This tells a very compelling story, but it's still high level
Page down a few pages to reach the bottom of this section. You see a glossary of
terms explaining the column headings
Now continue to page down to the Host tables. These show you the hostname, the
raw and allocated capacity by host, and the IOPS accessed over the past year. This is
more detail than the Storage tables above
Page down past the host tables and you reach the orphaned volumes perspective.
Here is a great deal of detail that you can use. These are all the volumes that have not
been accessed in a full year. It shows you the array name, volume capacities, and
hostname, as well as the applications and tiers, for everything that has not been accessed in the last
year
Page down to the "Volume by IOPS" tables (may be several pages down). These show
you the storage array, volume, capacity, host, application, tier, and max and total
IOPS. So we can say it's a pretty well-rounded report that shows you actual usage (or
lack thereof), so you can go reclaim the storage that is not used
68 VM CAPACITY REPORTING
There are several different reports in the VM capacity Datamart
Navigate to the VM Capacity 6.3 datamart
As you see we have several reports built-in here already
Select VM Capacity 6.3 and then navigate into the Reports folder
Select VM Capacity Summary
Select all, so we see the VM capacity across the entire enterprise (spanning multiple
vCenters)
The results show all the VMs, their capacity, the datastore, the actual capacity, the VM names, the provisioned storage, and the commit ratio of each VM across your entire environment. NOTE: I paged down to the bottom so you can see the total storage and commitment across your whole enterprise, plus a glossary of terms
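The commit ratio in this report is, in essence, total provisioned VM capacity divided by datastore capacity. A one-function sketch with made-up numbers (the datastore name and sizes are illustrative):

```python
def commit_ratio(vms, datastore_capacity_gb):
    """Thin-provisioned VMs can promise more space than the datastore has;
    the commit ratio shows how far over (or under) committed it is."""
    provisioned = sum(vm["provisioned_gb"] for vm in vms)
    return provisioned / datastore_capacity_gb

# Hypothetical VMs on a 1000 GB datastore like DS-30.
ds30_vms = [{"provisioned_gb": 400}, {"provisioned_gb": 600}, {"provisioned_gb": 500}]
ratio = commit_ratio(ds30_vms, 1000)   # 1.5 -> 150% committed
```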
Select the "return" button in the upper right corner of the report (it looks like a left-turn arrow).
Next, select the Inactive VMs report to show VMs that have not been accessed in a defined period of time (default 60 days).
Set this time threshold and click Finish.
This is an excellent report showing which VMs are powered off, how long they have been powered off, and how much capacity each of them is holding that nobody else can use. It gives you all the details, including the datacenter, VM, OS, ESX host, cluster, and VMDK, and how long it's been powered off. Armed with this information, you can go recover these VMs and reclaim the storage.
7 CREATE AD-HOC REPORT
Let's show you how easy it is to create custom reports in the data warehouse.
7.1 HOW TO CREATE A CUSTOM SHOWBACK/CHARGEBACK REPORT USING BUSINESS INSIGHT ADVANCED
Below is a great example of the custom Chargeback or Showback report that you will create. It shows usage by Business Entity and Application, including the variable cost of each VM based on configuration, fixed overhead, and storage usage.
STEPS TO CREATE THIS REPORT
Watch a video on how to create this report. Note: You need a user name and password for this community. To obtain them, click the Become a Member link.
The OnCommand Insight Reporting Portal is accessed through http://&lt;reporting-server&gt;:8080/reporting
Enter your user name and password credentials.
From the Welcome page, select My home.
From the Launch menu (at the top right corner of the OnCommand Insight Reporting portal), select Business Insight Advanced.
From the list of packages that appears, click the Capacity &lt;version&gt; folder, and then click VM Capacity &lt;version&gt;.
Create a new report by selecting New from the dropdown in the upper left corner, or Create New if you are on the Business Insight Advanced landing page.
From the pre-defined report layouts in the New pop-up, choose List and click OK.
In the lower right pane, select the Source tab and expand the Advanced Data Mart in the VM Capacity package.
From the Advanced Data Mart, expand Business Entity Hierarchy and Business Entity, then drag Tenant onto the report work area.
Collapse the Advanced Data Mart and expand the Simple Data Mart.
From the Simple Data Mart, drag Application onto the report work area to the right of the Tenant column. (TIP: Make sure you drop it on the blinking gray BAR to the right of the previous column, or you will get an error.)
Now we are going to drag multiple columns to the palette at once, to save time building the report.
We will be reporting on the total number of processors (cores) and the memory configured for each VM, so let's grab the following elements from the VM Dimension under the Advanced Data Mart.
From the Advanced Data Mart, expand VM Dimension.
Select the next columns IN THE FOLLOWING ORDER:
From Advanced Data Mart > VM Dimension, hold the Control key and select the following columns (in order):
o VM Name
o Processors
o Memory
Click and drag VM Name onto the report work area to the right of the Application column. NOTE: All the columns should follow in the order you selected them, similar to the screenshot below (your data will differ, but the columns will be the same).
Now let's bring capacity information onto the report.
From the Simple Data Mart, hold the Control key and select the following columns (in order):
o Tier
o Tier Cost
o Provisioned Capacity (GB)
Click and drag the Tier column onto the report work area to the right of the Application column. NOTE: All the columns should follow in the order you selected them, similar to the screenshot below (your data will differ, but the columns will be the same).
To create a summary of cost per GB, hold the Control key and select Tier Cost and Provisioned Capacity (GB).
Then right-click the Provisioned Capacity column, select Calculate, and select the multiplication calculation.
Business Insight Advanced creates a new column for you, completes the calculation, and puts it in the report.
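The column the tool just computed is a simple per-row product. A minimal sketch of that rule, with an illustrative $2.00/GB tier cost (not taken from the lab data):

```python
# A minimal sketch of the new column:
# Storage Cost = Tier Cost (per GB) * Provisioned Capacity (GB).
# The $2.00/GB tier cost and 150 GB figure are illustrative only.
def storage_cost(tier_cost_per_gb, provisioned_gb):
    return tier_cost_per_gb * provisioned_gb

print(storage_cost(2.00, 150))  # 300.0
```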
Next, let's format and re-title the column.
Right-click the new column header and select Show Properties.
In the lower right corner, scroll down to the bottom of the Properties box and select the ellipsis on the Data Item Name box. Change the name to Storage Cost and click OK.
Note that the column heading is now Storage Cost.
Now select one of the numeric values in that column and select the Data Format ellipsis from the Properties box in the lower right corner.
From the Data Format dialog box, select Currency from the Format Type dropdown.
As you can see from the Properties dialog box, there are lots of options for formatting the currency numbers in this column. The default is USD, so let's just click OK to accept the default. You will see the column reformat to USD.
Here is our current report. Let's filter out storage that is NOT being charged.
Select any BLANK cell in the Tier Cost column and click the Filter icon in the top toolbar.
Select Exclude Null.
Notice that all the rows that had NO cost associated with their tiers are removed, leaving you with only the storage that has charges associated with it. (TIP: In another report you can reverse the logic and show only the storage that is NOT being charged.)
You can also format the Tier Cost column as USD currency if you want.
OK, that was easy, but it's not complete. Let's add other cost factors into your chargeback report: the cost of VM service levels by configuration, and the fixed overhead costs used by each application.
ADDING VARIABLE COSTS PER VM TO YOUR CHARGEBACK REPORT
Let's say the customer wants to charge per VM based on the number of CPUs and the amount of memory it's configured with. To do that, we first need to create a VM Service Level based on the number of CPUs and the memory configured for each VM, and then allocate a cost per service level.
To create a VM Service Level, we are going to drop in a small conditional expression that builds the service level for each VM. This is an easy example of the flexibility of Business Insight Advanced in creating reports. (DON'T panic: you can skip the conditional expression and just put a fixed cost on each VM if you want; see the Overhead example later on. But humor me here in this lab.)
Select the Tier column to mark where we want to insert the new columns.
Select the Toolbox tab in the lower right corner and double-click the Query Calculation icon.
In the Create Calculation dialog box, name the column VM Service Level, select Other Expression, and click OK.
In the Data Item Expression dialog box, copy and paste the following VM Service Level conditional expression into the Expression Definition box and select OK. (Note: If you are remoted into the OnCommand Insight server, you may have to create a text document on the OnCommand server desktop to paste this into before pasting it into the Expression box.)
Below is an example of the conditional expression that gives you the if-else logic for VM Service Level:
IF ([Processors] = 2 AND [Memory] &lt; 2049)
THEN ('Bronze')
ELSE (IF ([Processors] = 2 AND [Memory] &lt; 4097)
THEN ('Bronze_Platinum')
ELSE IF ([Processors] = 4 AND [Memory] &lt; 8193)
THEN ('Silver')
ELSE IF ([Processors] = 4 AND [Memory] &gt; 8193)
THEN ('Silver_Platinum')
ELSE IF ([Processors] = 6 AND [Memory] &gt; 8191)
THEN ('Gold')
ELSE IF ([Processors] = 8 AND [Memory] &gt; 16383)
THEN ('Gold_Platinum')
ELSE ('tbd'))
Business Insight Advanced will validate the conditional expression (nice to know if you got it right), create the column called VM Service Level, and populate it based on the query. (If you get an error, your conditional expression probably has a syntax or other problem.)
You will see a new column added called VM Service Level, with the service level for each VM based on the number of CPUs and the amount of memory each has. (At this point there may be duplicates in the list, but not to worry; we are not finished formatting or grouping the report.)
Next, let's add a column that calculates the cost per VM based on the service levels we just established.
Select the Toolbox tab in the lower right corner and double-click the Query Calculation icon.
In the Create Calculation dialog box, name the column Cost Per VM, select Other Expression, and click OK.
In the Data Item Expression dialog box, paste the conditional expression for Cost Per VM (below) into the Expression Definition box and select OK.
Example of the conditional expression for Cost Per VM:
IF ([VM Service Level] = 'Bronze') THEN (10) ELSE (IF ([VM Service Level] = 'Bronze_Platinum') THEN (15) ELSE IF ([VM Service Level] = 'Silver') THEN (20) ELSE IF ([VM Service Level] = 'Silver_Platinum') THEN (25) ELSE IF ([VM Service Level] = 'Gold') THEN (40) ELSE IF ([VM Service Level] = 'Gold_Platinum') THEN (55) ELSE (30))
You will see a new column added called Cost Per VM, with a variable cost for each VM based on its service level.
Next, format the data in the Cost Per VM column as USD currency, as you did above.
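If you want to sanity-check the two conditional expressions, the same logic can be mirrored in a short script. The thresholds and prices below are copied from the lab expressions; memory is assumed to be in MB.

```python
# Sanity-check sketch of the two conditional expressions above: classify a VM
# into a service level from its CPU/memory configuration, then look up the
# cost. Thresholds and prices match the lab expressions; memory assumed in MB.
def vm_service_level(processors, memory_mb):
    if processors == 2 and memory_mb < 2049:
        return "Bronze"
    elif processors == 2 and memory_mb < 4097:
        return "Bronze_Platinum"
    elif processors == 4 and memory_mb < 8193:
        return "Silver"
    elif processors == 4 and memory_mb > 8193:
        return "Silver_Platinum"
    elif processors == 6 and memory_mb > 8191:
        return "Gold"
    elif processors == 8 and memory_mb > 16383:
        return "Gold_Platinum"
    return "tbd"

COST_PER_VM = {"Bronze": 10, "Bronze_Platinum": 15, "Silver": 20,
               "Silver_Platinum": 25, "Gold": 40, "Gold_Platinum": 55}

def cost_per_vm(level):
    return COST_PER_VM.get(level, 30)  # the ELSE (30) branch of the expression

print(vm_service_level(4, 4096), cost_per_vm(vm_service_level(4, 4096)))  # Silver 20
```

Note that, exactly like the Cognos expression, a 4-CPU VM with exactly 8193 MB falls through to "tbd"; the expression has the same gap.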
ADDING FIXED OVERHEAD COSTS TO YOUR CHARGEBACK REPORT
Let's say the customer has determined that the total cost of overhead (including items like heat/AC, floor space, power, rent, operations personnel, helpdesk, and so on) comes to $24 per VM. Let's create a column called Cost of Overhead and apply this fixed cost. (Note: You can do this for any fixed cost, without resorting to SQL.)
Select the Toolbox tab in the lower right corner and double-click the Query Calculation icon, as above.
In the Create Calculation dialog box, name the column Cost of Overhead, select Other Expression, and click OK.
In the Data Item Expression dialog box, enter a cost of 24 in the Expression Definition box and select OK.
You will see a new column added called Cost of Overhead, with 24 for each VM. (Note: At this point there may be duplicates in the list, but not to worry; we are not finished formatting or grouping the report.)
Next, format the data in the Cost of Overhead column as USD currency, as you did above. Then drag the column header and drop it to the right of the Storage Cost column, as shown below.
Subtotaling, naming, and saving the report
Now that we have a cost per VM, an overhead cost, and a storage usage cost by Tenant, Application, and VM, let's sum the total costs and finish formatting the report by Tenant and Application.
Hold the Control key down and select a numeric cell in each of the Cost Per VM, Storage Cost, and Cost of Overhead columns. Right-click one of the numeric cells, select Calculate, and select the add function for the three columns.
This creates a new column called "Cost Per VM + Storage Cost + Cost of Overhead" and calculates each row.
Now format the column as USD currency and re-title it "Total Cost of Services".
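The new column is a per-row sum of the three cost columns. A sketch of the rule, using the $24 fixed overhead from this lab and otherwise illustrative inputs:

```python
# Sketch of the "Total Cost of Services" column: the per-row sum of the
# three cost columns. The $24 overhead is the fixed figure from this lab;
# the other inputs are illustrative.
def total_cost_of_services(cost_per_vm, storage_cost, cost_of_overhead=24):
    return cost_per_vm + storage_cost + cost_of_overhead

print(total_cost_of_services(20, 300.0))  # 344.0
```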
Name the report "Total Storage, VM, and Overhead Cost by Tenant and Application Chargeback (Showback)" by double-clicking the title area.
Now save it to the Customer Report folder using the same name.
FORMATTING AND GROUPING THE REPORT BY APPLICATION AND TENANT
We are not done yet. Now we need to format the report by grouping, subtotaling, and totaling by Tenant and Application.
Hold the Control key down and select the Cost Per VM, Provisioned Capacity, Storage Cost, Cost of Overhead, and Total Cost of Services columns.
Select the Total icon from the Summary dropdown.
If you page down to the bottom of the report, you will see the total columns. We'll clean up the summary rows in a minute.
Let's group the report by Tenant and Application.
Hold the Control key down and select the Tenant and Application columns.
Select the Grouping icon from the top toolbar.
CLEANING UP THE REPORT AND RUNNING IT
To clean up the report, right-click and delete the summary ROWS (not columns).
Then go to the bottom of the report, hold the Control key and select both summary rows, then right-click and delete them. (Leave the TOTAL rows.)
Save the report.
Now let's run the report to see how it looks.
Select the Run icon from the toolbar and run the report as HTML. (Note the other formats you can run it in, if you want.)
The report will look like this in its final format. I've paged down in the report below to show you the subtotals; you can page to the bottom to see the totals by company and the total of all resources charged.
These reports are extremely flexible. Notice the drill-down link in the Tenant column (pictured above in the red circle). If you click the link, you will drill down from Tenant to Line of Business, then to Business Unit, and so on. If you right-click the link, you can drill up as well.
You can now schedule this report to run and be distributed in various formats, like any other OnCommand Insight Data Warehouse report.
Remember, now that you have created this report, every run will provide the latest usage information. You can also automate it by scheduling it to run and email itself to recipients. Lots of flexibility.
7.2 OTHER OPTIONS FOR AD-HOC REPORTS USING QUERY STUDIO
You can also create simple ad-hoc reports using Query Studio. A very simple example is shown here.
Log on to the data warehouse using Admin/admin123 (you must be logged on as Admin to use Query Studio).
From Public Folders, select the Chargeback Datamart.
Select the Launch menu in the upper right corner of the view and select Query Studio.
The Datamart is split into a "simple Datamart" and an "advanced Datamart". The simple DM contains the elements most users need for reports; the advanced DM contains all the facts and dimensions for all elements. At this point, we'll create this report using the simple DM to show you how easy it is.
Expand the Simple DM and do the following:
Click and drag Business Unit to the palette.
Click and drag the Application element to the palette. You see the applications line up with their proper business units.
Click and drag Tier over to the palette to organize the storage usage by tier.
Click and drag "Provisioned Raw by GB". (You can select megabytes or terabytes as well as gigabytes; I've selected GB because this is from a volume and application perspective.)
To calculate cost, we need to add "Tier Cost" to the report.
Click and drag the "Tier Cost" element over and place it between the Provisioned Raw and Tier columns.
To filter out any storage without an associated tier cost, right-click the heading of the Tier Cost column and select Filter (see below for reference):
o Select "Show only the following" (default)
o Select "Missing values" to expand it
o Select "Leave out missing values"
o Select OK
See the results below.
Now let's calculate the total cost of usage by GB per application.
Hold the Control key and highlight the "Provisioned Capacity" and Tier Cost columns until they show yellow.
Select the green Calculation icon among the edit icons at the top, or right-click the columns and select "Calculate".
In the calculation window, select multiplication, title the new column "Cost for Storage", and click Insert. This creates a new column and completes the calculation.
To format the column, right-click the new column and select Format Data.
Select Currency, the number of decimal places (usually 0), and the 1000s separator, then click OK. See how the column is formatted now.
Double-click the title on the report and re-title it "Chargeback by Application and BU".
Now, you don't really need the Tier Cost column, so you can delete it by right-clicking the column and selecting Delete.
This is a good raw report, but let's make it more useful.
To group storage cost by Business Unit and Application:
Select the Business Unit column (it turns yellow) and select the Group By icon on the top line.
You see the report reformat itself into cost by application by business unit.
Click the "Save As" icon and save the report to the public folders.
Further Editing
You can go back and further edit the report like this:
Let's filter out all the N/A values in the BU and Application columns. You have to do this one column at a time.
Right-click the BU column and select Filter.
In the filter dialog window, select "Do not show the following (NOT)" from the "Condition" dropdown.
Select N/A and click OK.
Do the same for the Application column.
Then save the report again.
As you can see, you now have a better quality report.
To exit Query Studio, click the "Return" icon at the top right corner of the screen.
8 SCHEDULING REPORTS FOR DISTRIBUTION
OK, now that the report is saved, let's schedule it for running and distribution. You can schedule all the built-in reports in OnCommand Insight.
Go to the chargeback report we just created (you should be looking at where you saved it).
Select the Schedule icon on the right-hand side, where you can set the properties.
As you can see on the right, you can schedule start and finish dates.
You can also send this report just one time by clicking Disable.
Set the schedule options for weekly, daily, monthly, and so on. Schedule this report to run and send it to yourself at 3 p.m. every Tuesday until Feb 1, 2012. As you can see, you can schedule biweekly, several times a week, or several times a day, or you can set it up by month, by year, and even by trigger. As you see, lots of options.
There are a lot of options for the report format. The default is HTML, but we can override that by clicking and choosing from PDF, Excel, XML, CSV, and so on.
For delivery, we can email it, save it, or print the report to a specific printer. You can send the report via email to users, distribution lists, and so on. We can include a link to the report in the email or attach the report directly to the email as well. NOTE: Recipients must be able to log into the OnCommand DWH to access the link.
When you are done, click OK and the schedule is set.
9 ENDING COMMENTS AND FEEDBACK
I hope this lab was of value to you. Your feedback is important to the quality of this lab document. Please provide feedback to Dave Collins at davec@netapp.com.
NetApp provides no representations or warranties regarding the accuracy, reliability, or serviceability of any information or recommendations provided in this publication, or with respect to any results that may be obtained by the use of the information or observance of any recommendations provided herein. The information in this document is distributed AS IS, and the use of this information or the implementation of any recommendations or techniques herein is a customer's responsibility and depends on the customer's ability to evaluate and integrate them into the customer's operational environment. This document and the information contained herein may be used solely in connection with the NetApp products discussed in this document.
© 2012 NetApp, Inc. All rights reserved. No portions of this document may be reproduced without prior written consent of NetApp, Inc. Specifications are subject to change without notice. NetApp, the NetApp logo, Go further, faster, xxx, and xxx are trademarks or registered trademarks of NetApp, Inc. in the United States and/or other countries. All other brands or products are trademarks or registered trademarks of their respective holders and should be treated as such. TR-XXX-XX
Use the horizontal slide bar in the main view to see the volume response time and IOPS columns, as well as the disk utilization and IOPS columns (far right). Notice there is no internal volume performance information, because the EMC Symmetrix does not contain any. We'll look at a NetApp array shortly to see internal volume (FlexVol) performance.
Now, in the main view, notice the column called Volume Partial R/W. This indicates there are volumes on that array that are misaligned (we'll see more detail later).
Select the microviews at the bottom to show the details of disk performance and volume performance. Which microviews did you open? (Hint: view below.) Notice this provides detailed throughput, IOPS, and response times at the volume and disk level.
Close the Disk Performance microview.
Select the column customize icon in the header of the Volume Performance microview.
Use the vertical scroll bar to view all the columns that can be added to or removed from this report.
Select the Partial R/W and Storage columns and click OK. This adds columns to the Volume Performance report showing each volume on each array that is misaligned. (Note: You can get a complete list of all your misaligned volumes by selecting all the arrays in the main view above.) Additionally, you can group the volumes by storage to make it easier to view all the misaligned volumes across your entire enterprise by array. (See the figure below.)
Now select the Symmetrix-FAST array and toggle the Chart microview on. Here you see OnCommand Insight showing EMC FAST auto-tiering. You can see the NetApp Hybrid Aggregates the same way. We can also chart this performance over time.
(Figure annotations: Partial Read/Write indicates volume misalignment; notice the FAST-T volume performance.)
OnCommand Insight provides complete end-to-end performance views through virtualized storage. Let's take a look.
Select the storage array called Virtualizer from the main view. (Note: This is a V-Series machine, but OCI provides the same visibility through other virtualizers as well.)
Toggle on the Virtual Machine Performance microview.
Toggle on the Backend Volume Performance and Datastore Performance microviews.
Use the slide bars at the bottom of the microviews to see more of the performance columns in each view. (Whoops: there is a red mark in the Latency column. We will analyze this later.)
I know this is a bit busy, but I wanted to demonstrate that you can have deep performance visibility from the VM, through the datastore, to the frontend virtualizer array, and on to the backend volumes. You can also drill performance down to the disks on that backend array, and you can select the Switch Port Performance microview to visualize the performance on the SAN. So you can see very deep performance information from end to end. We will "analyze" these performance views from end to end a bit later.
4.2 SAN PERFORMANCE (SWITCH PORT PERFORMANCE)
Switch performance is the actual performance on the SAN, measured at the switch.
Select Switch Port Performance from the Performance menu. OnCommand Insight knows whether the switches are connected to arrays or hosts, so it shows you the performance in the context of the host or array rather than from the switch perspective.
Using the dropdown at the top of the table, group the main view by "Connected Device Type then Name".
Using the dropdown next to it, set the timeframe to "Last Week" and hit the refresh icon to the right.
Sort the Distribution column descending (arrow pointing down).
Expand Hosts.
Expand hosts ny_ora1 and exchange_ny1.
If you look at the Value and Distribution columns, you can see how the HBAs are balanced on these hosts. On host ny_ora1, you see three HBAs that are balanced very well. But looking at host exchange_ny1, you see that one of its two HBAs has over 95% of the traffic load on it, while the other has less than 5%. So you can see an imbalance of the load across your HBAs. Perhaps the multipath software is not configured correctly, doesn't work, or is installed but not turned on. However, also note that one HBA is 4Gb and the other is 2Gb; the admins may have purposely configured this host's traffic to compensate for the slower HBA.
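The Value/Distribution view above can be sketched as a simple share-of-total calculation with an imbalance flag. The 95/5 split mirrors exchange_ny1; the HBA names, traffic figures, and 80% threshold are illustrative assumptions.

```python
# Illustrative sketch of the Value/Distribution columns: compute each HBA's
# share of total host traffic and flag imbalance. Names, traffic totals, and
# the 80% threshold are assumptions, not OnCommand Insight internals.
def distribution(traffic_by_hba):
    """Return each HBA's percentage share of the host's total traffic."""
    total = sum(traffic_by_hba.values())
    return {hba: value / total * 100 for hba, value in traffic_by_hba.items()}

def is_imbalanced(traffic_by_hba, threshold_pct=80):
    """Flag the host if any single HBA carries more than threshold_pct of traffic."""
    return any(share > threshold_pct for share in distribution(traffic_by_hba).values())

exchange_ny1 = {"hba0_4gb": 9500, "hba1_2gb": 500}  # traffic totals, made up
print(is_imbalanced(exchange_ny1))  # True
```

As the lab notes, a flagged host still needs human judgment: a deliberate 4Gb/2Gb configuration can legitimately skew the split.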
Select the Port Performance Distribution and Port Performance microviews to view this analysis over time.
Select exchange_ny1 from the main view above. View the performance and distribution of both HBAs. If you select one or the other, the performance and distribution charts change to show the details of what you've selected. This behavior is the same throughout the OnCommand Insight GUI.
4.3 CANDIDATES FOR HOST VIRTUALIZATION BASED ON ACTUAL PERFORMANCE
This performance view from the host SAN perspective shows you which servers are the busiest and which are candidates for virtualization.
Toggle off the two performance charts.
Collapse the expanded columns using the Collapse All Groups icon on top of the main view.
Expand Hosts again. Notice that your busiest servers are at the top of the list.
Use the vertical slide bar to go to the bottom of the host list to see your least busy hosts. As you see here, there are many hosts near the bottom that have hardly any traffic. Note: If you have a virtualization project going on, you can very quickly isolate which physical hosts don't have much traffic to their applications, and conduct your due diligence on those applications for possible relocation to the VM environment.
You can also use the same information to choose which ESX hosts are good candidates to receive those applications, based on how much traffic they are generating on the SAN.
4.4 STORAGE ARRAY PERFORMANCE BASED ON SAN TRAFFIC
We use the same logic and methods to optimize the traffic across the storage ports of the arrays.
Collapse the Hosts section and expand the Storage section.
You can see the busiest arrays at the top.
Expand storage array XP 1024 to see the traffic flowing through the storage ports. In this case, over 80% of the traffic is going across two of the six ports on the storage array. Not very well balanced. You can rebalance this traffic, or, using this information, you can select a lesser-used storage port to provision your NEXT Tier 1 application to. This helps you intelligently provision and optimize your environment using real traffic analysis.
4.5 STORAGE TIERING AND ANALYSIS
Similarly, as you did with the hosts, you can see the storage arrays that are NOT so busy.
Scroll to the bottom of the storage array list.
There are several expensive Tier 1 Symmetrix and other arrays at the bottom of this list with very little traffic accessing them. These arrays may have lots of data on them, but nobody is using it. Armed with this information, you could take a look at the application data on these expensive Tier 1 arrays and move the applications to less expensive Tier 2 or Tier 3 arrays, or archive the data. Then you can decommission or repurpose these expensive arrays. (LOTS of ROI potential here.)
4.6 SWITCH ISL TRAFFIC VISIBILITY AND OPTIMIZATION
OnCommand Insight shows you only the ISLs (Inter-Switch Links) under the Switches category.
Collapse the Storage category in the main view and expand the Switch category.
Expand Switch 78 and hcis300.
As we saw with hosts and arrays, we can see exactly how well balanced the traffic is across the ISLs. Switch hcis300 is well balanced, but on Switch 78 we see that 90% of the traffic is going across one switch link and only 9% across the other. If this is a trunk, it is severely out of balance.
We also see which are the busiest and least busy switches. This allows us to balance out (optimize) our environment, as well as weed out the least busy switches.
4.7 VIRTUAL MACHINE AND DATA STORE PERFORMANCE: TROUBLESHOOTING END-TO-END PERFORMANCE ISSUES USING "ANALYZE PERFORMANCE"
Let's put all this performance information to good use.
USE CASE: I may have gotten a call from a user complaining that the application on VM-70 is running slow, or I may have received an alert from a threshold being breached. Let's troubleshoot the problem.
Select Virtual Machine Performance.
Select Custom from the "Timeframe" dropdown menu next to the grouping menu. Enter the dates January 1, 2012 through now.
Then hit the green refresh button next to the dropdown.
Sort the VM Disk Top Latency column descending to get the longest latency at the top.
Here we see that VM-70 does not in fact appear to have any performance issues, but we do see very high CPU, memory, and datastore latency on VM-60 and VM-61.
Look at column 2. The common factor between VM-70 (the user complaint) and VM-60 is DS-30.
Open the Datastore Performance microview to validate the high latency.
Right-click VM-60 and select Analyze Performance.
This opens an analysis of everything associated with VM-60 and DS-30.
See the tabs across the top of the window. Each of these tabs provides in-depth visibility into performance within its category.
Selecting the Disk tab, I see that although there are a few high "top" utilization values, overall utilization and IOPS are relatively low, so I can rule out a hot disk issue.
Select the Volumes and Internal Volumes tabs. I see some relatively high top response times, but still very low IOPS, which tells me other factors are affecting response time and the slowness of the application on VM-70.
Select Backend Volumes. We see the storage is virtualized, and we can see the performance on the backend volumes here. I see some possibly higher IOPS, but still no glaring performance issues.
To make sure I don't have a SAN problem, I select the Switch Performance tab. It shows an imbalance between the 2Gb HBAs on ESX1 (where VM-60 and VM-70 live), and a potential optimization or outage issue, but no gridlock.
Select the Hosts tab. This tab shows me that host ESX1 is the same host that holds VM-60 and VM-70. VM-60 appears to be causing very high CPU and memory usage, which is causing contention with time sharing during disk access, thus creating high disk latency. But the disk IOPS are still very low.
We can deduce that VM-60 is probably not sized right for the application that is driving it so hard. This is probably what's causing the disk latency issue, so the chances of an actual disk problem are slim.
4.8 VM PERFORMANCE
VM Performance helps you troubleshoot the same scenarios. Here you can understand what's going on in the whole environment.
Select Virtual Machine Performance.
Sort the Top Disk Latency column in descending order so the largest latency rises to the top. In this case, VM-61 is chewing up a lot of memory and a lot of CPU time but using low disk IOPS. The VM itself appears to be causing the latency issues.
Select VM-61. You can open a microview and see the VMDK performance as well.
Add the Chart microview.
You can also break it out by volume performance and datastore performance, giving you a more holistic picture of the environment and helping you troubleshoot to resolution.
The takeaway is that you can troubleshoot performance issues from many different angles and go in many different directions to quickly narrow down the problem.
4.9 APPLICATION AND HOST PERFORMANCE
You can add your applications and hosts to any of these performance views to help you understand how performance is affecting your applications. That is important to the business customer. You can drill down and understand where the performance issue is, with visibility from the application all the way to the disks.
Scroll down to ESX1.
Use the horizontal slide bars in the main view and microviews to see the performance info.
OnCommand Insight shows you performance from the host prospective all the way back to the storage but remember it does not have agents on the host so it cannot show you the details of the performance on the host itself
Review questions
What is the value of Analyze Performance?
What are the areas where we can view performance metrics under Analyze Performance?
5 PLANNING TOOLS
5.1 TASK AND ACTION PLANNING AND VALIDATION
OnCommand Insight has two planning tools to help you plan, validate, and monitor changes in your environment. One is a change management tool and the other is a migration tool for switches only.
The change management tool (or What-If) helps you create tasks and actions within those tasks using a wizard. It helps you logically configure changes that you need to make, test and validate those changes before you make them, and monitor the progress of changes as you make them. This significantly reduces your risk when making changes because you can pretest them before you make any actual changes in your environment.
NOTE: Remember, OnCommand Insight is a READ ONLY tool, so it does not perform any active tasks. Use it in the planning, validating, and execution monitoring of your change management.
Select Planning Menu
Select Plans to access the tool
Select the task ID oadmin 01082007 – Replace HBA Clearcase1.
Notice the Actions list for the task. These are generated by you to help you logically and accurately list out the tasks.
To add more actions, simply right click in the action area and select "Add Action".
In the new action window, scroll down and select the action you want to perform. You can add a description and other parameters, then select OK.
Then you can pre-validate the actions to ensure you know the results of each action BEFORE you actually perform the task. To do this, right click the task and select Validate task.
As you see below, OnCommand Insight validates each action against the current configuration in your environment to show what has been completed correctly (GREEN CHECKMARK), what is not completed (BLANK BOX), and what is not completed correctly (RED X).
When you build the action list, OnCommand Insight automatically compares your planned changes to your existing environment and anticipates any future violations that could occur if you made these changes without correcting planned actions, OR violations that already exist in your environment.
Once you complete creating your list of action items, you can right click and validate the actions as many times as you want until completed. OnCommand Insight validates every one of these actions. It shows you whether the actions are complete, done incorrectly, or not completed at all. It gives you a preview into potential issues before you make the changes, thus lowering your risk.
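The three validation states can be thought of as a simple comparison between each planned action and what discovery currently sees. This is only a conceptual model of what the wizard does; the action and configuration structures here are invented:

```python
# Illustrative sketch of the three validation states Insight displays
# (green checkmark, blank box, red X). Data structures are invented.

DONE_OK, NOT_DONE, DONE_WRONG = "green checkmark", "blank box", "red X"

def validate_action(action, current_config):
    """Compare one planned action against the live configuration."""
    actual = current_config.get(action["target"])
    if actual is None:
        return NOT_DONE            # nothing has changed yet
    if actual == action["expected"]:
        return DONE_OK             # completed correctly
    return DONE_WRONG              # completed, but not as planned

config = {"hba1": "wwn-new"}       # what discovery currently sees
actions = [
    {"target": "hba1", "expected": "wwn-new"},    # HBA replaced correctly
    {"target": "zone1", "expected": "zoneset-b"}, # zoning change not done yet
]
print([validate_action(a, config) for a in actions])
```

Re-running validation as the lab suggests is just re-running this comparison against the latest discovered configuration.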
5.2 SWITCH MIGRATION TOOL
The migration tool provides you with instantaneous visibility into all of the environment and business entities that will be affected by a migration to new or updated switches. Say you want to just update the firmware on a switch. What if it goes down in the middle of the upgrade? What does it affect in your environment? Knowing this ahead of time can reduce your risk by giving you the complete picture of who and what will be affected by the interruption.
The migration tool allows you to tell OnCommand Insight which switches you want to upgrade or replace. Because OnCommand Insight knows all the hosts, storage arrays, volumes, business units, and applications that are affected by this change, it can provide you with the current violations as well as the future violations that will occur when the switches are pulled out. This enables you to validate the total impact of the changes you want to make BEFORE you make them, so you can reduce your risk by fixing issues before they occur.
NOTE: Remember, OnCommand Insight is a READ ONLY tool, so it does not perform any migration tasks. Use it in the planning and execution monitoring of your migration.
Under the Planning menu, select Migrations. This shows you the migration tasks already created and the existing impact of proposed changes on your business entities.
To add a new task right click on the task area and select Add Task
Complete the task details above and click Next to select the switch(es) to migrate.
Select the switches to be updated or replaced and click Finish.
Select the new task in the main screen and use the microviews to view the affected paths, impact, and quality assurance views.
Using this information you can speed up switch migrations, because it cuts the due diligence time and lowers your risk: you know the impacts before you take any actions.
6 DATA WAREHOUSE
6.1 INTRODUCTION AND OVERVIEW
Let's introduce you to the data warehouse. We'll talk about the Datamarts and navigation, then we'll go into the reports, and we'll finish by showing you how to create ad-hoc reports using Query Studio.
The data warehouse is made up of several Datamarts. Datamarts are sets of data that relate to each other.
Open a browser and go to http://localhost:8080/reporting
Log on using admin/admin123
If you receive this page, uncheck "show this page…" and select My Home
Data warehouse (DWH) Home Page Public Folders
The data warehouse has several built-in Datamarts throughout. Above you see the 3 primary Datamarts: the Chargeback Datamart, the Inventory Datamart, and the Storage Efficiency Datamart. Additionally, we have two folders which contain other Datamarts for Capacity and Performance.
Select the Capacity 6.3 folder
As you can see, there are other capacity-related Datamarts here, including the Internal Volume, Volume, Storage and Storage Pool, and VM Capacity Datamarts. Datamarts provide you with easy-to-use data elements related to those specific categories, making it easier for you to use the existing reports and, more importantly, helping you create your own custom reports using the drag-and-drop technology we'll show later in this lab.
Select the Storage Capacity Datamart
There are 4 folders located within EVERY Datamart. Most built-in reports are in the Reports folder. Any custom reports you create MUST be saved in the Customer Report or Customer Dashboard folders in order to preserve them during upgrades.
Select Dashboards (notice the BREADCRUMBS to help you navigate).
Which dashboards are located in the folder?
6.2 PLAN - CAPACITY FORECAST DASHBOARD
The data warehouse has over 200 built-in dashboards and reports. Let's take a look at a few.
The capacity forecast dashboard provides a history of how storage has been used, as well as trends and forecasts out into the future. It shows by data center and by tier.
Select the Capacity Dashboard. This may take a bit of time to paint, so be patient.
The capacity forecast dashboard provides you with trending and forecasting of your capacity across your entire environment. NOTE: your data in the picture may vary depending on the demo db you are using and the date (because it's a trending chart).
While we are at it, let's also stage the tiering dashboard in a new window by holding the shift key and selecting the Tiering Dashboard, so we can discuss it as well in a few minutes.
When it first opens, you see in the upper left the Capacity Consumption Forecast report by datacenter and tier. The initial view shows how much storage is left in each datacenter, by tier, before it reaches 80% of capacity (adjustable by the user). The graph on the right depicts the usage trending and forecasting over time. The "Reset Selection" button resets the graphic to show storage trending across the entire enterprise.
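Conceptually, the forecast answers "at the current growth rate, how long until this tier hits the threshold?" A minimal sketch, assuming a simple linear trend over daily used-capacity samples (Insight's actual forecasting model may differ; the data and 80% threshold are illustrative):

```python
# Hedged sketch of the dashboard's idea: fit a least-squares line to daily
# used-GB samples and estimate when usage crosses a capacity threshold.

def days_until_threshold(used_gb_history, capacity_gb, threshold=0.80):
    """Linear trend over day indices; returns days until the threshold,
    or None if usage is flat or shrinking."""
    n = len(used_gb_history)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(used_gb_history) / n
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = sum((x - mean_x) * (y - mean_y)
                for x, y in zip(xs, used_gb_history)) / var
    if slope <= 0:
        return None
    remaining = threshold * capacity_gb - used_gb_history[-1]
    return max(0.0, remaining / slope)

history = [700, 710, 720, 730, 740]         # GB used/day, growing 10 GB/day
print(days_until_threshold(history, 1000))  # 6.0 days to reach 800 GB (80%)
```

The dashboard does this per datacenter/tier cell of the matrix; clicking a cell restricts the right-hand trend chart to that cell's data.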
Select the Tokyo/Gold-Fast block on the matrix. Notice the graph at the right changes to reflect the storage consumption trending and forecasting for that tier at that datacenter.
Reset the Storage Capacity Trend chart by clicking Reset Selection next to the matrix chart. The chart on the right will show the trending and forecasting for the entire enterprise.
Scroll down the dashboard to view the list of reports on the right side. Each of the dashboards has a list of related reports on the lower right hand side. You can select from any number of different reports to provide the detailed information that you need.
The dashboard also contains some dial graphics showing you storage consumption and capacity in your enterprise and each datacenter
Continuing down the left side of the dashboard, these charts show you business-level storage consumption by business entities. Here we can drill down to see usage by Tenant, Line of Business, Business Unit, and Project.
Right click in this graphic and you can drill down to view storage usage by line of business; drill again to go to business unit, and then project.
As you can see, you get really detailed information on consumption by your business entities, from Tenant, LOB, Business Unit, and Project to Application, in a very quick form.
6.3 TIER DASHBOARD
Let's take a look at the Tier dashboard that we opened up a few minutes ago by selecting it from the tabs at the bottom of your Windows screen.
Note: Your data may vary depending on the database used for this demo.
This dashboard gives us a different perspective on how storage is growing and how it is being used. As you see, it looks like the gold tier has remained relatively stable over the past few months, while gold-fast storage, which is more expensive storage, has grown considerably over the past couple of months. This tells you how your tiering initiatives are progressing. Bronze, which has hardly grown at all, could be an indication that we're spending too much money on storage. You might want to review your storage usage using OnCommand Insight to see how the storage is being consumed and by whom.
Scroll down. Let's look a little closer. OnCommand Insight shows storage usage by business units, applications, and tier. This enables you to understand how storage is being used. You can also view it by data center, tier, and business entity.
As we did in the last report, you can right click and drill down to look at consumption by tenant, line of business, business unit, project, and application. You can understand how your data is being consumed at multiple levels and from multiple aspects.
Select the "Return" icon at the top right of the Tier Dashboard to return to the folder.
There is a new Storage Tier Report located in the Storage and Storage Pool Datamart. Let's take a quick look at it.
Use the breadcrumbs to navigate back to the Capacity 6.3 folder
Then select Storage and Storage Pool Capacity DataMart and Reports folder
Next select the Storage Capacity By Tier Report to view the report below. This report shows your capacity by tier and how it trends over time. It also provides detail and summary sections at the bottom showing each array's tiers, how much capacity is used, and the percentage (lots of information on a single report).
6.4 ACCOUNTABILITY AND COST AWARENESS
The standard data warehouse chargeback reports are more about accountability than actual chargeback. We'll show you this now. We'll also show you how to create your own powerful "custom" chargeback/showback reports using Business Insight Advanced later in this lab.
Select Public folders in the breadcrumbs on top left of the Data warehouse window
Select the Chargeback Datamart.
In the Chargeback Datamart, select the Reports folder to access various reports that show capacity and accountability.
Select Capacity Accountability by Business Entity and Service Level Detail. Here you have the option to customize this report to your needs by selecting service levels, resource types, applications, and host and storage names. You also have the option of selecting the business entity by using the dropdown to select any or all of the business entities and projects.
Select all in each category to give you a good representation of the in-depth reporting. Then click Finish.
The report provides a very detailed view of capacity utilization by business entity and application: the host it's running on, the storage array, the volume, and the actual provisioned and used storage. The report is grouped by business unit as well as application, which gives you a good representation of who's using what storage.
Note the scroll bar for scrolling on page 1; you can also use the Page Up/Page Down links at the bottom to go to page 2, etc.
Select the Return Icon in the upper right to return to the folder of reports
6.5 UNCHARGED STORAGE
You can also generate reports that help you understand what storage is NOT being accounted for.
Select "Capacity Accountability by Uncharged Capacity per Internal Volume". This provides you with a complete listing, by array and volume, of how much storage is not being charged or accounted for.
You get FULL accountability of which storage is being accounted for and which storage is NOT, across the entire enterprise, regardless of storage vendor.
6.6 IOPS VS CAPACITY REPORTING IN THE DATA WAREHOUSE
Let's look at performance versus capacity and orphaned storage by last access. This adds another dimension to how your storage is being used.
Open the Performance Datamart (hint: use the breadcrumbs to select Public Folders and then select the Performance Datamart).
Select the Internal Volume Daily Performance folder. This provides a really good pictorial view of how your storage is being used.
Select Reports and select Allocated used internal volume Count by IOPS Ranges. This provides a capacity versus IOPS report, which is very interesting.
Select the Last Year time period.
Select All storage models and tiers and click Finish.
Select all arrays and all tiers to give you a full view of how your storage is being used (or not being used…).
Looking at the results, remember this is storage accessed over the past year. The resulting report shows you all the storage that has (or has not) been accessed over the past year.
As you see from the first bar, there are over 7300 volumes that have not been accessed in the past year. If we look at it in terms of size, over 3.4 PB has had zero access in the past year. Note: this is actual customer data, but the names have been sanitized.
You can see how impactful this is. There is over 3.4 PB of storage that has had zero use for a year. This information enables you to start making business decisions on that storage and to better understand how it's being used, so you can reclaim and re-purpose some of it. (Talk about ROI!)
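The reclaim logic behind this report amounts to a filter-and-sum over volume records: keep volumes with zero access in the review window, then total their capacity. A sketch with invented field names and figures (not Insight's schema):

```python
# Illustrative sketch of the report's logic: find volumes with zero accesses
# in the past year and total the reclaimable capacity. Data is invented.

def reclaim_candidates(volumes):
    """Volumes with no IOPS in the period, plus their combined capacity in GB."""
    idle = [v for v in volumes if v["total_iops_past_year"] == 0]
    return idle, sum(v["capacity_gb"] for v in idle)

volumes = [
    {"name": "vol_a", "capacity_gb": 500, "total_iops_past_year": 0},
    {"name": "vol_b", "capacity_gb": 200, "total_iops_past_year": 120_000},
    {"name": "vol_c", "capacity_gb": 800, "total_iops_past_year": 0},
]
idle, reclaimable_gb = reclaim_candidates(volumes)
print(len(idle), reclaimable_gb)  # 2 1300
```

Scaled up, this is exactly the "7300 volumes / 3.4 PB untouched" headline the chart gives you.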
6.7 DIGGING INTO THE DETAILS
These charts are really nice, but you need the details to effectively work on identification and recovery. OK, let's go look at the underlying details.
Go back to the Volume Daily Performance 6.3 folder and drill down to Reports (hint: it's in the Performance Datamart).
Select the Array Performance report. This gives you a complete breakdown of the performance for all storage, from the arrays all the way down to the volumes.
Select one year and set the IOPS parameter you want to filter on (I usually start at the default).
This report starts with the Orphan Summary.
Select Page Down to view the storage array summary.
As you see, this is pretty high-level. It shows the total amount of raw and allocated capacity in each storage device versus the total IOPS and the max IOPS actually used over the past year. This tells a very compelling story, but it's still high level.
Page down a few pages to reach the bottom of this section. You see a Glossary of terms explaining the column headings.
Now continue to page down to the Host tables. These show you the hostname, the raw and allocated capacity by host, and the IOPS accessed over the past year. This is more detail than the Storage tables above.
Page down past the host tables and you get the orphaned volumes perspective. Here is a great deal of detail that you can use. These are all the volumes that have not been accessed in a full year. It shows you the array name, volume capacities, and hostname, as well as the applications and tiers, for everything that has not been accessed in the last year.
Page down to the "Volume by IOPS" tables (may be several pages down). These show you the storage array, volume, capacity, host, application, tier, and the max and total amount of IOPS. So we can say it's a pretty well-rounded report that shows you actual usage (or lack thereof), so you can go reclaim the storage that is not used.
6.8 VM CAPACITY REPORTING
There are several different reports in the VM Capacity Datamart.
Navigate to the VM Capacity 6.3 Datamart.
As you see, we have several reports built in here already.
Select VM Capacity 6.3 and then navigate into the Reports folder.
Select VM Capacity Summary
Select all so we see the VM capacity across the entire enterprise (spanning multiple vCenters).
The results show all the VMs, their capacity, the data store, the actual capacity, the VM names, the provisioned storage, and the commit ratio of each VM across your entire environment. NOTE: I paged down to the bottom so you can see the total storage and commitment across your whole enterprise, plus a glossary of terms.
Select the "return" button in the upper right corner of the report (it looks like a left turn arrow).
Next select the Inactive VMs report to show VMs that have not been accessed in a defined period of time (default 60 days).
Set this time threshold and click Finish.
This is an excellent report showing you which VMs are powered off and how long they have been powered off, as well as how much capacity each one of those is holding that nobody else can use. It gives you all the details, including the datacenter, VM OS, ESX host, cluster, VMDK, and how long it's been powered off. Armed with this information, you can go recover these VMs or reclaim the storage.
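The report's filter boils down to "powered off longer than the threshold, and still holding capacity." A sketch with invented VM records (the 60-day default comes from the report prompt; the field names are illustrative):

```python
# Hedged sketch of the Inactive VMs report's filter. Structures are invented;
# Insight gathers this from vCenter discovery, not from a Python API.

def inactive_vms(vms, threshold_days=60):
    """VMs powered off beyond the threshold, plus the capacity they still hold."""
    stale = [vm for vm in vms
             if vm["powered_off_days"] is not None
             and vm["powered_off_days"] > threshold_days]
    return stale, sum(vm["vmdk_gb"] for vm in stale)

vms = [
    {"name": "vm-old",     "powered_off_days": 120,  "vmdk_gb": 80},
    {"name": "vm-live",    "powered_off_days": None, "vmdk_gb": 40},  # running
    {"name": "vm-new-off", "powered_off_days": 10,   "vmdk_gb": 60},
]
stale, held_gb = inactive_vms(vms)
print([vm["name"] for vm in stale], held_gb)  # ['vm-old'] 80
```

The held-capacity sum is what makes the business case for recovering or deleting those VMs.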
7 CREATE AD-HOC REPORT
Let's show you how easy it is to create custom reports in the data warehouse.
7.1 HOW TO CREATE A CUSTOM SHOWBACK/CHARGEBACK REPORT USING
BUSINESS INSIGHT ADVANCED
Below is a great example of the custom chargeback or showback report that you will create. It shows usage by Business Entity and Application, including the variable cost of each VM based on configuration, fixed overhead, and storage usage.
STEPS TO CREATE THIS REPORT
Watch a video on how to create this report. Note: you need a user name and password for this community. To obtain them, click the Become a Member link.
The OnCommand Insight Reporting Portal is accessed through http://<reporting-server>:8080/reporting
Enter User name and Password credentials
From the Welcome page select My home
From the Launch menu (at the top right corner of the OnCommand Insight Reporting
portal) select Business Insight Advanced
From the list of all packages that appears, click on the Capacity <version> folder, and then click on VM Capacity <version>.
Create a new report by selecting New from the dropdown in the upper left corner, or Create New if you are on the Business Insight Advanced landing page.
From the pre-defined report layouts in the New pop-up, choose List and click OK.
In the lower right pane, select the Source tab and expand Advanced Data Mart from the VM Capacity package.
From the Advanced Data Mart, expand Business Entity Hierarchy and Business Entity, and drag Tenant onto the report work area.
Collapse Advanced Data Mart and expand Simple Data Mart.
From Simple Data Mart, drag Application and place it on the report work area to the right of the Tenant column. (TIP: Make sure you place it on the blinking gray BAR to the right of the previous column or it will give you an error.)
Now we are going to drag multiple columns to the palette to save time building the report.
We will be reporting on the total number of processors (cores) and the memory that is configured for each VM. So let's grab the following elements from the VM Dimension under the Advanced Data Mart.
From Advanced Data Mart expand VM Dimension
Select the next columns IN THE FOLLOWING ORDER.
From Advanced Data Mart > VM Dimension, hold the control key and select the following columns (in order):
o VM Name
o Processors
o Memory
Click and drag VM Name and place it on the report work area to the right of the Application column. NOTE: all the columns should follow in the order you selected them, similar to the screenshot below (your data will differ but the columns will be the same).
Now let's bring capacity information onto the report.
From Simple Data Mart, hold the control key and select the following columns (in order):
o Tier
o Tier Cost
o Provisioned Capacity (GB)
Click and drag the Tier column and place it on the report work area to the right of the Application column. NOTE: all the columns should follow in the order you selected them, similar to the screenshot below (your data will differ but the columns will be the same).
To create a summary of cost per GB, hold the control key and select Tier Cost and Provisioned Capacity (GB).
Then right click the Provisioned Capacity column, select Calculate, and select the multiplication calculation.
Business Insight Advanced has created a new column for you, completed the calculations, and put it in the report.
Next, let's format and re-title the column.
Right click on the new column head and select Show Properties
In the lower right corner, scroll down to the bottom of the properties box and select the ellipsis on the Data item name box. Change the name to Storage Cost and click OK.
Note the column heading is now Storage Cost
Now select one of the numeric values in that column and select the Data Format ellipsis from the properties box in the lower right corner.
From the Data Format dialog box select currency from the Format type dropdown
As you see from the Properties dialog box, there are lots of options you can set to format the currency numbers in this column. The default is USD, so let's just click OK to accept it. You will see the column reformat to USD.
Here is our current report. Let's filter out storage that is NOT being charged.
Select any BLANK cell in the Tier Cost column and click on the filter icon in the top toolbar.
Select Exclude Null.
Here is our current report. Notice that all the rows that had NO cost associated with those tiers are gone, leaving you with only the storage that has charges associated with it. (TIP: in another report you can reverse the logic and show only storage that is NOT being charged.)
You can also format the Tier Cost column with USD currency as well if you want
OK, that was easy, but not complete. Let's add other cost factors into your chargeback report: the cost of VM service levels by configuration, and fixed overhead costs used by each application.
ADDING VARIABLE COSTS PER VM TO YOUR CHARGEBACK REPORT
Let's say the customer wants to charge per VM based on the number of CPUs and the memory it's configured with. To do that, we first need to create a VM Service Level made up of the number of CPUs and memory configured for each VM, then allocate a cost per service level.
To create a VM Service Level, we are going to drop in a small conditional expression to build the service levels per VM. This is an easy example of the flexibility of Business Insight Advanced in creating reports. (DON'T panic: you can skip the conditional expression and just put a fixed cost on each VM if you want, as in the Overhead example later on, but humor me here in this lab.)
Select the Tier column; this is where we want to insert the new columns.
Select the Toolbox tab at the lower right corner and double click the Query Calculation icon.
In the Create Calculation dialog box, name the column VM Service Level, select Other Expression, and click OK.
In the Data Item Expression dialog box, copy and paste the following VM Service Level conditional expression into the Expression Definition box and select OK. (Note: if you are remoted into the OnCommand Insight server, you may have to create a text document on the OnCommand server desktop to paste this into before pasting it into the Expression box.)
Below is an example of the conditional expression that gives you the if-else conditions for VM Service Level:
IF ([Processors] = 2 AND [Memory] < 2049)
THEN ('Bronze')
ELSE (IF ([Processors] = 2 AND [Memory] < 4097)
THEN ('Bronze_Platinum')
ELSE IF ([Processors] = 4 AND [Memory] < 8193)
THEN ('Silver')
ELSE IF ([Processors] = 4 AND [Memory] > 8193)
THEN ('Silver_Platinum')
ELSE IF ([Processors] = 6 AND [Memory] > 8191)
THEN ('Gold')
ELSE IF ([Processors] = 8 AND [Memory] > 16383)
THEN ('Gold_Platinum')
ELSE ('tbd'))
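If you want to sanity-check the boundaries before pasting the expression into the report, here is a direct Python translation of the same logic. Python is not part of the lab; this just mirrors the conditions, reading memory values as MB to match the 2049/4097/8193/16383 cut-offs:

```python
# Python mirror of the lab's VM Service Level conditional expression.
# Note the boundary gaps inherited from the expression: e.g. 4 CPUs with
# exactly 8193 MB matches neither Silver nor Silver_Platinum and falls to 'tbd'.

def vm_service_level(processors, memory_mb):
    if processors == 2 and memory_mb < 2049:
        return "Bronze"
    elif processors == 2 and memory_mb < 4097:
        return "Bronze_Platinum"
    elif processors == 4 and memory_mb < 8193:
        return "Silver"
    elif processors == 4 and memory_mb > 8193:
        return "Silver_Platinum"
    elif processors == 6 and memory_mb > 8191:
        return "Gold"
    elif processors == 8 and memory_mb > 16383:
        return "Gold_Platinum"
    return "tbd"

print(vm_service_level(2, 1024))   # Bronze
print(vm_service_level(4, 16384))  # Silver_Platinum
```

Walking a few VM configurations through this before building the report makes it easy to spot which ones will land in the 'tbd' catch-all.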
Business Insight Advanced will validate the conditional expression (nice to know if you got it right), create the column called VM Service Level, and populate it based on the query. (If you get an error, your conditional expression probably has a syntax or other error.)
You will see a new column added called VM Service Level, with the various service levels for each VM based on the number of CPUs and amount of memory each has. (At this point there may be duplicates in the list, but not to worry; we are not finished formatting or grouping the report.)
Next, let's add a column that calculates the cost per VM based on the service levels we just established.
Select the Toolbox tab at the lower right corner and double click the Query Calculation icon.
In the Create Calculation dialog box, name the column Cost Per VM, select Other Expression, and click OK.
In the Data Item Expression dialog box, paste the conditional expression for Cost of VM (below) into the Expression Definition box and select OK.
Example of the conditional expression for Cost per VM:
IF ([VM Service Level] = 'Bronze') THEN (10)
ELSE (IF ([VM Service Level] = 'Bronze_Platinum') THEN (15)
ELSE IF ([VM Service Level] = 'Silver') THEN (20)
ELSE IF ([VM Service Level] = 'Silver_Platinum') THEN (25)
ELSE IF ([VM Service Level] = 'Gold') THEN (40)
ELSE IF ([VM Service Level] = 'Gold_Platinum') THEN (55)
ELSE (30))
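The same cost mapping can be expressed as a small Python lookup for verification. The dollar values come straight from the expression above (including the 30 charged by the ELSE branch); Python itself is not part of the lab:

```python
# Python mirror of the Cost per VM expression: service level -> monthly charge.
VM_COST = {
    "Bronze": 10, "Bronze_Platinum": 15,
    "Silver": 20, "Silver_Platinum": 25,
    "Gold": 40, "Gold_Platinum": 55,
}

def cost_per_vm(service_level):
    # Unknown levels (the expression's ELSE branch) are charged 30.
    return VM_COST.get(service_level, 30)

print(cost_per_vm("Gold"), cost_per_vm("tbd"))  # 40 30
```

A dictionary lookup with a default is the idiomatic way to hold a rate card like this; changing a price means editing one entry rather than a nested conditional.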
You will see a new column added called Cost Per VM, with variable costs for each VM based on the service level.
Next, format the data in the Cost per VM column as USD currency, as you did above.
ADDING FIXED OVERHEAD COSTS TO YOUR CHARGEBACK REPORT
Let's say the customer has determined that the total cost for overhead (including items like heat/AC, floor space, power, rent, operations personnel, helpdesk, etc.) is $24 per VM. Let's create a column called Cost of Overhead and apply this fixed cost. (Note: you can do this for any fixed costs in this way, rather than using SQL.)
Select the Toolbox tab at the lower right corner and double click the Query Calculation icon (as above).
In the Create Calculation dialog box, name the column Cost of Overhead, select Other Expression, and click OK.
In the Data Item Expression dialog box, enter a cost of 24 in the Expression Definition box and select OK.
You will see a new column added called Cost of Overhead, with 24 for each VM. (Note: at this point there may be duplicates in the list, but not to worry; we are not finished formatting or grouping the report.)
Next, format the data in the Cost of Overhead column as USD currency, as you did above. Then drag the column header and drop it to the right of the Storage Cost column, as shown below.
Subtotaling, naming, and saving the report
Now that we have a cost per VM, overhead, and the cost of storage usage by Tenant, Application, and VM, let's sum the total costs and finish formatting the report by Tenant and Application.
Hold the control key down and select a numeric cell in each of the Cost per VM, Storage Costs, and Cost of Overhead columns. Right click one of the numeric cells, select Calculate, and choose the add function for the three columns.
This will create a new column called "Cost per VM + Storage Costs + Cost of Overhead" and calculate each row.
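The calculated column amounts to a per-row sum of the three cost columns built so far. A tiny sketch with illustrative numbers (in the report, the add is done through Calculate, not code):

```python
# Sketch of the summed column: Total Cost of Services =
# Cost per VM + Storage Cost + Cost of Overhead, computed per report row.

def total_cost_of_services(row):
    return row["cost_per_vm"] + row["storage_cost"] + row["cost_of_overhead"]

row = {
    "vm": "VM-61",              # illustrative row, not demo data
    "cost_per_vm": 20,          # e.g. the Silver service level charge
    "storage_cost": 50 * 1.5,   # provisioned GB x tier cost per GB
    "cost_of_overhead": 24,     # the fixed $24 per VM from the lab
}
print(total_cost_of_services(row))  # 119.0
```

Grouping by Tenant and Application, which the next section does, then just subtotals this value per group.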
Now format the column for USD currency and retitle the column to "Total Cost of Services".
Name the report "Total Storage, VM, and Overhead Cost by Tenant and Application Chargeback (Showback)" by double clicking the title area.
Now save it to the Customer Report folder using the same name
FORMATTING AND GROUPING THE REPORT BY APPLICATION AND TENANT
We are not done yet. Now we need to format the report by grouping, subtotaling, and totaling by tenant and application.
Hold the control key down and select the "Cost per VM", "Provisioned Capacity", "Storage Costs", "Cost of Overhead", and "Total Cost of Services" columns.
Select Total from the Summary dropdown icon.
If you page down to the bottom of the report, you will see total columns. We'll clean up the summary rows in a minute.
Let's group the report by Tenant and Application.
Hold the control key down and select the Tenant and Application columns.
Select the Grouping icon from the top toolbar.
CLEANING UP THE REPORT AND RUNNING IT
To clean up the report, right click and delete the summary ROWS (not columns).
Then go to the bottom of the report, hold the control key, select both summary rows, right click, and delete them. (Leave the TOTAL rows.)
Save the report
Now let's run the report to see how it looks.
Select the Run icon from the toolbar and run the report as HTML. (Note the other formats you can run it in if you want…)
The report will show like this in its final format. I've paged down in the report below to show you subtotals; you can page to the bottom and see the totals by company and the total of all resources charged.
These reports are extremely flexible to do what you need. Notice the drill down link in the Tenant column (pictured above in the red circle). If you click on the LINK, you will drill down from Tenant to Line of Business, then to Business Unit, etc. If you right click on the link, you can drill up as well.
You can now schedule this report to run and distribute it in various formats, like any other OnCommand Insight Data Warehouse report.
Remember, now that you have created this report, every time you run it, it will provide the latest usage information. You can automate this report by scheduling it to run and email it to recipients, etc. Lots of flexibility…
7.2 OTHER OPTIONS FOR AD-HOC REPORTS USING QUERY STUDIO
You can also create simple ad-hoc reports by using Query Studio. A very simple example is shown here.
Log onto the data warehouse using admin/admin123 (you must be logged on as admin to use Query Studio).
From Public Folders, select the Chargeback Datamart.
Select the Launch menu in the upper right corner of the view and select Query Studio.
The Datamart is split into a "simple Datamart" and an "advanced Datamart". The simple DM contains the elements that most users use for reports. The advanced DM contains all the facts and dimensions for all the elements. At this point, we'll create this report using the simple DM to show you how easy it is.
Expand the Simple DM and do the following:
Click and drag Business Unit to the palette.
Click and drag the Application element to the palette. You see the applications line up with their proper business units.
Click and drag Tier over to the palette to organize the storage usage by tier.
Click and drag ldquoProvisioned raw by GBrdquo (You can select megabyte or terabytes as
well as gigabyte Irsquove selected GB because this is from a volume perspective and
application perspective
To calculate cost we need to add the ldquoTier costrdquo to the report
Click and drag the ldquoTier costrdquo element over place it between the Provisioned Raw and
the Tier column
To filter out any storage without an associated tier cost, right-click the heading of the Tier Cost column and select Filter (see below for reference):
o Select "Show only the following" (the default)
o Select "Missing values" to expand it
o Select "Leave out missing values"
o Select OK
See the results below.
Now let's calculate the total cost of usage by GB per application.
Hold the Control key and highlight the "Provisioned Capacity" and Tier Cost columns until they show yellow.
Select the green Calculation icon at the top of the edit icons above, or right-click on the columns and select "Calculate".
In the calculation window, select multiplication, title the new column "Cost for Storage", and click Insert. It creates a new column and completes the calculation.
To format the column, right-click on the new column and select Format Data.
Select currency, the number of decimal points (usually 0), and the 1000s separator, then click OK. See how the column is formatted now.
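The arithmetic behind that calculated column is simply provisioned capacity multiplied by the per-GB tier cost, then formatted as currency. Here is a minimal Python sketch of the same math; the application names and rates are hypothetical, invented only for illustration:

```python
# Hypothetical report rows: provisioned capacity (GB) and per-GB tier cost.
rows = [
    {"app": "Exchange", "provisioned_gb": 1200, "tier_cost_per_gb": 1.50},
    {"app": "Oracle",   "provisioned_gb":  800, "tier_cost_per_gb": 3.00},
]

for row in rows:
    # The "Cost for Storage" calculated column: GB x tier cost.
    row["cost_for_storage"] = row["provisioned_gb"] * row["tier_cost_per_gb"]
    # Currency formatting: 0 decimal places with a thousands separator,
    # matching the Format Data options chosen above.
    row["formatted"] = "${:,.0f}".format(row["cost_for_storage"])

print(rows[0]["formatted"])  # $1,800
```

The same multiply-and-format step applies to every row of the report, which is exactly what the calculated column does for you.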
Double-click the "Title" on the report and re-title the report "Chargeback by Application and BU".
Now you don't really need the Tier Cost column, so you can delete it by right-clicking on the column and selecting Delete.
This is a good raw report, but now let's make it more useful.
To group storage cost by Business Unit and Application:
Select the Business Unit column (it turns yellow) and select the Group By icon on the top line.
You see the report reformat itself into cost by application by business unit.
Click the "Save As" icon and save the report to the public folders.
Further Editing
You can go back and further edit the report like this:
Let's filter out all the N/A values in the BU and Application columns. You have to do this one column at a time.
Right-click the BU column and select Filter.
In the filter dialog window, select "Do not show the following (NOT)" from the "Condition" dropdown.
Select N/A and click OK.
Do the same for the Application column.
Then save the report again.
As you see, you now have a better-quality report.
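Conceptually, the filtering and grouping you just did amount to dropping the N/A rows and summing cost per business unit. A small Python sketch of that logic, with hypothetical row data invented for illustration:

```python
from collections import defaultdict

# Hypothetical rows mirroring the Query Studio report.
rows = [
    {"bu": "Finance",   "application": "Oracle",     "cost": 2400.0},
    {"bu": "N/A",       "application": "N/A",        "cost":  500.0},
    {"bu": "Finance",   "application": "Exchange",   "cost": 1800.0},
    {"bu": "Marketing", "application": "SharePoint", "cost":  900.0},
]

# Equivalent of the "Do not show the following (NOT)" filter on both columns.
kept = [r for r in rows if r["bu"] != "N/A" and r["application"] != "N/A"]

# Equivalent of selecting the Business Unit column and clicking Group By.
cost_by_bu = defaultdict(float)
for r in kept:
    cost_by_bu[r["bu"]] += r["cost"]

print(dict(cost_by_bu))  # {'Finance': 4200.0, 'Marketing': 900.0}
```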
To exit Query Studio, click the "Return" icon at the top-right corner of the screen.
8 SCHEDULING REPORTS FOR DISTRIBUTION
OK, now that the report is saved, let's schedule it for running and distribution. You can schedule any of the built-in reports in OnCommand Insight.
Go to the chargeback report we just created (you should be looking at the folder where you saved it…).
Select the Schedule icon on the right-hand side, where you can set the properties.
As you see on the right, you can schedule the start and finish dates.
You can send this report just one time by clicking Disable.
Set the schedule options for weekly, daily, monthly, etc. Schedule this report to run and send it to yourself at 3 p.m. every Tuesday until Feb 1, 2012. As you can see, you can schedule biweekly, several times a week, or several times a day, or you can also set it up by month, year, and even by trigger. As you see, lots of options.
There are a lot of options for the report format. The default format is HTML, but we can override that default by clicking and choosing from PDF, Excel, XML, CSV, etc.
For delivery, we can email it, save it, or print the report to a specific printer. You can send the report via email to users, distribution lists, etc. We can include a link to the report or attach it directly to the email as well. NOTE: Recipients must be able to log into the OnCommand DWH to access the link.
When you are done, click OK and the schedule is set.
9 ENDING COMMENTS AND FEEDBACK
I hope this lab was of value to you. Your feedback is important to the quality of this lab document. Please provide feedback to Dave Collins at davec@netapp.com.
NetApp provides no representations or warranties regarding the accuracy, reliability, or serviceability of any information or recommendations provided in this publication, or with respect to any results that may be obtained by the use of the information or observance of any recommendations provided herein. The information in this document is distributed AS IS, and the use of this information or the implementation of any recommendations or techniques herein is a customer's responsibility and depends on the customer's ability to evaluate and integrate them into the customer's operational environment. This document and the information contained herein may be used solely in connection with the NetApp products discussed in this document.
© 2012 NetApp, Inc. All rights reserved. No portions of this document may be reproduced without prior written consent of NetApp, Inc. Specifications are subject to change without notice. NetApp, the NetApp logo, Go further, faster, xxx, and xxx are trademarks or registered trademarks of NetApp, Inc. in the United States and/or other countries. All other brands or products are trademarks or registered trademarks of their respective holders and should be treated as such. TR-XXX-XX
Now select the Symmetrix-FAST array and toggle the Chart microview on. Here you see OnCommand Insight showing EMC FAST auto-tiering. You can also see the NetApp Hybrid Aggregates by selecting the NetApp array. We can also chart this performance over time.
Partial read/write indicates volume misalignment.
Notice the FAST-T volume performance.
OnCommand Insight provides complete end-to-end performance views through virtualized storage. Let's take a look.
Select the storage array called Virtualizer from the main view. (Note: This is a V-Series machine, but OCI provides the same visibility through other virtualizers as well.)
Toggle on the Virtual Machine Performance microview.
Toggle the Backend Volume Performance and Datastore Performance microviews on.
Use the slide bars at the bottom of the microviews to see more of the performance columns in each view. (Whoops… there is a red mark in the Latency column. We will analyze this later.)
I know this is a bit busy, but I wanted to demonstrate that you can have deep performance visibility from the VM, through the datastore, to the frontend virtualizer array, and through to the backend volumes. You can also drill performance down to the disks on that backend array, and you can select the Switch Port Performance microview to visualize the performance on the SAN, so you can see very deep performance information from end to end. We will "analyze" these performance views from end to end a bit later.
4.2 SAN PERFORMANCE (SWITCH PORT PERFORMANCE)
Switch performance is the actual performance on the SAN at the switch.
Select Switch Port Performance from the Performance menu. OnCommand Insight knows whether the switches are connected to arrays or hosts, so it shows you the performance in the context of the host or array rather than from the switch perspective.
Using the dropdown at the top of the table, group the main view by "Connected Device Type then Name".
Using the dropdown next to it, set the timeframe to "Last Week" and hit the refresh icon to the right.
Sort the Distribution column descending (arrow pointing down).
Expand Hosts.
Expand hosts ny_ora1 and exchange_ny1.
If you look at the Value and Distribution columns, you can see how the HBAs are balanced on these hosts. On host ny_ora1 you see three HBAs that are balanced very well. But looking at host exchange_ny1, you see that one of your two HBAs has over 95% of the traffic load on it, while the other has less than 5%. So you can see an imbalance of the load across your HBAs. Perhaps the multipath software is not configured correctly, doesn't work, or is installed but not turned on. However, also consider the fact that one HBA is 4 Gb and the other is 2 Gb; the admins may have purposely configured this host's traffic to compensate for the slower HBA…
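The Distribution column is simply each HBA's share of the host's total traffic over the period. A quick Python sketch of that calculation, using hypothetical traffic numbers that approximate the exchange_ny1 case:

```python
# Hypothetical per-HBA traffic totals (GB transferred over the period).
hba_traffic = {"hba0": 9500.0, "hba1": 450.0}

# Distribution: each HBA's percentage share of the host's total traffic.
total = sum(hba_traffic.values())
distribution = {hba: round(100.0 * gb / total, 1)
                for hba, gb in hba_traffic.items()}

# A simple imbalance check: flag the host when one path carries
# almost all of the load (multipathing possibly misconfigured).
imbalanced = max(distribution.values()) > 90.0

print(distribution, imbalanced)
```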
Select the Port Performance Distribution and Port Performance microviews to view this analysis over time.
Select exchange_ny1 from the main view above, and view the performance and distribution of both HBAs. If you select one or the other, the performance and distribution charts change to show you the details of what you've selected. This behavior is the same throughout the OnCommand Insight GUI.
4.3 CANDIDATES FOR HOST VIRTUALIZATION BASED ON ACTUAL PERFORMANCE
This performance view from the host SAN perspective shows you which are the busiest servers and which are candidates for virtualization.
Toggle off the two performance charts.
Collapse the expanded columns using the Collapse All Groups icon at the top of the main view.
Expand Hosts again. Notice your busiest servers are at the top of the list.
Use the vertical slide bar to go to the bottom of the host list to see your least busy hosts. As you see here, there are many hosts down near the bottom that have hardly any traffic. Note: If you have a virtualization project going on, you can very quickly isolate which physical hosts don't have much traffic to their applications and conduct your due diligence on those applications for possible relocation to the VM environment.
You can also use the same information here to choose which ESX hosts are good candidates to receive those applications, based on how much traffic they are generating on the SAN.
4.4 STORAGE ARRAY PERFORMANCE BASED ON SAN TRAFFIC
We use the same logic and methods to optimize the traffic across the storage ports of the arrays.
Collapse the Hosts section and expand the Storage section.
You can see the busiest arrays at the top.
Expand storage array XP 1024 to see the traffic flow through the storage ports. In this case, over 80% of the traffic is going across two of the six ports on the storage array. Not very well balanced. You can rebalance this traffic, OR, using this information, you can select a lesser-used storage port to provision your NEXT Tier 1 application to. This helps you intelligently provision and optimize your environment using real traffic analysis.
4.5 STORAGE TIERING AND ANALYSIS
Similarly to what you did with the hosts, you can see the storage arrays that are NOT so busy.
Scroll to the bottom of the Storage Array list.
There are several expensive Tier 1 Symmetrix and other arrays at the bottom of this list that have very little traffic accessing them. These arrays may have lots of data on them, but nobody's using it. Armed with this information, you could take a look at the application data on these expensive Tier 1 arrays and move the applications to less expensive Tier 2 or Tier 3 arrays, OR archive the data. Then you can decommission or repurpose these expensive arrays. (LOTS of ROI potential here.)
4.6 SWITCH ISL TRAFFIC VISIBILITY AND OPTIMIZATION
OnCommand Insight shows you only the ISLs (Inter-Switch Links) under the Switches category.
Collapse the Storage category in the main view and expand the Switch category.
Expand Switch 78 and hcis300.
As we saw with hosts and arrays, we can see exactly how well balanced the traffic is across the ISLs. Switch hcis300 is well balanced, but on Switch 78 we see that 90% of the traffic is going across one switch link and only 9% across the other. If this is a trunk, it is severely out of balance.
We also see which are the busiest and least busy switches. This allows us to balance out (optimize) our environment as well as weed out the least busy switches.
4.7 VIRTUAL MACHINE AND DATA STORE PERFORMANCE: TROUBLESHOOTING END-TO-END PERFORMANCE ISSUES USING "ANALYZE PERFORMANCE"
Let's put all this performance information to good use.
USE CASE: I may have gotten a call from a user complaining that the application on VM-70 is running slow, or I may have received an alert from a threshold being breached. Let's troubleshoot the problem.
Select Virtual Machine Performance.
Select Custom from the "Timeframe" dropdown menu next to the grouping menu. Enter the dates January 1, 2012 through now.
Then hit the green recycle button next to the dropdown.
Sort the VM Disk Top Latency column descending to get the longest latency at the top.
Here we see that, in fact, VM-70 does not appear to have any performance issues, but we do see very high CPU, memory, and datastore latency on VM-60 and VM-61.
Look at column 2. The common factor between VM-70 (the user complaint) and VM-60 is DS-30.
Open the Datastore Performance microview to validate the high latency.
Right-click on VM-60 and select Analyze Performance.
This opens an analysis of everything associated with VM-60 and DS-30.
See the tabs across the top of the window. Each of these tabs provides in-depth visibility into performance within its category.
Selecting the Disk tab, I see that although I have a few high "top" utilizations, overall utilization and IOPS are relatively low, so I can rule out a hot-disk issue.
Select the Volumes tab and then the Internal Volumes tab. I see there are some relatively high top response times but still very low IOPS, which tells me other factors are affecting response time and the slowness of the application on VM-70.
Select Backend Volumes. We see the storage is virtualized, and we can see the performance on the backend volumes here. I see some possibly higher IOPS, but still no glaring issues in performance.
To make sure I don't have a SAN problem, I select the Switch Performance tab. It shows an imbalance between the 2 Gb HBAs on ESX1 (where VM-60 and VM-70 are) and a potential optimization or outage issue, but no gridlock.
Select the Hosts tab. This tab shows me that host ESX1 is the same host that holds VM-60 and VM-70. VM-60 appears to be causing very high CPU and memory usage, which is causing contention with time sharing during disk access, thus creating high disk latency. But the disk IOPS are still very low.
We can deduce that VM-60 is probably not sized right for the application that is driving it so hard. This is probably what's causing the disk latency issue, so the chances of a disk problem are slim.
4.8 VM PERFORMANCE
VM Performance helps you troubleshoot the same scenarios. Here you can understand what's going on in the whole environment.
Select Virtual Machine Performance.
Sort the Top Disk Latency column in descending order so the largest latency rises to the top. In this case, VM-61 is chewing up a lot of memory and a lot of CPU time but driving low disk IOPS. The VM appears to be causing the latency issues.
Select VM-61. You can open a microview and see the VMDK performance as well.
Add a chart microview.
You can also break it out by volume performance and datastore performance, giving you a more holistic picture of the environment and helping you troubleshoot to resolution.
The takeaway is that you can troubleshoot performance issues from many different angles and go in many different directions to quickly narrow down the problem.
4.9 APPLICATION AND HOST PERFORMANCE
You can add your applications and hosts to any of these performance views to help you understand how performance is affecting your applications. That is important to the business customer. You can drill down and understand where the performance issue is, with visibility from the application all the way to the disks.
Scroll down to ESX1.
Use the horizontal slide bars in the main view and microviews to see the performance info.
OnCommand Insight shows you performance from the host perspective all the way back to the storage, but remember, it does not have agents on the host, so it cannot show you the details of the performance on the host itself.
Review questions:
What is the value of Analyze Performance?
In which areas can we view performance metrics under Analyze Performance?
5 PLANNING TOOLS
5.1 TASK AND ACTION PLANNING AND VALIDATION
OnCommand Insight has two planning tools to help you plan, validate, and monitor changes in your environment. One is a change management tool, and the other is a migration tool for switches only.
The change management tool (or "what-if" tool) helps you create tasks, and actions within those tasks, using a wizard. It helps you logically lay out the changes that you need to make, test and validate those changes before you make them, and monitor the progress of changes as you make them. This significantly reduces your risk when making changes because you can pretest them before you make any actual changes in your environment.
NOTE: Remember, OnCommand Insight is a READ-ONLY tool, so it does not perform any active tasks. Use it in the planning, validation, and execution monitoring of your change management.
Select the Planning menu.
Select Plans to access the tool.
Select the task ID oadmin 01082007 – Replace HBA Clearcase1.
Notice the Actions list for the task. These actions are generated by you to help you logically and accurately list out the task's steps.
To add more actions, simply right-click in the action area and select "Add Action".
In the new action window, scroll down and select the action you want to perform. You can add a description and other parameters, then select OK.
Then you can pre-validate the actions to ensure you know the results of each action BEFORE you actually perform the task. To do this, right-click the task and select Validate Task.
As you see below, OnCommand Insight validates each action against the current configuration in your environment to show what has been completed correctly (GREEN CHECKMARK), what is not completed (BLANK BOX), and what is not completed correctly (RED X).
When you build the action list, OnCommand Insight automatically compares your planned changes to your existing environment and anticipates any future violations that could occur if you made these changes without correcting the planned actions, OR violations that already exist in your environment.
Once you complete creating your list of action items, you can right-click and validate the actions as many times as you want until they are completed. OnCommand Insight validates every one of these actions. It will show you whether the actions are completed correctly, done incorrectly, or not completed at all. It gives you a preview of potential issues before you make the changes, thus lowering your risk.
5.2 SWITCH MIGRATION TOOL
The migration tool provides you with instantaneous visibility into all of the environment and business entities that will be affected by a migration to new or updated switches. Say you want to just update the firmware on a switch. What if… it goes down in the middle of the upgrade? What does it affect in your environment? Knowing this ahead of time can reduce your risk by giving you the complete picture of who and what will be affected by the interruption.
The migration tool allows you to tell OnCommand Insight which switches you want to upgrade or replace. Because OnCommand Insight knows all the hosts, storage arrays, volumes, business units, and applications that are affected by this change, it can provide you with the current violations as well as the future violations that will occur when the switches are pulled out. This enables you to validate the total impact of the changes you want to make BEFORE you make them, so you can reduce your risk by fixing issues before they occur.
NOTE: Remember, OnCommand Insight is a READ-ONLY tool, so it does not perform any migration tasks. Use it in the planning and execution monitoring of your migration.
Under the Planning menu, select Migrations. This shows you the migration tasks already created and the existing impact of proposed changes on your business entities.
To add a new task, right-click in the task area and select Add Task.
Complete the task details above and click Next to select the switch(es) to migrate.
Select the switches to be updated or replaced and click Finish.
Select the new task in the main screen and use the microviews to see the affected paths, impact, and quality assurance views.
Using this information, you can speed up the time it takes to migrate switches, because it cuts the due diligence time and lowers your risk: you know the impacts before you take any actions.
6 DATA WAREHOUSE
6.1 INTRODUCTION AND OVERVIEW
Let's introduce you to the data warehouse. We'll talk about the Datamarts and navigation, then we'll go into the reports, and we'll finish by showing you how to create ad-hoc reports using Query Studio.
The data warehouse is made up of several Datamarts. Datamarts are sets of data that relate to each other.
Open a browser and go to http://localhost:8080/reporting
Log on using admin/admin123.
If you receive this page, uncheck "show this page…" and select My Home.
Data warehouse (DWH) home page: Public Folders
The data warehouse has several built-in Datamarts. Above you see the three primary Datamarts, called the Chargeback Datamart, Inventory Datamart, and Storage Efficiency Datamart. Additionally, we have two folders which contain other Datamarts for Capacity and Performance.
Select the Capacity 6.3 folder.
As you can see, there are other capacity-related Datamarts, including the Internal Volume, Volume, Storage and Storage Pool, and VM Capacity Datamarts. Datamarts provide you with easy-to-use data elements related to those specific categories, making it easier for you to use the existing reports and, more importantly, helping you create your own custom reports using the drag-and-drop technology we'll show later in this lab.
Select the Storage Capacity Datamart.
There are four folders located within EVERY Datamart. Most built-in reports are in the Reports folder. Any custom reports you create MUST be saved in the Customer Report or Customer Dashboard folders in order to preserve them during upgrades.
Select Dashboards (notice the BREADCRUMBS to help you navigate).
Which dashboards are located in the folder?
6.2 PLAN - CAPACITY FORECAST DASHBOARD
The data warehouse has over 200 built-in dashboards and reports. Let's take a look at a few.
The capacity forecast dashboard provides a history of how storage has been used, as well as trends and forecasts out into the future. It shows this by data center and by tier.
Select the Capacity Dashboard. This may take a bit of time to paint, so be patient.
The capacity forecast dashboard provides you with trending and forecasting of your capacity across your entire environment. NOTE: Your data may vary from the picture depending on the demo DB you are using and the date (because it's a trending chart).
While we are at it, let's also stage the tiering dashboard in a new window by holding the Shift key and selecting the Tiering Dashboard, so we can discuss it as well in a few minutes.
When it first opens, you see in the upper left the Capacity Consumption Forecast report by datacenter and tier. The initial view shows how much storage is left in each datacenter, by tier, before it reaches 80% of capacity (adjustable by the user). The graph on the right depicts the usage trending and forecasting over time. The "Reset Selection" button resets the graphic to show storage trending across the entire enterprise.
Select the Tokyo/Gold-Fast block on the matrix. Notice the graph at the right changes to reflect the storage consumption trending and forecasting for that tier at that datacenter.
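The forecast behind the dashboard can be thought of as trend extrapolation toward the 80% line. A minimal Python sketch using a simple linear trend over hypothetical monthly samples (the dashboard's actual forecasting model may differ; the numbers here are invented):

```python
# Hypothetical used-capacity samples (GB), one per month, for a single
# datacenter/tier cell of the forecast matrix.
samples = [400.0, 440.0, 480.0, 520.0]
total_capacity_gb = 1000.0
threshold_gb = 0.80 * total_capacity_gb  # the user-adjustable 80% line

# Simple linear trend: average growth per month over the sample window.
growth_per_month = (samples[-1] - samples[0]) / (len(samples) - 1)

# Months of headroom before this tier crosses the threshold.
months_left = (threshold_gb - samples[-1]) / growth_per_month
print(months_left)  # months until this tier reaches 80% full
```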
Reset the Storage Capacity Trend chart by clicking Reset Selection next to the matrix chart. The chart on the right will then show the trending and forecasting for the entire enterprise.
Scroll down the dashboard to view the list of reports on the right side. Each of the dashboards has a list of related reports in its lower right-hand corner. You can select from any number of different reports to provide the detailed information that you need.
The dashboard also contains some dial graphics showing you storage consumption and capacity in your enterprise and each datacenter.
Continuing down the left side of the dashboard, these charts show you business-level storage consumption by business entities. Here we can drill down to see usage by tenant, line of business, business unit, and project.
Right-click in this graphic and you can drill down to view storage usage by line of business; drill again to view it by business unit and then by project.
As you can see, you get really detailed information on consumption by your business entities, from tenant, LOB, business unit, and project down to application, very quickly.
6.3 TIER DASHBOARD
Let's take a look at the Tiers dashboard that we opened a few minutes ago by selecting it from the tabs at the bottom of your Windows screen.
Note: Your data may vary depending on the database used for this demo.
This dashboard gives us a different perspective on how storage is growing and how it is being used. As you see, it looks like the Gold tier has remained relatively stable over the past few months, while Gold-Fast storage, which is more expensive, has grown considerably over the past couple of months. This tells you how your tiering initiatives are progressing. Bronze, which has hardly grown at all, could be an indication that we're spending too much money on storage. You might want to review your storage usage using OnCommand Insight to see how the storage is being consumed and by whom.
Scroll down. Let's look a little closer. OnCommand Insight shows storage usage by business unit, application, and tier. This enables you to understand how storage is being used. You can also view it by data center, tier, and business entity.
As we did in the last report, you can right-click and drill down to look at consumption by tenant, line of business, business unit, project, and application. You can understand how your data is being consumed at multiple levels and from multiple aspects.
Select the "Return" icon at the top right of the Tier Dashboard to return to the folder.
There is a new Storage Tier report located in the Storage and Storage Pool Datamart. Let's take a quick look at it.
Use the breadcrumbs to navigate back to the Capacity 6.3 folder.
Then select the Storage and Storage Pool Capacity Datamart and the Reports folder.
Next, select the Storage Capacity By Tier report to view the report below. This report shows your capacity by tier and how it trends over time. It also provides detailed and summary sections at the bottom showing each array's tiers, how much capacity is used, and the percentage (lots of information in a single report).
6.4 ACCOUNTABILITY AND COST AWARENESS
The standard data warehouse chargeback reports are more about accountability than chargeback. We'll show you this now. We'll also show you how to create your own powerful "custom" chargeback/showback reports using Business Insight Advanced later in this lab.
Select Public Folders in the breadcrumbs at the top left of the data warehouse window.
Select the Chargeback Datamart.
In the Chargeback Datamart, select the Reports folder to access various reports that show capacity and accountability.
Select Capacity Accountability by Business Entity and Service Level Detail. Here you have the option to customize this report to your needs by selecting service levels, resource types, applications, and host and storage names. You also have the option of selecting the business entity by using the dropdown to select any or all of the business entities and projects.
Select all in each category to give you a good representation of the in-depth reporting. Then click Finish.
The report provides a very detailed view of capacity utilization by business entity and application: the host it's running on, the storage array, the volume, and the actual provisioned and used storage. The report is grouped by business unit as well as application; this gives you a good representation of who's using what storage.
Note the scroll bar for scrolling on page 1; you can also use the Page Up/Page Down links at the bottom to go to page 2, etc.
Select the Return icon in the upper right to return to the folder of reports.
6.5 UNCHARGED STORAGE
You can also generate reports that help you understand what storage is NOT being accounted for.
Select "Capacity Accountability by Uncharged Capacity per Internal Volume". This provides you with a complete listing, by array and volume, of how much storage is not being charged or accounted for.
You get FULL accountability of which storage is being accounted for and which storage is NOT, across the entire enterprise, regardless of storage vendor.
6.6 IOPS VS CAPACITY REPORTING IN THE DATA WAREHOUSE
Let's look at performance versus capacity and orphaned storage by last access. This adds another dimension to how your storage is being used.
Open the Performance Datamart (hint: use the breadcrumbs to select Public Folders and then select the Performance Datamart).
Select the Internal Volume Daily Performance folder. This provides a really good pictorial view of how your storage is being used.
Select Reports and select Allocated Used Internal Volume Count by IOPS Ranges. This provides a capacity-versus-IOPS report, which is very interesting.
Select the Last Year time period.
Select all storage models and tiers and click Finish.
Select all arrays and all tiers to give you a full view of how your storage is being used (or not being used…).
Looking at the results, remember this is storage accessed over the past year. The resulting report shows you all the storage that has (or has not) been accessed over the past year.
As you see from the first bar, there are over 7,300 volumes that have not been accessed in the past year. If we look at it in terms of size, over 3.4 PB has had zero access in the past year. Note: this is actual customer data, but the names have been sanitized.
You can see how impactful this is. There is over 3.4 PB of storage that has had zero use for a year. This information enables you to start making business decisions on that storage and to better understand how it's being used, so you can reclaim and repurpose some of it. (Talk about ROI!)
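The reclaim math is straightforward: select the volumes with zero IOPS over the reporting period and total their capacity. A Python sketch over hypothetical volume records (names and sizes invented for illustration):

```python
# Hypothetical volume records: total IOPS observed over the past year.
volumes = [
    {"name": "vol_a", "size_gb": 500,  "iops_last_year": 0},
    {"name": "vol_b", "size_gb": 1200, "iops_last_year": 15000},
    {"name": "vol_c", "size_gb": 800,  "iops_last_year": 0},
]

# Volumes with zero access for a year are reclaim candidates.
orphans = [v for v in volumes if v["iops_last_year"] == 0]
reclaimable_gb = sum(v["size_gb"] for v in orphans)

print(len(orphans), reclaimable_gb)  # 2 candidates, 1300 GB reclaimable
```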
6.7 DIGGING INTO THE DETAILS
These charts are really nice, but you need the details to effectively work on identification and recovery. OK, let's go look at the underlying details.
Go back to the Volume Daily Performance 6.3 folder and drill down to Reports (hint: it's in the Performance Datamart).
Select the Array Performance report. This gives you a complete breakdown of the performance for all storage, from the arrays all the way down to the volumes.
Select one year and set the IOPS parameter you want to filter on (I usually start at the default).
This report starts with the Orphan Summary.
Page down to view the storage array summary.
As you see, this is pretty high level. It shows the total amount of raw and allocated
capacity in each storage device versus the total IOPS and the max IOPS actually used over
the past year. This tells a very compelling story, but it's still high level.
Page down a few pages to reach the bottom of this section. You see a glossary of
terms explaining the column headings.
Now continue to page down to the Host tables. These show you the hostname, the
raw and allocated capacity by host, and the IOPS accessed over the past year. This is
more detail than the Storage tables above.
Page down past the host tables and you reach the orphaned volumes perspective.
Here is a great deal of detail that you can use. These are all the volumes that have not
been accessed in a full year. It shows you the array name, volume capacities, and
hostname, as well as the applications and tiers of the volumes that have not been accessed in the last
year.
Page down to the "Volume by IOPS" tables (they may be several pages down). These show
you the storage array, volume, capacity, host, application, tier, and the max and total
IOPS. So we can say it's a pretty well-rounded report that shows you actual usage (or
lack thereof) so you can go reclaim the storage that is not used.
6.8 VM CAPACITY REPORTING
There are several different reports in the VM Capacity Datamart.
Navigate to the VM Capacity 6.3 Datamart.
As you see, we have several reports built in here already.
Select VM Capacity 6.3 and then navigate into the Reports folder.
Select VM Capacity Summary.
Select all so we see the VM capacity across the entire enterprise (spanning multiple
vCenters).
The results show all the VMs, their capacity, the datastore, the actual capacity, the provisioned storage, and the commit ratio of each VM across your entire environment. NOTE: I paged down to the bottom so you can see the total storage and commitment across your whole enterprise, plus a glossary of terms.
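The commit ratio shown in the summary is simply provisioned VM capacity divided by the capacity backing it. A minimal sketch with made-up datastore names and numbers (not from the lab data set):

```python
# Hypothetical datastores with their capacity and the capacity
# provisioned to the VMs that live on them.
datastores = {
    "DS-10": {"capacity_gb": 1000, "vm_provisioned_gb": [400, 500, 600]},
    "DS-20": {"capacity_gb": 2000, "vm_provisioned_gb": [800, 700]},
}

# Commit ratio = total provisioned / backing capacity; > 1.0 means overcommitted.
commit = {
    name: sum(ds["vm_provisioned_gb"]) / ds["capacity_gb"]
    for name, ds in datastores.items()
}
print(commit)  # DS-10 is overcommitted at 1.5x; DS-20 sits at 0.75x
```

A ratio well above 1.0 is where thin provisioning can bite you if the VMs actually fill their disks.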
Select the "return" button in the upper right corner of the report (it looks like a left-turn
arrow).
Next, select the Inactive VMs report to show VMs that have not been accessed in a
defined period of time (default: 60 days).
Set this time threshold and click Finish.
This is an excellent report showing you which VMs are powered off, how long they have been powered off, and how much capacity each one of them is holding that nobody else can use. It gives you all the details, including the datacenter, VM OS, ESX host, cluster, VMDK, and how long it's been powered off. Armed with this information, you can go recover these VMs and reclaim the storage.
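The filter this report applies can be sketched in a few lines; the VM names, ages, and sizes below are hypothetical, but the 60-day default threshold mirrors the report's:

```python
# Hypothetical inventory of VMs with how long each has been powered off.
vms = [
    {"name": "vm-app1", "powered_off_days": 120, "capacity_gb": 200},
    {"name": "vm-app2", "powered_off_days": 10,  "capacity_gb": 150},
    {"name": "vm-old",  "powered_off_days": 400, "capacity_gb": 500},
]

threshold_days = 60  # the report's default inactivity threshold
inactive = [vm for vm in vms if vm["powered_off_days"] >= threshold_days]
reclaimable_gb = sum(vm["capacity_gb"] for vm in inactive)
print([vm["name"] for vm in inactive], reclaimable_gb)  # ['vm-app1', 'vm-old'] 700
```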
7 CREATE AD-HOC REPORT
Let's show you how easy it is to create custom reports in the data warehouse.
7.1 HOW TO CREATE A CUSTOM SHOWBACK/CHARGEBACK REPORT USING
BUSINESS INSIGHT ADVANCED
Below is a great example of the custom Chargeback or Showback report that you will create. It shows usage by Business Entity and Application, including variable VM cost based on configuration, fixed overhead, and storage usage.
STEPS TO CREATE THIS REPORT
Watch a video on how to create this report. Note: You need a user name and password for this community. To obtain them, click the Become a Member link.
The OnCommand Insight Reporting Portal is accessed through http://&lt;reporting-server&gt;:8080/reporting
Enter your user name and password credentials.
From the Welcome page, select My home.
From the Launch menu (at the top right corner of the OnCommand Insight Reporting
portal), select Business Insight Advanced.
From the list of all packages that appears, click the Capacity &lt;version&gt; folder
and then click VM Capacity &lt;version&gt;.
Create a new report by selecting New from the dropdown in the upper left corner, or
Create New if you are on the Business Insight Advanced landing page.
From the pre-defined report layouts in the New pop-up, choose List and click OK.
In the lower right pane, select the Source tab and expand Advanced Data Mart
from the VM Capacity package.
From the Advanced Data Mart, expand Business Entity Hierarchy and Business
Entity, and drag Tenant onto the report work area.
Collapse Advanced Data Mart and expand Simple Data Mart.
From Simple Data Mart, drag Application and place it on the report work area to
the right of the Tenant column. (TIP: Make sure you place it on the blinking gray
BAR to the right of the previous column, or it will give you an error.)
Now we are going to drag multiple columns to the palette to save time building the report.
We will be reporting on the number of processors (cores) and the memory that is
configured for each VM. So let's grab the following elements from the VM
Dimension under the Advanced Data Mart.
From Advanced Data Mart expand VM Dimension
Select the next columns IN THE FOLLOWING ORDER.
From Advanced Data Mart &gt; VM Dimension, hold the control key and select the
following columns (in order):
o VM Name
o Processors
o Memory
Click and drag VM Name and place it on the report work area to the right of the
Application column. NOTE: All the columns should follow in the order you selected
them, similar to the screenshot below (your data will differ, but the columns will be the same).
Now let's bring capacity information onto the report.
From Simple Data Mart, hold the control key and select the following columns (in
order):
o Tier
o Tier Cost
o Provisioned Capacity (GB)
Click and drag the Tier column and place it on the report work area to the right of the
Application column. NOTE: All the columns should follow in the order you selected
them, similar to the screenshot below (your data will differ, but the columns will be the same).
To create the storage cost (tier cost per GB times capacity), hold the control key and select the Tier Cost
and Provisioned Capacity (GB) columns.
Then right-click the Provisioned Capacity column, select Calculate, and
select the multiplication calculation.
Business Insight Advanced has created a new column for you, completed the calculation, and put it in the report.
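The calculated column is nothing more than tier cost ($/GB) multiplied by provisioned capacity (GB). A quick sketch with illustrative values (not from the lab data set):

```python
# Each row mimics a report row: a VM with its tier's $/GB rate
# and its provisioned capacity.
rows = [
    {"vm": "vm1", "tier_cost_per_gb": 0.50, "provisioned_gb": 100},
    {"vm": "vm2", "tier_cost_per_gb": 1.25, "provisioned_gb": 40},
]

# Storage cost = tier cost per GB x provisioned GB, exactly what the
# multiplication calculation produces in the report.
for row in rows:
    row["storage_cost"] = row["tier_cost_per_gb"] * row["provisioned_gb"]
print([row["storage_cost"] for row in rows])  # [50.0, 50.0]
```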
Next, let's format and retitle the column.
Right-click the new column heading and select Show Properties.
In the lower right corner, scroll down to the bottom of the Properties box and
select the ellipsis on the Data Item Name box. Change the name to Storage Cost
and click OK.
Note the column heading is now Storage Cost.
Now select one of the numeric values in that column and select the Data Format
ellipsis from the Properties box in the lower right corner.
From the Data Format dialog box, select Currency from the Format type dropdown.
As you see from the Properties dialog box, there are lots of options you can set to
format the currency numbers in this column. The default is USD, so let's just click OK
to accept the default. You will see the column reformat to USD.
Here is our current report. Let's filter out storage that is NOT being charged.
Select any BLANK cell in the Tier Cost column and click the filter icon in the
top toolbar.
Select Exclude Null.
Here is our current report. Notice all the rows that had NO cost associated with their tiers are removed, leaving you with only the storage that has charges associated with it. (TIP: in another report you can actually reverse the logic and show only the storage that is NOT being charged as well…)
You can format the Tier Cost column as USD currency too, if you want.
OK, that was easy, but not complete. Let's add other cost factors into your chargeback report: the cost of VM service levels by configuration, and the fixed overhead costs used by each application.
ADDING VARIABLE COSTS PER VM TO YOUR CHARGEBACK REPORT
Let's say the customer wants to charge per VM based on the number of CPUs and the amount of memory it's configured with. To do that, we need to first create a VM Service Level comprised of the number of CPUs and the memory configured for each VM, then allocate a cost per service level.
To create a VM Service Level, we are going to drop in a small conditional expression to build the service levels per VM. This is an easy example of the flexibility of Business Insight Advanced in creating reports. (DON'T panic: you can skip the conditional expression and just put a fixed cost on each VM if you want. See the Overhead example later on… but humor me here in this lab.)
Select the Tier column to mark where we want to insert the new columns.
Select the Toolbox tab at the lower right corner and double-click the Query Calculation
icon.
In the Create Calculation dialog box, name the column VM Service Level,
select Other Expression, and click OK.
In the Data Item Expression dialog box, copy and paste the following VM Service
Level conditional expression into the Expression Definition box and select OK.
(Note: if you are remoted into the OnCommand Insight server, you may have to
create a text document on the OnCommand server desktop to paste
this into prior to pasting it into the Expression box.)
Below is an example of the conditional expression that gives you the if-else logic for the VM Service Level:
IF ([Processors] = 2 AND [Memory] &lt; 2049)
THEN ('Bronze')
ELSE (IF ([Processors] = 2 AND [Memory] &lt; 4097)
THEN ('Bronze_Platinum')
ELSE IF ([Processors] = 4 AND [Memory] &lt; 8193)
THEN ('Silver')
ELSE IF ([Processors] = 4 AND [Memory] &gt; 8193)
THEN ('Silver_Platinum')
ELSE IF ([Processors] = 6 AND [Memory] &gt; 8191)
THEN ('Gold')
ELSE IF ([Processors] = 8 AND [Memory] &gt; 16383)
THEN ('Gold_Platinum')
ELSE ('tbd'))
Business Insight Advanced will validate the conditional expression (nice to know if
you got it right), create the column called VM Service Level, and populate it
based on the query. (If you get an error, your conditional expression probably has
a syntax or other error.)
You will see a new column added called VM Service Level, with the various service
levels for each VM based on the number of CPUs and the memory each has. (At this point
there may be duplicates in the list, but not to worry; we are not finished formatting or
grouping the report.)
Next, let's add a column that calculates the cost per VM based on the service levels we just established.
Select the Toolbox tab at the lower right corner and double-click the Query Calculation
icon.
In the Create Calculation dialog box, name the column Cost Per VM, select
Other Expression, and click OK.
In the Data Item Expression dialog box, copy and paste the conditional expression for Cost per VM (below) into the Expression Definition box and select OK.
Example of the conditional expression for Cost per VM:
IF ([VM Service Level] = 'Bronze') THEN (10)
ELSE (IF ([VM Service Level] = 'Bronze_Platinum') THEN (15)
ELSE IF ([VM Service Level] = 'Silver') THEN (20)
ELSE IF ([VM Service Level] = 'Silver_Platinum') THEN (25)
ELSE IF ([VM Service Level] = 'Gold') THEN (40)
ELSE IF ([VM Service Level] = 'Gold_Platinum') THEN (55)
ELSE (30))
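If you want to sanity-check the two conditional expressions above before pasting them into the report, here is a Python rendering of the same logic. The thresholds and prices mirror the lab's expressions verbatim; memory is in MB, as implied by the 2049/4097/8193 boundaries:

```python
# Price per service level, matching the Cost per VM expression.
COST = {"Bronze": 10, "Bronze_Platinum": 15, "Silver": 20,
        "Silver_Platinum": 25, "Gold": 40, "Gold_Platinum": 55}

def service_level(processors: int, memory_mb: int) -> str:
    # Same ordered if/else chain as the VM Service Level expression.
    if processors == 2 and memory_mb < 2049:
        return "Bronze"
    elif processors == 2 and memory_mb < 4097:
        return "Bronze_Platinum"
    elif processors == 4 and memory_mb < 8193:
        return "Silver"
    elif processors == 4 and memory_mb > 8193:
        return "Silver_Platinum"
    elif processors == 6 and memory_mb > 8191:
        return "Gold"
    elif processors == 8 and memory_mb > 16383:
        return "Gold_Platinum"
    return "tbd"

def cost_per_vm(level: str) -> int:
    return COST.get(level, 30)  # the expression's ELSE branch charges 30

print(service_level(4, 4096), cost_per_vm(service_level(4, 4096)))  # Silver 20
```

Note that a 4-CPU VM with exactly 8193 MB falls through to 'tbd'; if that gap matters to your customer, adjust the boundary comparisons.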
You will see a new column added called Cost Per VM, with variable costs for each
VM based on its service level.
Next, format the data in the Cost Per VM column to USD currency as you did above.
ADDING FIXED OVERHEAD COSTS TO YOUR CHARGEBACK REPORT
Let's say the customer has determined that the total cost of overhead (including items like heat/AC, floor space, power, rent, operations personnel, helpdesk, etc.) is $24 per VM. Let's create a column called Cost of Overhead and apply this fixed cost. (Note: you can do this for any fixed cost as well, rather than using SQL.)
Select the Toolbox tab at the lower right corner and double-click the Query Calculation
icon as above.
In the Create Calculation dialog box, name the column Cost of Overhead,
select Other Expression, and click OK.
In the Data Item Expression dialog box, enter a cost of 24 in the Expression
Definition box and select OK.
You will see a new column added called Cost of Overhead, with 24 for each VM.
(Note: at this point there may be duplicates in the list, but not to worry; we are not
finished formatting or grouping the report.)
Next, format the data in the Cost of Overhead column to USD currency as you did
above. Then drag the column header and drop it to the right of the Storage Cost
column, as shown below.
Subtotaling, naming, and saving the report
Now that we have a cost per VM, a cost of overhead, and a cost of storage usage by Tenant, Application, and VM, let's sum the total costs and finish formatting the report by Tenant and Application.
Hold the control key down and select a numeric cell in each of the Cost Per VM, Storage
Cost, and Cost of Overhead columns. Right-click one of the numeric cells,
select Calculate, and choose the add function for the three columns.
This creates a new column called "Cost per VM + Storage Costs + Cost of Overhead" and calculates each row.
Now format the column for USD currency and retitle the column to "Total Cost of
Services".
Name the report "Total Storage, VM and Overhead Cost by Tenant and
Application Chargeback (Showback)" by double-clicking the title area.
Now save it to the Customer Report folder using the same name
FORMATTING AND GROUPING THE REPORT BY APPLICATION AND TENANT
We are not done yet. Now we need to format the report by grouping, subtotaling, and totaling by Tenant and Application.
Hold the control key down and select the Cost Per VM, Provisioned Capacity,
Storage Cost, Cost of Overhead, and Total Cost of Services columns.
Select the Total icon from the Summary dropdown.
If you page down to the bottom of the report, you will see total columns. We'll clean
up the summary rows in a minute.
Let's group the report by Tenant and Application.
Hold the control key down and select the Tenant and Application columns.
Select the Grouping icon from the top toolbar.
CLEANING UP THE REPORT AND RUNNING IT
To clean up the report, right-click and delete the summary ROWS (not columns).
Then go to the bottom of the report, hold the control key, select both summary
rows, right-click, and delete them. (Leave the TOTAL rows.)
Save the report.
Now let's run the report to see how it looks.
Select the Run icon from the toolbar and run the report as HTML. (Note the other formats
you can run it in if you want…)
The report will appear like this in its final format. I've paged down in the report below to show you subtotals, and you can page to the bottom and see the totals by company and the total of all resources charged.
These reports are extremely flexible to do what you need. Notice the drill-down
link in the Tenant column (pictured above in the red circle). If you click the
link, you will drill down from Tenant to Line of Business, then to Business Unit, etc.
If you right-click the link, you can drill up as well.
You can now schedule this report to run and distribute in various formats, like any other OnCommand Insight Data Warehouse report.
Remember, now that you have created this report, every time you run it, it will provide the latest usage information. You can automate this report by scheduling it to run and email it to recipients, etc. Lots of flexibility…
7.2 OTHER OPTIONS FOR AD-HOC REPORTS USING QUERY STUDIO
You can also create simple ad-hoc reports using Query Studio. A very simple example is shown here.
Log onto the data warehouse using Admin/admin123 (you must be logged on as Admin to use Query Studio).
From Public Folders, select the Chargeback Datamart.
Select the Launch menu in the upper right corner of the view and select Query Studio.
The Datamart is set up as a "simple Datamart" and an "advanced Datamart". The simple DM contains the elements that most users use for reports; the advanced DM contains all the facts and dimensions for all the elements. At this point we'll create this report using the simple DM to show you how easy it is.
Expand the Simple DM and do the following:
Click and drag Business Unit to the palette.
Click and drag the Application element to the palette. You see the applications line up
with their proper Business Units.
Click and drag Tier over to the palette to organize the storage usage by tier.
Click and drag "Provisioned Raw by GB". (You can select megabytes or terabytes as
well as gigabytes; I've selected GB because this is from a volume perspective and an
application perspective.)
To calculate cost, we need to add the "Tier Cost" element to the report.
Click and drag the "Tier Cost" element over and place it between the Provisioned Raw and
the Tier columns.
To filter out any storage without a tier cost associated, right-click the heading of the Tier
Cost column and select Filter (see below for reference):
o Select "Show only the following" (default)
o Select "Missing values" to expand it
o Select "Leave out missing values"
o Select OK
See Results below
Now let's calculate the total cost of usage by GB per application.
Hold the control key and highlight the "Provisioned Capacity" and Tier Cost
columns until they show yellow.
Select the green Calculation icon at the top of the edit icons, or right-click on the
columns and select "Calculate".
In the calculation window, select multiplication, title the new column "Cost for
Storage", and click Insert. It creates a new column and completes the calculation.
To format the column, right-click on the new column and select Format Data.
Select currency, the number of decimal places (usually 0), and the 1000s separator,
and click OK. See how the column is formatted now.
Double-click the "Title" on the report and retitle the report "Chargeback by Application
and BU".
Now you don't really need the Tier Cost column, so you can delete it by right-clicking on
the column and selecting Delete.
This is a good raw report, but now let's make it more useful.
To group storage cost by Business Unit and Application:
Select the Business Unit column (it turns yellow) and select the Group By icon on the top
line.
You see the report reformat itself into cost by application by business unit.
Click the "Save As" icon and save the report to the public folders.
Further editing
You can go back and further edit the report like this:
Let's filter out all the N/A values in the BU and Application columns. You have to do this one column at a time.
Right-click the BU column and select Filter.
In the filter dialog window, select "Do not show the following (NOT)" from the
"Condition" dropdown.
Select N/A and click OK.
Do the same for the Application column.
Then save the report again.
As you see, you now have a better-quality report.
To exit Query Studio, click the "Return" icon at the top right corner of the screen.
8 SCHEDULING REPORTS FOR DISTRIBUTION
OK, now that the report is saved, let's schedule it for running and distribution. You can schedule all the built-in reports in OnCommand Insight.
Start at the chargeback report we just created (you should be looking at where you saved
it…).
Select the Schedule icon on the right-hand side, where you can set the properties.
As you see on the right, you can schedule start and finish dates.
You can send this report just one time by clicking Disable.
Set the schedule options for weekly, daily, monthly, etc. Schedule this report to run and
send it to yourself at 3 p.m. every Tuesday until Feb 1, 2012. As you can see, you can
schedule biweekly, several times a week, or several times a day, or you can also set it up by
month, year, and even by trigger. As you see, lots of options.
There are a lot of options for report format. The default format is HTML, but we can
override that default by clicking and choosing from PDF, Excel, XML, CSV, etc.
For delivery, we can email it, save it, or print the report to a specific printer. You can send
the report via e-mail to users, distribution lists, etc. We can include a link to the report
or attach it directly to the email as well. NOTE: recipients must be able to log into the
OnCommand DWH to access the link.
When you are done, click OK and the schedule is set.
9 ENDING COMMENTS AND FEEDBACK
I hope this lab was of value to you. Your feedback is important to the quality of this lab document. Please provide feedback to Dave Collins at davec@netapp.com.
NetApp provides no representations or warranties regarding the accuracy, reliability, or serviceability of any information or recommendations provided in this publication, or with respect to any results that may be obtained by the use of the information or observance of any recommendations provided herein. The information in this document is distributed AS IS, and the use of this information or the implementation of any recommendations or techniques herein is a customer's responsibility and depends on the customer's ability to evaluate and integrate them into the customer's operational environment. This document and the information contained herein may be used solely in connection with the NetApp products discussed in this document.
© 2012 NetApp, Inc. All rights reserved. No portions of this document may be reproduced without prior written consent of NetApp, Inc. Specifications are subject to change without notice. NetApp, the NetApp logo, Go further, faster, xxx, and xxx are trademarks or registered trademarks of NetApp, Inc. in the United States and/or other countries. All other brands or products are trademarks or registered trademarks of their respective holders and should be treated as such. TR-XXX-XX
OnCommand Insight provides complete end-to-end performance views through virtualized storage. Let's take a look.
Select the storage array called Virtualizer from the main view. (Note: this is a V-Series
machine, but OCI provides the same visibility through other virtualizers as well.)
Toggle on the Virtual Machine Performance microview.
Toggle on the Backend Volume Performance and Datastore Performance microviews.
Use the slide bars at the bottom of the microviews to see more of the performance
columns in each view. (Whoops… there is a red mark in the Latency column. We will
analyze this later.)
I know this is a bit busy, but I wanted to show you that you can have deep performance visibility from the VM, through the datastore, to the frontend virtualizer array, and through to the backend volumes. You can also drill performance down to the disks on that backend array, and you can select the Switch Port Performance microview to visualize the performance on the SAN. So you can see very deep performance information from end to end. We will "analyze" these performance views from end to end a bit later.
4.2 SAN PERFORMANCE (SWITCH PORT PERFORMANCE)
Switch performance is the actual performance on the SAN at the switch.
Select Switch Port Performance from the Performance menu. OnCommand Insight
knows whether the switches are connected to arrays or hosts, so it shows you the
performance in context of the host or array instead of from the switch perspective.
Using the dropdown at the top of the table, group the main view by "Connected Device
Type, then Name".
Using the dropdown next to it, set the timeframe to "Last Week" and hit the refresh
icon to the right.
Sort the Distribution column descending (arrow pointing down).
Expand Hosts
Expand hosts ny_ora1 and exchange_ny1
If you look at the Value and Distribution columns, you can see how HBAs are balanced
on these hosts. On host ny_ora1 you see three HBAs that are balanced very well. But
looking at host exchange_ny1, you see that one of your two HBAs has over 95% of the
traffic load on it, while the other one has less than 5% of the traffic. So you can see an
imbalance of the load across your HBAs. Perhaps the multipath software is not
configured correctly, doesn't work, or is installed but not turned on. However, also consider
the fact that one HBA is 4Gb and the other is 2Gb. The admins may have purposely
configured this host's traffic to compensate for the slower HBA…
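The Distribution column is just each HBA's share of the host's total traffic. A tiny sketch with invented traffic numbers shaped like the exchange_ny1 imbalance described above:

```python
# Hypothetical per-HBA traffic totals for one host (units don't matter,
# only the relative share does).
hba_traffic = {"hba0": 950, "hba1": 50}

total = sum(hba_traffic.values())
# Distribution = each HBA's percentage of the host's total traffic.
distribution = {hba: 100 * t / total for hba, t in hba_traffic.items()}
print(distribution)  # hba0 carries 95% of the load, hba1 only 5%
```

Anything far from an even split across a host's HBAs is the cue to check multipathing configuration (or deliberate weighting for mixed-speed HBAs).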
Select the Port Performance Distribution and Port Performance microviews to view
this analysis over time
Select exchange_ny1 from the main view above. View the performance and
distribution of both HBAs. If you select one or the other, the performance and
distribution charts change to show you the details of what you've selected. This function
is the same throughout the OnCommand Insight GUI.
4.3 CANDIDATES FOR HOST VIRTUALIZATION BASED ON ACTUAL PERFORMANCE
This performance from the host SAN perspective shows you which are the busiest servers and which are candidates for virtualization.
Toggle off the two performance charts.
Collapse the expanded columns using the Collapse All Groups icon on top of the
main view.
Expand Hosts again. Notice your busiest servers are at the top of the list.
Use the vertical slide bar to go to the bottom of the host list to see your least busy
hosts. As you see here, there are many hosts down near the bottom that have hardly
any traffic. Note: if you have a virtualization project going on, you can very quickly isolate
which physical hosts don't have much traffic to their applications and conduct your due
diligence on those applications for possible relocation to a VM environment.
You can also use the same information here to choose which ESX hosts are good candidates to receive those applications, based on how much traffic they are generating on the SAN.
4.4 STORAGE ARRAY PERFORMANCE BASED ON SAN TRAFFIC
We use the same logic and methods to optimize the traffic across the storage ports of the arrays.
Collapse the Hosts section and expand the Storage section.
You can see the busiest arrays at the top.
Expand storage array XP 1024 to see the traffic flow through the storage ports. In this
case, over 80% of the traffic is going across two of the six ports on the storage array.
Not very well balanced. You can rebalance this traffic, OR, using this information, you
can select a lesser-used storage port to provision your NEXT Tier 1 application to. This
helps you intelligently provision and optimize your environment using real traffic analysis.
4.5 STORAGE TIERING AND ANALYSIS
Similar to what you did with the hosts, you can see the storage arrays that are NOT so busy.
Scroll to the bottom of the storage array list.
There are several expensive Tier 1 Symmetrix and other arrays at the bottom of this list
with very little traffic accessing them. These arrays may have lots of data on
them, but nobody's using it. Armed with this information, you could take a look at the
application data on these expensive Tier 1 arrays and move the applications to less
expensive Tier 2 or Tier 3 arrays, OR archive the data. Then you can decommission or
repurpose these expensive arrays. (LOTS of ROI potential here.)
4.6 SWITCH ISL TRAFFIC VISIBILITY AND OPTIMIZATION
OnCommand Insight shows you only the ISLs (Inter-Switch Links) under the Switches category.
Collapse the Storage category in the main view and expand the Switch category
Expand Switch 78 and hcis300
As we saw with hosts and arrays, we can see exactly how well balanced the traffic is across the ISLs. Switch hcis300 is well balanced, but on Switch 78 we see that 90% of the traffic is going across one switch link and only 9% of the traffic across the other. If this is a trunk, it is severely out of balance.
We also see which are the busiest and least busy switches. This allows us to balance out (optimize) our environment as well as weed out the least busy switches.
4.7 VIRTUAL MACHINE AND DATASTORE PERFORMANCE:
TROUBLESHOOTING END-TO-END PERFORMANCE ISSUES USING "ANALYZE
PERFORMANCE"
Let's put all this performance information to good use.
USE CASE: I may have gotten a call from a user complaining that the application on VM-70 is running slow, or I may have received an alert from a threshold being breached. Let's troubleshoot the problem.
Select Virtual Machine Performance.
Select Custom from the "Timeframe" dropdown menu next to the grouping menu. Enter
the dates January 1, 2012 through now.
Then hit the green refresh button next to the dropdown.
Sort the VM Disk Top Latency column descending to get the longest latency at the top.
Here we see that, in fact, VM-70 does not appear to have any performance issues, but we do see very high CPU, memory, and datastore latency on VM-60 and VM-61.
Look at column 2: the common factor between VM-70 (the user complaint) and VM-60 is DS-30.
Open the Datastore Performance microview to validate the high latency time.
Right-click VM-60 and select Analyze Performance.
This opens an analysis of everything associated with VM-60 and DS-30.
See the tabs across the top of the window. Each of these tabs provides in-depth visibility into performance within each category.
Selecting the Disk tab, I see that although I have a few high "top" utilization values, overall
utilization and IOPS are relatively low, so I can rule out a hot-disk issue.
Select the Volumes tab and then the Internal Volumes tab. I see there are some relatively high top
response times but still very low IOPS, which tells me other factors are affecting
response time and the slowness of the application on VM-70.
Selecting Backend Volumes, we see the storage is virtualized, and we can see the
performance on the backend volumes here. Here I see some possibly higher IOPS, but
still no glaring issues in performance.
To make sure I don't have a SAN problem, I select the Switch Performance tab. It
shows an imbalance between the 2Gb HBAs on ESX1 (where VM-60 and VM-70 are) and a
potential optimization or outage issue, but no gridlock.
Select the Hosts tab. This tab shows me that host ESX1 is the same host that holds VM-60
and VM-70. VM-60 appears to be causing very high CPU and memory usage, which
is causing contention with time sharing during disk access, thus creating high disk
latency. But the disk IOPS are still very low.
Deduce that VM-60 is probably not sized right for the application that is driving it so hard. This is probably what's causing the disk latency issue, so the chances of a disk issue are slim.
4.8 VM PERFORMANCE
VM Performance helps you troubleshoot the same scenarios. Here you can understand what's going on in the whole environment.
Select Virtual Machine Performance.
Sort the Top Disk Latency column in descending order so the largest latency rises to the
top. In this case, VM-61 is chewing up a lot of memory and a lot of CPU time but
using low disk IOPS. The VM appears to be causing the latency issues.
Select VM-61. You can open a microview and see the VMDK performance as well.
Add a chart microview.
You can also break it out by volume performance and datastore performance, thus
giving you a more holistic picture of the environment and helping you troubleshoot to
resolution.
The takeaway is that you can troubleshoot performance issues from many different angles and go in many different directions to quickly narrow down the problem.
4.9 APPLICATION AND HOST PERFORMANCE
You can add your applications and hosts to any of these performance views to help you understand how performance is affecting your applications. That is important to the business customer. You can drill down and understand where the performance issue is through visibility from the application all the way to the disks.
Scroll down to ESX1
Use the horizontal slide bars in the main and microviews to see performance info
OnCommand Insight shows you performance from the host perspective all the way back to the storage. But remember, it does not have agents on the host, so it cannot show you the details of performance on the host itself.
Review questions:
What is the value of Analyze Performance?
What are the areas where we can view performance metrics under Analyze Performance?
5 PLANNING TOOLS
5.1 TASK AND ACTION PLANNING AND VALIDATION
OnCommand Insight has two planning tools to help you plan, validate, and monitor changes in your environment. One is a change management tool, and the other is a migration tool for switches only.
The change management tool (or "What-If") helps you create tasks and actions within those tasks using a wizard. It helps you logically configure the changes you need to make, test and validate those changes before you make them, and monitor the progress of changes as you make them. This significantly reduces your risk when making changes, because you can pretest them before you make any actual changes in your environment.
NOTE: Remember, OnCommand Insight is a READ-ONLY tool, so it does not perform any active tasks. Use it in the planning, validation, and execution monitoring of your change management.
Select Planning Menu
Select Plans to access the tool
Select the task ID oadmin 01082007 – Replace HBA Clearcase1.
Notice the Actions list for the task. These are generated by you to help you logically and accurately list out the tasks.
To add more actions, simply right-click in the action area and select "Add Action".
In the new action window, scroll down and select the action you want to perform. You can add a description and other parameters, then select OK.
Then you can pre-validate the actions to ensure you know the results of each action BEFORE you actually perform the task. To do this, right-click the task and select Validate Task.
As you see below, OnCommand Insight validates each action against the current configuration in your environment to show what has been completed correctly (GREEN CHECKMARK), what is not completed (BLANK BOX), and what is not completed correctly (RED X).
When you build the action list, OnCommand Insight automatically compares your planned changes to your existing environment and anticipates any future violations that could occur if you made these changes without correcting planned actions, OR violations that already exist in your environment.
Once you complete creating your list of action items, you can right-click and validate the actions as many times as you want until they are completed. OnCommand Insight validates every one of these actions. It will show you whether the actions are complete, done incorrectly, or not completed at all. This gives you a preview of potential issues before you make the changes, thus lowering your risk.
5.2 SWITCH MIGRATION TOOL
The migration tool provides you with instantaneous visibility into all of the environment and business entities that will be affected by a migration to new or updated switches. Say you want to just update the firmware on a switch. What if it goes down in the middle of the upgrade? What does it affect in your environment? Knowing this ahead of time can reduce your risk by giving you the complete picture of who and what will be affected by the interruption.
The migration tool allows you to tell OnCommand Insight which switches you want to upgrade or replace. Because OnCommand Insight knows all the hosts, storage arrays, volumes, business units, and applications that are affected by this change, it can provide you with the current violations as well as the future violations that will occur when the switches are pulled out. This enables you to validate the total impact of the changes you want to make BEFORE you make them, so you can reduce your risk by fixing issues before they occur.
NOTE: Remember, OnCommand Insight is a READ-ONLY tool, so it does not perform any migration tasks. Use it in the planning and execution monitoring of your migration.
Under the Planning menu, select Migrations. This shows you the migration tasks already created and the existing impact of proposed changes on your business entities.
To add a new task, right-click in the task area and select Add Task.
Complete the task details above and click Next to select the switch(es) to migrate.
Select the switches to be updated or replaced and click Finish.
Select the new task in the main screen and use the microviews to view affected paths, impact, and quality assurance.
Using this information, you can speed up switch migrations, because it cuts the due diligence time and lowers your risk: you know the impacts before you take any actions.
6 DATA WAREHOUSE
6.1 INTRODUCTION AND OVERVIEW
Let's introduce you to the data warehouse. We'll talk about the datamarts and navigation, then we'll go into the reports, and we'll finish by showing you how to create ad-hoc reports using Query Studio.
The data warehouse is made up of several datamarts. Datamarts are sets of data that relate to each other.
Open a browser and go to http://localhost:8080/reporting
Log on using admin/admin123
If you receive this page, uncheck "Show this page..." and select My Home.
Data warehouse (DWH) home page: Public Folders
The data warehouse has several built-in datamarts. Above you see the 3 primary datamarts: the Chargeback Datamart, the Inventory Datamart, and the Storage Efficiency Datamart. Additionally, there are two folders which contain other datamarts for Capacity and Performance.
Select the Capacity 6.3 folder
As you can see, there are other capacity-related datamarts, including the Internal Volume, Volume, Storage and Storage Pool, and VM Capacity datamarts. Datamarts provide you with easy-to-use data elements related to those specific categories, making it easier for you to use the existing reports and, more importantly, helping you create your own custom reports using the drag-and-drop technology we'll show later in this lab.
Select the Storage Capacity Datamart
There are 4 folders located within EVERY datamart. Most built-in reports are in the Reports folder. Any custom reports you create MUST be saved in the Customer Report or Customer Dashboard folders in order to preserve them during upgrades.
Select Dashboards (notice the BREADCRUMBS to help you navigate).
Which dashboards are located in the folder?
6.2 PLAN - CAPACITY FORECAST DASHBOARD
The data warehouse has over 200 built-in dashboards and reports. Let's take a look at a few.
The Capacity Forecast dashboard provides a history of how storage has been used, as well as trends and a forecast into the future. It shows data by datacenter and by tier.
Select the Capacity Dashboard. This may take a bit of time to paint, so be patient.
The Capacity Forecast dashboard provides you with trending and forecasting of your capacity across your entire environment. NOTE: your data in the picture may vary depending on the demo db you are using and the date (because it's a trending chart).
While we are at it, let's also stage the Tiering Dashboard in a new window by holding the Shift key and selecting the Tiering Dashboard, so we can discuss it as well in a few minutes.
When it first opens, you see in the upper left the Capacity Consumption Forecast report by datacenter and tier. The initial view shows how much storage is left in each datacenter, by tier, before it reaches 80% (adjustable by the user) of capacity. The graph on the right depicts the usage trending and forecasting over time. The "Reset Selection" button resets the graphic to show storage trending across the entire enterprise.
Select the Tokyo/Gold-Fast block on the matrix. Notice the graph at the right changes to reflect the storage consumption trending and forecasting for that tier at that datacenter.
Reset the Storage Capacity Trend chart by clicking Reset Selection next to the matrix chart. The chart on the right will show the trending and forecasting for the entire enterprise.
Scroll down the dashboard to view the list of reports on the right side. Each of the dashboards has a list of related reports on the lower right-hand side. You can select from any number of different reports to provide the detailed information that you need.
The dashboard also contains some dial graphics showing you storage consumption and capacity in your enterprise and in each datacenter.
Continuing down the left side of the dashboard, these charts show you business-level storage consumption by business entities. Here we can drill down to see usage by Tenant, Line of Business, Business Unit, and Project.
Right-click in this graphic and you can drill down to view storage usage by Line of Business, drill again to Business Unit, and then by Project.
As you can see, you get really detailed information on consumption by your business entities (Tenant, LOB, Business Unit, Project, and Application) very quickly.
6.3 TIER DASHBOARD
Let's take a look at the Tiering Dashboard that we opened a few minutes ago by selecting it from the tabs at the bottom of your Windows screen.
Note: Your data may vary depending on the database used for this demo.
This dashboard gives us a different perspective on how storage is growing and how it is being used. As you see, it looks like the Gold tier has remained relatively stable over the past few months, while Gold-Fast storage, which is more expensive, has grown considerably over the past couple of months. This tells you how your tiering initiatives are progressing. Bronze, which has hardly grown at all, could be an indication that we're spending too much money on storage. You might want to review your storage usage using OnCommand Insight to see how the storage is being consumed and by whom.
Scroll down. Let's look a little closer. OnCommand Insight shows storage usage by business unit, application, and tier. This enables you to understand how storage is being used. You can also view it by datacenter, tier, and business entity.
As we did in the last report, you can right-click and drill down to look at consumption by Tenant, Line of Business, Business Unit, Project, and Application. You can understand how your data is being consumed at multiple levels and from multiple aspects.
Select the "Return" icon at the top right of the Tier Dashboard to return to the folder.
There is a new Storage Tier report located in the Storage and Storage Pool Datamart. Let's take a quick look at it.
Use the breadcrumbs to navigate back to the Capacity 6.3 folder.
Then select the Storage and Storage Pool Capacity Datamart and the Reports folder.
Next, select the Storage Capacity By Tier report to view the report below. This report shows your capacity by tier and how it trends over time. It also provides detail and summary tables at the bottom showing each array's tiers, how much capacity is used, and the percentages (lots of information on a single report).
6.4 ACCOUNTABILITY AND COST AWARENESS
The standard data warehouse chargeback reports are more about accountability than full chargeback. We'll show you this now. We'll also show you how to create your own powerful "custom" chargeback/showback reports using Business Insight Advanced later in this lab.
Select Public Folders in the breadcrumbs at the top left of the data warehouse window.
Select the Chargeback Datamart.
In the Chargeback Datamart, select the Reports folder to access various reports that show capacity and accountability.
Select Capacity Accountability by Business Entity and Service Level Detail. Here you have the option to customize this report to your needs by selecting service levels, resource types, applications, and host and storage names. You also have the option of selecting the business entity by using the dropdown to select any or all of the business entities and projects.
Select All in each category to give you a good representation of the in-depth reporting.
Then click Finish.
The report provides a very detailed view of capacity utilization: the business entity, the application, the host it's running on, the storage array, the volume, and the actual provisioned and used storage. The report is grouped by business unit as well as application, giving you a good representation of who's using what storage.
Note the scroll bar for scrolling on page 1; you can also use the Page Up/Page Down links at the bottom to go to page 2, etc.
Select the Return icon in the upper right to return to the folder of reports.
6.5 UNCHARGED STORAGE
You can also generate reports that help you understand what storage is NOT being accounted for.
Select "Capacity Accountability by Uncharged Capacity per Internal Volume". This provides you with a complete listing, by array and volume, of how much storage is not being charged or accounted for.
You get FULL accountability of which storage is being accounted for and which storage is NOT, across the entire enterprise, regardless of storage vendor.
6.6 IOPS VS CAPACITY REPORTING IN THE DATA WAREHOUSE
Let's look at performance versus capacity and orphaned storage by last access. This adds another dimension to how your storage is being used.
Open the Performance Datamart (hint: use the breadcrumbs to select Public Folders and then select the Performance Datamart).
Select the Internal Volume Daily Performance folder. This provides a really good pictorial view of how your storage is being used.
Select Reports and select Allocated Used Internal Volume Count by IOPS Ranges. This provides a capacity-versus-IOPS report, which is very interesting.
Select the Last Year time period.
Select all storage models and tiers and click Finish.
Selecting all arrays and all tiers gives you a full view of how your storage is being used (or not being used...).
Looking at the results, remember this is storage accessed over the past year. The resulting report shows you all the storage that has (or has not) been accessed over the past year.
As you see from the first bar, there are over 7,300 volumes that have not been accessed in the past year. If we look at it in terms of size, over 3.4 PB has had zero access in the past year. Note: this is actually a real customer, but the names have been sanitized.
You can see how impactful this is. There is over 3.4 PB of storage that has had zero use for a year. This information enables you to start making business decisions on the storage and to better understand how it's being used, so you can reclaim and re-purpose some of that storage. (Talk about ROI!)
6.7 DIGGING INTO THE DETAILS
These charts are really nice, but you need the details to effectively work on identification and recovery. OK, let's go look at the underlying details.
Go back to the Volume Daily Performance 6.3 folder and drill down to Reports (hint: it's in the Performance Datamart).
Select the Array Performance report. This gives you a complete breakdown of the performance for all storage, from the arrays all the way down to the volumes.
Select one year and set the IOPS parameter you want to filter on (I usually start at the default).
This report starts with the Orphan Summary.
Page down to view the storage array summary.
As you see, this is pretty high-level. It shows the total amount of raw and allocated capacity in each storage device versus the total IOPS and the max IOPS actually used over the past year. This tells a very compelling story, but it's still high level.
Page down a few pages to reach the bottom of this section. You see a glossary of terms explaining the column headings.
Now continue to page down to the Host tables. These show you the hostname, the raw and allocated capacity by host, and the IOPS accessed over the past year. This is more detail than the Storage tables above.
Page down past the host tables to look at things from the orphaned-volumes perspective. Here is a great deal of detail that you can use. These are all the volumes that have not been accessed in a full year. It shows you the array name, volume capacities, and hostname, as well as the applications and tiers that have not been accessed in the last year.
Page down to the "Volume by IOPS" tables (may be several pages down). These show you the storage array, volume, capacity, host, application, tier, and the max and total IOPS. So we can say it's a pretty well-rounded report that shows you actual usage (or lack thereof), so you can go reclaim the storage that is not used.
6.8 VM CAPACITY REPORTING
There are several different reports in the VM Capacity Datamart.
Navigate to the VM Capacity 6.3 Datamart.
As you see, we have several reports built in here already.
Select VM Capacity 6.3 and then navigate into the Reports folder.
Select VM Capacity Summary
Select All so we see the VM capacity across the entire enterprise (spanning multiple vCenters).
The results show all the VMs, their capacity, the datastore, the actual capacity, the VM names, the provisioned storage, and the commit ratio of each VM across your entire environment. NOTE: I paged down to the bottom so you can see the total storage and commitment across your whole enterprise, plus a glossary of terms.
Select the "return" button in the upper right corner of the report (it looks like a left-turn arrow).
Next, select the Inactive VMs report to show VMs that have not been accessed in a defined period of time (default 60 days).
Set this time threshold and click Finish.
This is an excellent report showing you which VMs are powered off, how long they have been powered off, and how much capacity each one is holding that nobody else can use. It gives you all the details, including the datacenter, VM, OS, ESX host, cluster, and VMDK, and how long it's been powered off. Armed with this information, you can go recover these VMs and reclaim the storage.
7 CREATE AD-HOC REPORT
Let's show you how easy it is to create custom reports in the data warehouse.
7.1 HOW TO CREATE A CUSTOM SHOWBACK/CHARGEBACK REPORT USING BUSINESS INSIGHT ADVANCED
Below is a great example of the custom chargeback or showback report that you will create. It shows usage by business entity and application, including the variable cost of each VM based on configuration, fixed overhead, and storage usage.
STEPS TO CREATE THIS REPORT
Watch a video on how to create this report. Note: You need a user name and password for this community. To obtain them, click the Become a Member link.
The OnCommand Insight Reporting Portal is accessed through http://<reporting-server>:8080/reporting
Enter your user name and password credentials.
From the Welcome page, select My Home.
From the Launch menu (at the top right corner of the OnCommand Insight Reporting Portal), select Business Insight Advanced.
From the list of packages that appears, click the Capacity <version> folder, and then click VM Capacity <version>.
Create a new report by selecting New from the dropdown in the upper left corner, or Create New if you are on the Business Insight Advanced landing page.
From the pre-defined report layouts in the New pop-up, choose List and click OK.
In the lower right pane, select the Source tab and expand Advanced Data Mart from the VM Capacity package.
From the Advanced Data Mart, expand Business Entity Hierarchy and Business Entity, and drag Tenant onto the report work area.
Collapse Advanced Data Mart and expand Simple Data Mart.
From Simple Data Mart, drag Application onto the report work area to the right of the Tenant column. (TIP: Make sure you place it on the blinking gray BAR on the right of the previous column, or it will give you an error.)
Now we are going to drag multiple columns to the palette to save time building the report.
We will be reporting on the total number of processors (cores) and the memory configured for each VM. So let's grab the following elements from the VM Dimension under the Advanced Data Mart.
From Advanced Data Mart, expand VM Dimension.
Select the next columns IN THE FOLLOWING ORDER.
From Advanced Data Mart > VM Dimension, hold the Control key and select the following columns (in order):
o VM Name
o Processors
o Memory
Click and drag VM Name onto the report work area to the right of the Application column. NOTE: All the columns should follow in the order you selected them, similar to the screenshot below (your data will differ, but the columns will be the same).
Now let's bring capacity information onto the report.
From Simple Data Mart, hold the Control key and select the following columns (in order):
o Tier
o Tier Cost
o Provisioned Capacity (GB)
Click and drag the Tier column onto the report work area to the right of the Application column. NOTE: All the columns should follow in the order you selected them, similar to the screenshot below (your data will differ, but the columns will be the same).
To calculate the storage cost (tier cost per GB multiplied by provisioned capacity), hold the Control key and select Tier Cost and Provisioned Capacity (GB).
Then right-click the Provisioned Capacity column, select Calculate, and select the multiplication calculation.
Business Insight Advanced creates a new column for you, completes the calculations, and puts it in the report.
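In plain terms, the new column is just the tier cost (per GB) multiplied by the provisioned capacity (GB). A minimal Python sketch of the same row-by-row calculation, using made-up sample values (the real numbers come from the datamart):

```python
# Hypothetical sample rows; in the report these values come from the
# Simple Data Mart (Tier Cost is $/GB, Provisioned Capacity is GB).
rows = [
    {"vm": "vm-a", "tier_cost": 0.50, "provisioned_gb": 100},
    {"vm": "vm-b", "tier_cost": 1.25, "provisioned_gb": 40},
]

for row in rows:
    # Storage Cost = Tier Cost * Provisioned Capacity (GB)
    row["storage_cost"] = row["tier_cost"] * row["provisioned_gb"]

print([r["storage_cost"] for r in rows])  # [50.0, 50.0]
```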
Next, let's format and re-title the column.
Right-click on the new column header and select Show Properties.
In the lower right corner, scroll down to the bottom of the Properties box and select the ellipsis on the Data Item Name box. Change the name to Storage Cost and click OK.
Note that the column heading is now Storage Cost.
Now select one of the numeric values in that column and select the Data Format ellipsis from the Properties box in the lower right corner.
From the Data Format dialog box, select Currency from the Format Type dropdown.
As you see from the Properties dialog box, there are lots of options you can set to format the currency numbers in this column. The default is USD, so let's just click OK to accept the default. You will see the column reformat to USD.
Here is our current report. Let's filter out storage that is NOT being charged.
Select any BLANK cell in the Tier Cost column and click the Filter icon in the top toolbar.
Select Exclude Null.
Here is our current report. Notice that all the rows that had NO cost associated with those tiers are deleted, leaving you with only the storage that has charges associated with it. (TIP: in another report, you can actually reverse the logic and show only the storage that is NOT being charged.)
You can also format the Tier Cost column with USD currency if you want.
OK, that was easy, but not complete. Let's add other cost factors into your chargeback report: the cost of VM service levels by configuration, and the fixed overhead costs used by each application.
ADDING VARIABLE COSTS PER VM TO YOUR CHARGEBACK REPORT
Let's say the customer wants to charge per VM based on the number of CPUs and the memory it's configured with. To do that, we first need to create a VM Service Level comprised of the number of CPUs and the memory configured for each VM, then allocate a cost per service level.
To create a VM Service Level, we are going to drop in a small conditional expression to build the service levels per VM. This is an easy example of the flexibility of Business Insight Advanced in creating reports. (DON'T panic: you can skip the conditional expression and just put a fixed cost on each VM if you want. See the overhead example later on... but humor me here in this lab.)
Select the Tier column to mark where we want to insert the new columns.
Select the Toolbox tab at the lower right corner and double-click the Query Calculation icon.
In the Create Calculation dialog box, name the column VM Service Level, select Other Expression, and click OK.
In the Data Item Expression dialog box, copy and paste the following VM Service Level conditional expression into the Expression Definition box and select OK. (Note: if you are remoted into the OnCommand Insight server, you may have to create a text document on the OnCommand server desktop to paste this into before pasting it into the Expression box.)
Below is an example of the conditional expression that gives you the if-else condition for VM Service Level:
IF ([Processors] = 2 AND [Memory] < 2049)
THEN ('Bronze')
ELSE (IF ([Processors] = 2 AND [Memory] < 4097)
THEN ('Bronze_Platinum')
ELSE IF ([Processors] = 4 AND [Memory] < 8193)
THEN ('Silver')
ELSE IF ([Processors] = 4 AND [Memory] > 8193)
THEN ('Silver_Platinum')
ELSE IF ([Processors] = 6 AND [Memory] > 8191)
THEN ('Gold')
ELSE IF ([Processors] = 8 AND [Memory] > 16383)
THEN ('Gold_Platinum')
ELSE ('tbd'))
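If it helps to sanity-check the logic, here is the same mapping as a Python sketch (memory in MB, mirroring the thresholds in the expression above, including its gaps; for example, a 4-CPU VM with exactly 8193 MB falls through to 'tbd'):

```python
def vm_service_level(processors: int, memory_mb: int) -> str:
    """Python mirror of the VM Service Level conditional expression (memory in MB)."""
    if processors == 2 and memory_mb < 2049:
        return "Bronze"
    elif processors == 2 and memory_mb < 4097:
        return "Bronze_Platinum"
    elif processors == 4 and memory_mb < 8193:
        return "Silver"
    elif processors == 4 and memory_mb > 8193:
        return "Silver_Platinum"
    elif processors == 6 and memory_mb > 8191:
        return "Gold"
    elif processors == 8 and memory_mb > 16383:
        return "Gold_Platinum"
    return "tbd"  # catch-all, same as the ELSE branch

print(vm_service_level(2, 2048))   # Bronze
print(vm_service_level(4, 16384))  # Silver_Platinum
```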
Business Insight Advanced will validate the conditional expression (nice to know if you got it right), create the column called VM Service Level, and populate it based on the query. (If you get an error, your conditional expression probably has a syntax or other error.)
You will see a new column added called VM Service Level, with the various service levels for each VM based on the number of CPUs and the amount of memory each has. (At this point there may be duplicates in the list, but not to worry; we are not finished formatting or grouping the report.)
Next, let's add a column that calculates the cost per VM based on the service levels we just established.
Select the Toolbox tab at the lower right corner and double-click the Query Calculation icon.
In the Create Calculation dialog box, name the column Cost Per VM, select Other Expression, and click OK.
In the Data Item Expression dialog box, paste the conditional expression for Cost of VM (below) into the Expression Definition box and select OK.
Example of the conditional expression for Cost per VM:
IF ([VM Service Level] = 'Bronze') THEN (10) ELSE (IF ([VM Service Level] = 'Bronze_Platinum') THEN (15) ELSE IF ([VM Service Level] = 'Silver') THEN (20) ELSE IF ([VM Service Level] = 'Silver_Platinum') THEN (25) ELSE IF ([VM Service Level] = 'Gold') THEN (40) ELSE IF ([VM Service Level] = 'Gold_Platinum') THEN (55) ELSE (30))
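This second expression is just a lookup from service level to a dollar amount. A Python sketch of the same lookup (using the sample dollar amounts from the expression; the lab does not say what billing period they represent):

```python
# Sample dollar amounts per service level, taken from the expression above.
COST_PER_SERVICE_LEVEL = {
    "Bronze": 10,
    "Bronze_Platinum": 15,
    "Silver": 20,
    "Silver_Platinum": 25,
    "Gold": 40,
    "Gold_Platinum": 55,
}

def cost_per_vm(service_level: str) -> int:
    # The ELSE branch of the expression defaults unknown levels to 30.
    return COST_PER_SERVICE_LEVEL.get(service_level, 30)

print(cost_per_vm("Gold"))  # 40
print(cost_per_vm("tbd"))   # 30
```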
You will see a new column added called Cost Per VM, with variable costs for each VM based on its service level.
Next, format the data in the Cost Per VM column to USD currency as you did above.
ADDING FIXED OVERHEAD COSTS TO YOUR CHARGEBACK REPORT
Let's say the customer has determined that the total cost of overhead (including items like heat/AC, floor space, power, rent, operations personnel, helpdesk, etc.) is $24 per VM. Let's create a column called Cost of Overhead and apply this fixed cost. (Note: you can do this for any fixed cost as well, rather than use SQL.)
Select the Toolbox tab at the lower right corner and double-click the Query Calculation icon as above.
In the Create Calculation dialog box, name the column Cost of Overhead, select Other Expression, and click OK.
In the Data Item Expression dialog box, enter a cost of 24 in the Expression Definition box and select OK.
You will see a new column added called Cost of Overhead, with 24 for each VM. (Note: at this point there may be duplicates in the list, but not to worry; we are not finished formatting or grouping the report.)
Next, format the data in the Cost of Overhead column to USD currency as you did above. Then drag the column header and drop it to the right of the Storage Cost column, as shown below.
Subtotaling, naming, and saving the report
Now that we have a cost per VM, overhead, and the cost of storage usage by Tenant, Application, and VM, let's sum the total costs and finish formatting the report by Tenant and Application.
Hold the Control key down and select a numeric cell in each of the Cost Per VM, Storage Costs, and Cost of Overhead columns. Right-click one of the numeric cells, select Calculate, and choose the add function for the three columns.
This will create a new column called "Cost Per VM + Storage Costs + Cost of Overhead" and calculate each row.
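Row by row, this new column is simply the sum of the three cost columns. A quick sketch with hypothetical values (the real numbers come from the columns built earlier):

```python
# Hypothetical per-row costs; in the report these come from the
# Cost Per VM, Storage Costs, and Cost of Overhead columns.
vm_cost = 20          # e.g. a Silver service level VM
storage_cost = 50.0   # tier cost ($/GB) * provisioned capacity (GB)
overhead_cost = 24    # fixed overhead per VM

# Total Cost of Services = Cost Per VM + Storage Costs + Cost of Overhead
total_cost_of_services = vm_cost + storage_cost + overhead_cost
print(total_cost_of_services)  # 94.0
```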
Now format the column for USD currency and retitle it "Total Cost of Services".
Name the report "Total Storage, VM and Overhead Cost by Tenant and Application Chargeback (Showback)" by double-clicking the title area.
Now save it to the Customer Report folder using the same name
FORMATTING AND GROUPING THE REPORT BY APPLICATION AND TENANT
We are not done yet. Now we need to format the report by grouping, subtotaling, and totaling by Tenant and Application.
Hold the Control key down and select the Cost Per VM, Provisioned Capacity, Storage Costs, Cost of Overhead, and Total Cost of Services columns.
Select the Total icon from the Summary dropdown.
If you page down to the bottom of the report, you will see total columns. We'll clean up the summary rows in a minute.
Let's group the report by Tenant and Application.
Hold the Control key down and select the Tenant and Application columns.
Select the Grouping icon from the top toolbar.
CLEANING UP THE REPORT AND RUNNING IT
To clean up the report, right-click and delete the summary ROWS (not columns).
Then go to the bottom of the report, hold the Control key and select both summary rows, then right-click and delete them. (Leave the TOTAL rows.)
Save the report.
Now let's run the report to see how it looks.
Select the Run icon from the toolbar and run the report as HTML. (Note the other formats you can run it in if you want...)
The report will display like this in its final format. I've paged down in the report below to show you subtotals; you can page to the bottom and see the totals by company and the total of all resources charged.
These reports are extremely flexible. Notice the drill-down link in the Tenant column (pictured above in the red circle). If you click the LINK, you will drill down from Tenant to Line of Business, then to Business Unit, etc. If you right-click on the link, you can drill up as well.
You can now schedule this report to run and be distributed in various formats, like any other OnCommand Insight data warehouse report.
Remember, now that you have created this report, every time you run it it will provide the latest usage information. You can automate this report by scheduling it to run and email the results to recipients. Lots of flexibility...
7.2 OTHER OPTIONS FOR AD-HOC REPORTS USING QUERY STUDIO
You can also create simple ad-hoc reports using Query Studio. A very simple example is shown here.
Log onto the data warehouse using admin/admin123 (you must be logged on as Admin to use Query Studio).
From Public Folders, select the Chargeback Datamart.
Select the Launch menu in the upper right corner of the view and select Query Studio.
The datamart is set up as a "simple datamart" and an "advanced datamart". The simple DM contains the elements that most users need for reports. The advanced DM contains all the facts and dimensions for all the elements. At this point, we'll create this report using the simple DM to show you how easy it is.
Expand the simple DM and do the following:
Click and drag Business Unit to the palette.
Click and drag the Application element to the palette. You see the applications line up with their proper business units.
Click and drag Tier over to the palette to organize the storage usage by tier.
Click and drag "Provisioned Raw by GB." (You can select megabytes or terabytes as well as gigabytes; I've selected GB because this is from a volume and application perspective.)
To calculate cost, we need to add the "Tier Cost" to the report.
Click and drag the "Tier Cost" element over; place it between the Provisioned Raw and Tier columns.
To filter out any storage without a tier cost associated, right-click the heading of the Tier Cost column and select Filter (see below for reference).
o Select "Show only the following" (the default).
o Select "Missing values" to expand it.
o Select "Leave out missing values."
o Select OK.
See the results below.
Now let's calculate the total cost of usage by GB per application.
Hold the Control key and highlight the "Provisioned Capacity" and Tier Cost columns until they show yellow.
Select the green Calculation icon at the top of the edit icons above, or right-click on the columns and select "Calculate."
In the calculation window, select multiplication, title the new column "Cost for Storage," and click Insert. A new column is created with the completed calculation.
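The calculated column is simply provisioned capacity multiplied by the per-GB tier cost. Here is a minimal sketch of that arithmetic in plain Python; the column names and sample values are hypothetical, not from the lab database:

```python
# Sketch of the "Cost for Storage" calculated column: provisioned
# capacity (GB) multiplied by the per-GB tier cost. Rows and values
# are hypothetical illustrations only.
rows = [
    {"application": "Exchange", "provisioned_gb": 500, "tier_cost_per_gb": 0.50},
    {"application": "Oracle", "provisioned_gb": 1200, "tier_cost_per_gb": 1.25},
]

for row in rows:
    row["cost_for_storage"] = row["provisioned_gb"] * row["tier_cost_per_gb"]

# Format as currency with a thousands separator, as the report does.
formatted = {r["application"]: f"${r['cost_for_storage']:,.0f}" for r in rows}
```

Query Studio performs the same row-by-row multiplication when you insert the calculated column.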
To format the column, right-click on the new column and select Format Data.
Select currency and the number of decimal places (usually 0), select the 1000s separator, and click OK. See how the column is formatted now.
Double-click the title on the report and re-title the report "Chargeback by Application and BU."
Now you don't really need the Tier Cost column, so you can delete it by right-clicking on the column and selecting Delete.
This is a good raw report, but now let's make it more useful.
To group storage cost by business unit and application:
Select the Business Unit column (it turns yellow) and select the Group By icon on the top line.
You see the report reformat itself into cost by application by business unit.
Click the "Save As" icon and save the report to the public folders.
Further Editing
You can go back and further edit the report like this.
Let's filter out all the N/A values in the BU and Application columns. You have to do this one column at a time.
Right-click the BU column and select Filter.
In the filter dialog window, select "Do not show the following (NOT)" from the "Condition" dropdown.
Select N/A and click OK.
Do the same for the Application column.
Then save the report again.
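The filter-and-group steps above amount to dropping the N/A rows and subtotaling cost per business unit. A small sketch, with hypothetical row data:

```python
# Sketch of the "do not show N/A" filter and business-unit grouping.
# Data and column names are hypothetical.
rows = [
    {"bu": "Finance", "app": "Exchange", "cost": 250.0},
    {"bu": "N/A", "app": "N/A", "cost": 75.0},
    {"bu": "Finance", "app": "Oracle", "cost": 1500.0},
    {"bu": "Legal", "app": "Exchange", "cost": 400.0},
]

# Filter: leave out rows where the BU or Application is N/A.
kept = [r for r in rows if r["bu"] != "N/A" and r["app"] != "N/A"]

# Group: subtotal cost per business unit, as the Group By icon does.
subtotals = {}
for r in kept:
    subtotals[r["bu"]] = subtotals.get(r["bu"], 0.0) + r["cost"]
```

The Group By icon produces exactly this kind of per-BU subtotal row in the rendered report.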
As you can see, you now have a better-quality report.
To exit Query Studio, click the "Return" icon at the top right corner of the screen.
8 SCHEDULING REPORTS FOR DISTRIBUTION
OK, now that the report is saved, let's schedule it for running and distribution. You can schedule all the built-in reports in OnCommand Insight the same way.
Go to the chargeback report we just created (you should be looking at the folder where you saved it…).
Select the Schedule icon on the right-hand side, where you can set the properties.
As you see on the right, you can schedule the start and finish dates.
You can send this report just one time by clicking Disable.
Set the schedule options for weekly, daily, monthly, etc. Schedule this report to run and send it to yourself at 3 p.m. every Tuesday until Feb 1, 2012. As you can see, you can schedule biweekly, several times a week, or several times a day, or you can also set up by month, by year, and even by trigger. As you see, lots of options.
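As a rough illustration of the schedule used in this exercise (weekly, Tuesday at 3 p.m., until an end date), here is a small Python sketch of the date arithmetic. This is not how Cognos computes schedules internally; it just shows the logic behind one:

```python
# Sketch: compute the next weekly run (Tuesday, 3 p.m.) for a
# schedule with an end date. Dates come from the lab exercise.
from datetime import datetime, timedelta

def next_run(now, end):
    run = now.replace(hour=15, minute=0, second=0, microsecond=0)
    # Advance one day at a time until we land on a Tuesday
    # (weekday 1) at 3 p.m. that is strictly after "now".
    while run.weekday() != 1 or run <= now:
        run += timedelta(days=1)
    return run if run <= end else None  # None: past the end date

# From Thursday Jan 5, 2012 the next Tuesday-3 p.m. run is Jan 10.
nxt = next_run(datetime(2012, 1, 5, 9, 0), datetime(2012, 2, 1))
```

The scheduler repeats this "find next occurrence" step for whatever recurrence you pick (biweekly, monthly, and so on).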
There are a lot of options for report format. The default format is HTML, but we can override that default by clicking and choosing from PDF, Excel, XML, CSV, etc.
For delivery, we can email it, save it, or print the report to a specific printer. You can send the report via email to users, distribution lists, etc. We can include a link to the report
or attach it directly to the email as well. NOTE: Recipients must be able to log in to the OnCommand DWH to access the link.
When you are done, click OK, and the schedule is set.
9 ENDING COMMENTS AND FEEDBACK
I hope this lab was of value to you. Your feedback is important to the quality of this lab document. Please provide feedback to Dave Collins at davec@netapp.com.
NetApp provides no representations or warranties regarding the accuracy, reliability, or serviceability of any information or recommendations provided in this publication, or with respect to any results that may be obtained by the use of the information or observance of any recommendations provided herein. The information in this document is distributed AS IS, and the use of this information or the implementation of any recommendations or techniques herein is a customer's responsibility and depends on the customer's ability to evaluate and integrate them into the customer's operational environment. This document and the information contained herein may be used solely in connection with the NetApp products discussed in this document.
© 2012 NetApp, Inc. All rights reserved. No portions of this document may be reproduced without prior written consent of NetApp, Inc. Specifications are subject to change without notice. NetApp, the NetApp logo, Go further, faster, xxx, and xxx are trademarks or registered trademarks of NetApp, Inc. in the United States and/or other countries. All other brands or products are trademarks or registered trademarks of their respective holders and should be treated as such. TR-XXX-XX
Using the dropdown at the top of the table, group the main view by "Connected Device Type, then Name."
Using the dropdown next to it, set the timeframe to "Last Week" and hit the refresh icon to the right.
Sort the Distribution column descending (arrow pointing down).
Expand Hosts.
Expand hosts ny_ora1 and exchange_ny1.
If you look at the Value and Distribution columns, you can see how HBAs are balanced on these hosts. On host ny_ora1 you see three HBAs that are balanced very well. But looking at host exchange_ny1, you see that one of your two HBAs has over 95% of the traffic load on it, while the other one has less than 5% of the traffic. So you can see an imbalance of the load across your HBAs. Perhaps the multipath software is not configured correctly, doesn't work, or is installed but not turned on. However, also look at the fact that one HBA is 4Gb while the other is 2Gb. The admins may have purposely configured this host's traffic to compensate for the slower HBA…
Select the Port Performance Distribution and Port Performance microviews to view
this analysis over time
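The check you just performed with the Value and Distribution columns can be sketched as each HBA's percentage share of the host's total traffic. HBA names and throughput numbers below are hypothetical:

```python
# Sketch of the Distribution column: each HBA's share of the host's
# total traffic, plus a crude imbalance flag. Values hypothetical.
def distribution(traffic_by_hba):
    total = sum(traffic_by_hba.values())
    return {hba: round(100 * v / total, 1) for hba, v in traffic_by_hba.items()}

# A host like exchange_ny1: one HBA carrying nearly all the load.
shares = distribution({"hba0": 95.0, "hba1": 5.0})
imbalanced = max(shares.values()) > 90  # one path dominating the traffic
```

A host like ny_ora1, with three HBAs near 33% each, would not trip the flag.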
Select exchange_ny1 from the main menu above. View the performance and distribution of both HBAs. If you select one or the other, the performance and distribution charts change to show you the details of what you've selected. This function is the same throughout the OnCommand Insight GUI.
4.3 CANDIDATES FOR HOST VIRTUALIZATION BASED ON ACTUAL PERFORMANCE
This performance view from the host SAN perspective shows you which are the busiest servers and which are candidates for virtualization.
Toggle off the two performance charts.
Collapse the expanded columns using the Collapse All Groups icon on top of the main view.
Expand Hosts again. Notice your busiest servers are at the top of the list.
Use the vertical slide bar to go to the bottom of the host list to see your least busy hosts. As you see here, there are many hosts down near the bottom that have hardly any traffic. Note: if you have a virtualization project going on, you can very quickly isolate which physical hosts don't have much traffic to their applications and conduct your due diligence on those applications for possible relocation to a VM environment.
You can also use the same information here to choose which ESX hosts are good candidates to receive those applications, based on how much traffic they are generating on the SAN.
4.4 STORAGE ARRAY PERFORMANCE BASED ON SAN TRAFFIC
We use the same logic and methods to optimize the traffic across the storage ports of the arrays.
Collapse the Hosts section and expand the Storage section.
You can see the busiest arrays at the top.
Expand storage array XP 1024 to see the traffic flow through the storage ports. In this case over 80% of the traffic is going across two of the six ports on the storage array. Not very well balanced. You can rebalance this traffic, OR, using this information, you can select a lesser-used storage port to provision your NEXT Tier 1 application to. This helps you intelligently provision and optimize your environment using real traffic analysis.
4.5 STORAGE TIERING AND ANALYSIS
Similarly to what you did with the hosts, you can see the storage arrays that are NOT so busy.
Scroll to the bottom of the Storage Array list.
There are several expensive tier 1 Symmetrix and other arrays at the bottom of this list that have very little traffic accessing them. These arrays may have lots of data on them, but nobody's using it. Armed with this information, you could take a look at the application data on these expensive tier 1 arrays and move the applications to less expensive tier 2 or tier 3 arrays, OR archive the data. Then you can decommission or repurpose these expensive arrays (LOTS of ROI potential here).
4.6 SWITCH ISL TRAFFIC VISIBILITY AND OPTIMIZATION
OnCommand Insight shows you only the ISLs (Inter-Switch Links) under the Switches category.
Collapse the Storage category in the main view and expand the Switch category.
Expand Switch 78 and hcis300.
As we saw with hosts and arrays, we can see exactly how well balanced the traffic is across the ISLs. Switch hcis300 is well balanced, but on Switch 78 we see that 90% of the traffic is going across one switch link and only 9% across the other. If this is a trunk, it is severely out of balance.
We also see which are the busiest and least busy switches. This allows us to balance out (optimize) our environment as well as weed out the least busy switches.
4.7 VIRTUAL MACHINE AND DATA STORE PERFORMANCE: TROUBLESHOOTING END-TO-END PERFORMANCE ISSUES USING "ANALYZE PERFORMANCE"
Let's put all this performance information to good use.
USE CASE: I may have gotten a call from a user complaining that the application on VM-70 is running slow, or I may have received an alert from a threshold being breached. Let's troubleshoot the problem.
Select Virtual Machine Performance.
Select Custom from the "Timeframe" dropdown menu next to the grouping menu. Enter the dates January 1, 2012 through now.
Then hit the green refresh button next to the dropdown.
Sort the VM Disk Top Latency column descending to get the longest latency at the top.
Here we see that, in fact, VM-70 does not appear to have any performance issues, but we do see very high CPU, memory, and data store latency on VM-60 and VM-61.
Look at column 2. The common factor between VM-70 (the user complaint) and VM-60 is DS-30.
Open the Datastore Performance microview to validate the high latency time.
Right-click on VM-60 and select Analyze Performance.
This opens an analysis of everything associated with VM-60 and DS-30.
See the tabs across the top of the window. Each of these tabs provides in-depth visibility into performance within each category.
Selecting the Disk tab, I see that although there are a few high "top" utilization values, overall utilization and IOPS are relatively low, so we can rule out a hot-disk issue.
Select the Volumes and Internal Volumes tabs. I see there are some relatively high top response times but still very low IOPS, which tells me other factors are affecting response time and the slowness of the application on VM-70.
Select Backend Volumes. We see the storage is virtualized, and we can see the performance of the backend volumes here. I see some possibly higher IOPS but still no glaring issues in performance.
To make sure I don't have a SAN problem, I select the Switch Performance tab. It shows an imbalance between the 2Gb HBAs on ESX1 (where VM-60 and VM-70 are) and a potential optimization or outage issue, but no gridlock.
Select the Hosts tab. This tab shows me that host ESX1 is the same host that holds VM-60 and VM-70. VM-60 appears to be causing very high CPU and memory usage, which is causing contention with time sharing during disk access, thus creating high disk latency. But the disk IOPS are still very low.
We can deduce that VM-60 is probably not sized right for the application that is driving it so hard. This is probably what's causing the disk latency issue, so the chances of an actual disk problem are slim.
4.8 VM PERFORMANCE
VM Performance helps you troubleshoot the same scenarios. Here you can understand what's going on in the whole environment.
Select Virtual Machine Performance.
Sort the Top Disk Latency column in descending order so the largest latency rises to the top. In this case VM-61 is chewing up a lot of memory and a lot of CPU time but generating low disk IOPS. The VM appears to be causing the latency issues.
Select VM-61. You can open a microview and see the VMDK performance as well.
Add a chart microview.
You can also break it out by volume performance and data store performance, thus giving you a more holistic picture of the environment and helping you troubleshoot to resolution.
The takeaway is that you can troubleshoot performance issues from many different angles and go in many different directions to quickly narrow down the problem.
4.9 APPLICATION AND HOST PERFORMANCE
You can add your applications and hosts to any of these performance views to help you understand how performance is affecting your applications. That is important to the business customer. You can drill down and understand where the performance issue is, with visibility from the application all the way to the disks.
Scroll down to ESX1
Use the horizontal slide bars in the main and microviews to see performance info
OnCommand Insight shows you performance from the host perspective all the way back to the storage, but remember, it does not have agents on the hosts, so it cannot show you the details of performance on the host itself.
Review questions:
What is the value of Analyze Performance?
In which areas can we view performance metrics under Analyze Performance?
5 PLANNING TOOLS
5.1 TASK AND ACTION PLANNING AND VALIDATION
OnCommand Insight has two planning tools to help you plan, validate, and monitor changes in your environment. One is a change management tool, and the other is a migration tool for switches only.
The change management tool (or "what-if" tool) helps you create tasks and actions within those tasks using a wizard. It helps you logically configure the changes that you need to make, test and validate those changes before you make them, and monitor the progress of changes as you make them. This significantly reduces your risk when making changes, because you can pretest them before you make any actual changes in your environment.
NOTE: Remember, OnCommand Insight is a READ-ONLY tool, so it does not perform any active tasks. Use it in the planning, validating, and execution monitoring of your change management.
Select the Planning menu.
Select Plans to access the tool.
Select the task ID oadmin 01082007 – Replace HBA Clearcase1.
Notice the Actions list for the task. These are generated by you to help you logically and accurately list out the tasks.
To add more actions, simply right-click in the action area and select "Add Action."
In the new action window, scroll down and select the action you want to perform. You can add a description and other parameters, then select OK.
Then you can pre-validate the actions to ensure you know the results of each action BEFORE you actually perform the task. To do this, right-click the task and select Validate Task.
As you see below, OnCommand Insight validates each action against the current configuration in your environment to show what has been completed correctly (GREEN CHECKMARK), what is not completed (BLANK BOX), and what is not completed correctly (RED X).
When you build the action list, OnCommand Insight automatically compares your planned changes to your existing environment and anticipates any future violations that could occur if you made these changes without correcting the planned actions, OR violations that already exist in your environment.
Once you complete creating your list of action items, you can right-click and validate the actions as many times as you want until they are completed. OnCommand Insight validates every one of these actions. It will show you whether the actions are completed correctly, done wrong, or not completed at all. It gives you a preview of potential issues before you make the changes, thus lowering your risk.
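Conceptually, the validation pass compares each planned action's expected end state with what is actually observed in the environment. A minimal sketch, with hypothetical action and configuration structures (not Insight's internal data model):

```python
# Sketch of pre-validating planned actions against the current
# configuration: each action is marked completed, not completed, or
# completed incorrectly. Data structures are hypothetical.
def validate(actions, current_config):
    results = {}
    for action in actions:
        observed = current_config.get(action["target"])
        if observed is None:
            results[action["name"]] = "not completed"           # blank box
        elif observed == action["expected"]:
            results[action["name"]] = "completed"               # green checkmark
        else:
            results[action["name"]] = "completed incorrectly"   # red X
    return results

actions = [
    {"name": "replace HBA", "target": "clearcase1:hba0", "expected": "4Gb"},
    {"name": "rezone port", "target": "switch78:port3", "expected": "zoneA"},
]
current = {"clearcase1:hba0": "4Gb"}  # the rezone has not been done yet
status = validate(actions, current)
```

Re-running the comparison after each real change is what lets you track task progress over time.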
5.2 SWITCH MIGRATION TOOL
The migration tool provides you with instantaneous visibility into all of the environment and business entities that will be affected by a migration to new or updated switches. Say you want to just update the firmware on a switch. What if it goes down in the middle of the upgrade? What does that affect in your environment? Knowing this ahead of time can reduce your risk by giving you the complete picture of who and what will be affected by the interruption.
The migration tool allows you to tell OnCommand Insight which switches you want to upgrade or replace. Because OnCommand Insight knows all the hosts, storage arrays, volumes, business units, and applications that are affected by this change, it can provide you with the current violations as well as the future violations that will occur when the switches are pulled out. This enables you to validate the total impact of the changes you want to make BEFORE you make them, so you can reduce your risk by fixing issues before they occur.
NOTE: Remember, OnCommand Insight is a READ-ONLY tool, so it does not perform any migration tasks. Use it in the planning and execution monitoring of your migration.
Under the Planning menu, select Migrations. This shows you the migration tasks already created and the existing impact on your business entities of the proposed changes.
To add a new task, right-click in the task area and select Add Task.
Complete the task details above and click Next to select the switch(es) to migrate.
Select the switches to be updated or replaced and click Finish.
Select the new task in the main screen and use the microviews to provide you with affected paths, impact, and quality assurance views.
Using this information, you can speed up the time it takes to migrate switches, because it cuts the due diligence time and lowers your risk: you know the impacts before you take any action.
6 DATA WAREHOUSE
6.1 INTRODUCTION AND OVERVIEW
Let's introduce you to the data warehouse. We'll talk about the Datamarts and navigation, then we'll go into the reports, and we'll finish by showing you how to create ad-hoc reports using Query Studio.
The data warehouse is made up of several Datamarts. Datamarts are sets of data that relate to each other.
Open a browser and go to http://localhost:8080/reporting.
Log on using admin/admin123.
If you receive this page, uncheck "show this page…" and select My Home.
Data warehouse (DWH) home page: Public Folders.
The data warehouse has several built-in Datamarts. Above you see the three primary Datamarts: the Chargeback Datamart, the Inventory Datamart, and the Storage Efficiency Datamart. Additionally, we have two folders that contain other Datamarts, for Capacity and Performance.
Select the Capacity 63 folder.
As you can see, there are other capacity-related Datamarts, including the Internal Volume, Volume, Storage and Storage Pool, and VM Capacity Datamarts. Datamarts provide you with easy-to-use data elements related to those specific categories, making it easier for you to use the existing reports and, more importantly, helping you create your own custom reports using the drag-and-drop technology we'll show later in this lab.
Select the Storage Capacity Datamart.
There are four folders located within EVERY Datamart. Most built-in reports are in the Reports folder. Any custom reports you create MUST be saved in the Customer Report or Customer Dashboard folders in order to be preserved during upgrades.
Select Dashboards (notice the BREADCRUMBS to help you navigate).
Which dashboards are located in the folder?
6.2 PLAN - CAPACITY FORECAST DASHBOARD
The data warehouse has over 200 built-in dashboards and reports. Let's take a look at a few.
The capacity forecast dashboard provides a history of how storage has been used, as well as trends and forecasts into the future. It shows this by data center and by tier.
Select the Capacity Dashboard. This may take a bit of time to paint, so be patient.
The capacity forecast dashboard provides you with trending and forecasting of your capacity across your entire environment. NOTE: your data may vary from the picture depending on the demo DB you are using and the date (because it's a trending chart).
While we are at it, let's also stage the tiering dashboard in a new window by holding the Shift key and selecting the Tiering Dashboard, so we can discuss it as well in a few minutes.
When it first opens, you see in the upper left the Capacity Consumption Forecast report by datacenter and tier. The initial view shows how much storage is left in each datacenter, by tier, before it reaches 80% of capacity (adjustable by the user). The graph on the right depicts the usage trending and forecasting over time. The "Reset Selection" button resets the graphic to show storage trending across the entire enterprise.
Select the Tokyo Gold-Fast block on the matrix. Notice the graph at the right changes to reflect the storage consumption trending and forecasting for that tier at that datacenter.
Reset the Storage Capacity Trend chart by clicking Reset Selection next to the matrix chart. The chart on the right will show the trending and forecasting for the entire enterprise.
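The kind of trend-and-forecast shown in these charts can be sketched as a simple linear fit over daily usage samples. This is an assumption about the math (the dashboard's actual forecasting model isn't documented in this lab), and the sample data below is hypothetical:

```python
# Sketch of a linear capacity forecast: fit a straight line to daily
# used-GB samples and project when usage crosses 80% of capacity.
def days_until_threshold(used_gb_by_day, capacity_gb, threshold=0.80):
    n = len(used_gb_by_day)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(used_gb_by_day) / n
    # Least-squares slope of used GB per day.
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, used_gb_by_day)) \
        / sum((x - mean_x) ** 2 for x in xs)
    if slope <= 0:
        return None  # flat or shrinking: threshold never reached
    target = threshold * capacity_gb
    # Solve the fitted line for the target, relative to the last sample.
    days = (target - mean_y) / slope + mean_x - (n - 1)
    return max(0, round(days))

# 10 GB/day growth toward the 80% mark (800 GB) of a 1 TB tier.
eta = days_until_threshold([500, 510, 520, 530, 540], 1000)
```

With the hypothetical samples above (540 GB used, growing 10 GB/day), the 800 GB mark is 26 days out.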
Scroll down the dashboard to view the list of reports on the right side. Each of the dashboards has a list of related reports on the lower right-hand side. You can select from any number of different reports to provide the detailed information that you need.
The dashboard also contains some dial graphics showing you storage consumption and capacity in your enterprise and each datacenter.
Continuing down the left side of the dashboard, these charts show you business-level storage consumption by business entities. Here we can drill down to see usage by tenant, line of business, business unit, and project.
Right-click in this graphic and you can drill down to view storage usage by line of business; drill again to view it by business unit and by project.
As you can see, you get really detailed information on consumption by your business entities, from tenant, LOB, business unit, and project down to application, in a very quick form.
6.3 TIER DASHBOARD
Let's take a look at the tiers dashboard that we opened up a few minutes ago by selecting it from the tabs at the bottom of your Windows screen.
Note: your data may vary depending on the database used for this demo.
This dashboard gives us a different perspective on how storage is growing and how it is being used. As you see, it looks like the gold tier has remained relatively stable over the past few months, while gold-fast storage, which is more expensive, has grown considerably over the past couple of months. This tells you how your tiering initiatives are progressing. Bronze, which has hardly grown at all, could be an indication that we're spending too much money on storage. You might want to review your storage usage using OnCommand Insight to see how the storage is being consumed, and by whom.
Scroll down. Let's look a little closer. OnCommand Insight shows storage usage by business unit, application, and tier. This enables you to understand how storage is being used. You can also view it by data center, tier, and business entity.
As we did in the last report, you can right-click and drill down to look at consumption by tenant, line of business, business unit, project, and application. You can understand how your data is being consumed at multiple levels and from multiple aspects.
Select the "Return" icon at the top right of the Tier Dashboard to return to the folder.
There is a new storage tier report located in the Storage and Storage Pool Datamart. Let's take a quick look at it.
Use the breadcrumbs to navigate back to the Capacity 63 folder.
Then select the Storage and Storage Pool Capacity Datamart and the Reports folder.
Next, select the Storage Capacity By Tier report to view the report below. This report shows your capacity by tier and how it trends over time. It also provides detail and summary sections at the bottom showing each array, its tiers, how much capacity is used, and the percentage (lots of information in a single report).
6.4 ACCOUNTABILITY AND COST AWARENESS
The standard data warehouse chargeback reports are more about accountability than about full chargeback. We'll show you this now. We'll also show you how to create your own powerful "custom" chargeback/showback reports using Business Insight Advanced later in this lab.
Select Public Folders in the breadcrumbs at the top left of the data warehouse window.
Select the Chargeback Datamart.
In the Chargeback Datamart, select the Reports folder to access various reports that show capacity and accountability.
Select Capacity Accountability by Business Entity and Service Level Detail. Here you have the option to customize this report to your needs by selecting service levels, resource types, applications, and host and storage names. You also have the option of selecting the business entity by using the dropdown to select any or all of the business entities and projects.
Select all in each category to give you a good representation of the in-depth reporting.
Then click Finish.
The report provides a very detailed version of the capacity utilization by business entity and application, including the host it's running on, the storage array, the volume, and the actual provisioned and used storage. This report is grouped by business unit as well as application; this provides you with a good representation of who's using what storage.
Note the scroll bar for scrolling on page 1; you can also use the Page Up/Page Down links at the bottom to go to page 2, etc.
Select the Return icon in the upper right to return to the folder of reports.
6.5 UNCHARGED STORAGE
You can also generate reports that help you understand what storage is NOT being accounted for.
Select "Capacity Accountability by Uncharged Capacity per Internal Volume." This provides you with a complete listing, by array and volume, of how much storage is not being charged or accounted for.
You get FULL accountability of which storage is being accounted for and which storage is NOT, across the entire enterprise, regardless of storage vendor.
6.6 IOPS VS CAPACITY REPORTING IN THE DATA WAREHOUSE
Let's look at performance versus capacity and orphaned storage by last access. This adds another dimension to how your storage is being used.
Open the Performance Datamart (hint: use the breadcrumbs to select Public Folders and then select the Performance Datamart).
Select the Internal Volume Daily Performance folder. This provides a really good pictorial view of how your storage is being used.
Select Reports and select Allocated used internal volume Count by IOPS Ranges. This provides a capacity-versus-IOPS report, which is very interesting.
Select the Last Year time period.
Select all storage models and tiers and click Finish.
Selecting all arrays and all tiers gives you a full view of how your storage is being used (or not being used…).
Looking at the results, remember this is storage accessed over the past year. The resulting report shows you all the storage that has (or has not) been accessed over the past year.
As you see from the first bar, there are over 7,300 volumes that have not been accessed in the past year. If we look at it in terms of size, over 3.4 PB has had zero accesses in the past year. Note: this is actually a real customer, but the names have been sanitized.
You can see how impactful this is. There is over 3.4 PB of storage that has had zero use for a year. This information enables you to start making business decisions on the storage and to better understand how it's being used, so you can reclaim and repurpose some of that storage. (Talk about ROI!)
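The reclamation candidates are simply the volumes whose I/O over the reporting period is zero; summing their allocated capacity gives the reclaimable total. A sketch with hypothetical volume records:

```python
# Sketch of the reclamation math: total capacity held by volumes
# with no I/O over the past year. Volume records are hypothetical.
volumes = [
    {"name": "vol01", "allocated_gb": 2048, "max_iops_last_year": 0},
    {"name": "vol02", "allocated_gb": 512, "max_iops_last_year": 340},
    {"name": "vol03", "allocated_gb": 4096, "max_iops_last_year": 0},
]

idle = [v for v in volumes if v["max_iops_last_year"] == 0]
reclaimable_gb = sum(v["allocated_gb"] for v in idle)
```

The report's first bar is exactly this kind of count-and-sum over the zero-IOPS range.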
6.7 DIGGING INTO THE DETAILS
These charts are really nice, but you need the details to effectively work on identification and recovery. OK, let's go look at the underlying details.
Go back to the Volume Daily Performance 63 folder and drill down to Reports (hint: it's in the Performance Datamart).
Select the Array Performance report. This gives you a complete breakdown of the performance for all storage, from the arrays all the way down to the volumes.
Select one year and set the IOPS parameter you want to filter on (I usually start at the default).
This report starts with the Orphan Summary.
Page down to view the storage array summary.
As you see, this is pretty high level. It shows the total amount of raw and allocated capacity in each storage device versus the total IOPS and the max IOPS actually used over the past year. This tells a very compelling story, but it's still high level.
Page down a few pages to reach the bottom of this section. You see a glossary of terms explaining the column headings.
Now continue to page down to the host tables. These show you the hostname, the raw and allocated capacity by host, and the IOPS accessed over the past year. This is more detail than the storage tables above.
Page down past the host tables and you get the orphaned-volumes perspective. Here is a great deal of detail that you can use. These are all the volumes that have not been accessed in a full year. It shows you the array name, volume capacities, and hostname, as well as the applications and tiers, for everything that has not been accessed in the last year.
Page down to the "Volume by IOPS" tables (maybe several pages down). These show you the storage array, volume, capacity, host, application, tier, and the max and total IOPS. So we can say it's a pretty well-rounded report that shows you actual usage (or lack thereof) so you can go reclaim the storage that is not used.
6.8 VM CAPACITY REPORTING
There are several different reports in the VM Capacity Datamart.
Navigate to the VM Capacity 63 Datamart.
As you see, we have several reports built in here already.
Select VM Capacity 63 and then navigate into the Reports folder.
Select VM Capacity Summary.
Select All so we see the VM capacity across the entire enterprise (spanning multiple vCenters).
The results show all the VMs, their capacity, the datastore, the actual capacity, the VM names, the provisioned storage, and the commit ratio of each VM across your entire environment. NOTE: I paged down to the bottom so you can see the total storage and commitment across your whole enterprise, plus a glossary of terms.
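The commit ratio shown in this report is simply storage promised to VMs divided by what the datastore actually has. A quick Python sketch of that arithmetic, using hypothetical numbers rather than data from the lab environment:

```python
# Sketch of the commit-ratio arithmetic behind the VM Capacity Summary.
# The figures below are hypothetical; the report pulls real ones from the DWH.
def commit_ratio(provisioned_gb: float, datastore_capacity_gb: float) -> float:
    """Ratio of storage provisioned to VMs vs. the datastore's actual capacity."""
    return provisioned_gb / datastore_capacity_gb

# A 500 GB datastore hosting VMs provisioned for a total of 1200 GB
# is over-committed 2.4x.
ratio = commit_ratio(1200, 500)
print(round(ratio, 1))  # 2.4
```

A ratio above 1.0 means thin provisioning has promised more capacity than physically exists, which is exactly what the enterprise-wide totals at the bottom of the report let you spot.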
Select the "return" button in the upper right corner of the report (it looks like a left-turn arrow).
Next, select the Inactive VMs report to show you VMs that have not been accessed in a defined period of time (default 60 days).
Set this time threshold and click Finish.
This is an excellent report showing you which VMs are powered off, how long they have been powered off, and how much capacity each one of those is holding that nobody else can use. It gives you all the details, including the datacenter, VM, OS, ESX host, cluster, and VMDK, and how long it's been powered off. Armed with this information, you can go recover these VMs and reclaim the storage.
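The report's logic amounts to filtering on a powered-off timestamp older than the chosen threshold. A small Python sketch of that filter, with hypothetical VM records (OnCommand Insight does this inside the DWH, not via this code):

```python
from datetime import datetime, timedelta

def inactive_vms(vms, threshold_days=60, now=None):
    """Return VMs that have been powered off longer than the threshold (default 60 days)."""
    now = now or datetime.now()
    cutoff = now - timedelta(days=threshold_days)
    return [vm for vm in vms if vm["powered_off_since"] <= cutoff]

# Hypothetical data: vm-a has been off since January, vm-b only since late July.
now = datetime(2012, 8, 1)
vms = [
    {"name": "vm-a", "powered_off_since": datetime(2012, 1, 15)},
    {"name": "vm-b", "powered_off_since": datetime(2012, 7, 20)},
]
print([vm["name"] for vm in inactive_vms(vms, 60, now)])  # ['vm-a']
```

Only VMs past the cutoff show up, which is why tightening or loosening the threshold at the report prompt changes how aggressive your reclamation list is.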
7 CREATE AD-HOC REPORT
Let's show you how easy it is to create custom reports in the data warehouse.
7.1 HOW TO CREATE A CUSTOM SHOWBACK/CHARGEBACK REPORT USING
BUSINESS INSIGHT ADVANCED
Below is a great example of the custom Chargeback or Showback report that you will create. It shows usage by Business Entity and Application, including the variable cost of each VM based on configuration, fixed overhead, and storage usage.
STEPS TO CREATE THIS REPORT
Watch a video on how to create this report. Note: You need a user name and password for this community. To obtain them, click the Become a Member link.
The OnCommand Insight Reporting Portal is accessed through http://&lt;reporting-server&gt;:8080/reporting.
Enter your User name and Password credentials.
From the Welcome page, select My home.
From the Launch menu (at the top right corner of the OnCommand Insight Reporting portal), select Business Insight Advanced.
From the list of packages that appears, click the Capacity &lt;version&gt; folder and then click VM Capacity &lt;version&gt;.
Create a new report by selecting New from the dropdown in the upper left corner, or Create New if you are on the Business Insight Advanced landing page.
From the pre-defined report layouts in the New pop-up, choose List and click OK.
In the lower right pane, select the Source tab and expand the Advanced Data Mart from the VM Capacity package.
From the Advanced Data Mart, expand Business Entity Hierarchy and Business Entity, then drag Tenant onto the report work area.
Collapse the Advanced Data Mart and expand the Simple Data Mart.
From the Simple Data Mart, drag Application onto the report work area to the right of the Tenant column. (TIP: Make sure you place it on the blinking gray BAR to the right of the previous column or it will give you an error.)
Now we are going to drag multiple columns to the palette to save time building the report.
We will be reporting on the total number of processors (cores) and the memory that is configured for each VM. So let's grab the following elements from the VM Dimension under the Advanced Data Mart.
From the Advanced Data Mart, expand VM Dimension.
Select the next columns IN THE FOLLOWING ORDER.
From Advanced Data Mart &gt; VM Dimension, hold the control key and select the following columns (in order):
o VM Name
o Processors
o Memory
Click and drag VM Name onto the report work area to the right of the Application column. NOTE: All the columns should follow in the order you selected them, similar to the screenshot below (your data will differ, but the columns will be the same).
Now let's bring capacity information onto the report.
From the Simple Data Mart, hold the control key and select the following columns (in order):
o Tier
o Tier Cost
o Provisioned Capacity (GB)
Click and drag the Tier column onto the report work area to the right of the Application column. NOTE: All the columns should follow in the order you selected them, similar to the screenshot below (your data will differ, but the columns will be the same).
To create a summary of cost per GB, hold the control key and select the Tier Cost and Provisioned Capacity (GB) columns.
Then right-click the Provisioned Capacity column, select Calculate, and select the multiplication calculation.
Business Insight Advanced creates a new column for you, completes the calculations, and puts it in the report.
Next, let's format and re-title the column.
Right-click on the new column header and select Show Properties.
In the lower right corner, scroll down to the bottom of the properties box and select the ellipsis on the Data Item Name box. Change the name to Storage Cost and click OK.
Note the column heading is now Storage Cost.
Now select one of the numeric values in that column and select the Data Format ellipsis from the properties box in the lower right corner.
From the Data Format dialog box, select Currency from the Format type dropdown.
As you see from the Properties dialog box, there are lots of options you can set to format the currency numbers in this column. The default is USD, so let's just click OK to accept the default. You will see the column reformat to USD.
Here is our current report. Let's filter out storage that is NOT being charged.
Select any BLANK cell in the Tier Cost column and click the filter ICON in the top toolbar.
Select Exclude Null.
Here is our current report. Notice all the rows that had NO cost associated with those tiers are deleted, leaving you with only the storage that has charges associated with it. (TIP: In another report, you can reverse the logic and show only the storage that is NOT being charged as well…)
You can also format the Tier Cost column as USD currency if you want.
OK, that was easy, but not complete. Let's add other cost factors into your chargeback report: the cost of VM Service Levels by configuration, and the fixed overhead costs used by each application.
ADDING VARIABLE COSTS PER VM TO YOUR CHARGEBACK REPORT
Let's say the customer wants to charge per VM based on the number of CPUs and the memory it's configured with. To do that, we first need to create a VM Service Level comprised of the number of CPUs and the memory configured for each VM, then allocate a cost per Service Level.
To create a VM Service Level, we are going to drop in a small conditional expression to build the Service Levels per VM. This is an easy example of the flexibility of Business Insight Advanced in creating reports. (DON'T panic: you can skip the conditional expression and just put a fixed cost on each VM if you want. See the Overhead example later on… but humor me here in this lab.)
Select the Tier column to mark where we want to insert the new columns.
Select the Toolbox tab in the lower right corner and double-click the Query Calculation ICON.
In the Create Calculation dialog box, name the column VM Service Level, select Other Expression, and click OK.
In the Data Item Expression dialog box, copy and paste the following VM Service Level conditional expression into the Expression Definition box and select OK. (Note: if you are remoted into the OnCommand Insight server, you may have to create a text document on the OnCommand server desktop to paste this into before pasting it into the Expression box.)
Below is an example of the conditional expression that gives you the if-else condition for VM Service Level:
IF([Processors] = 2 AND [Memory] &lt; 2049)
THEN ('Bronze')
ELSE (IF([Processors] = 2 AND [Memory] &lt; 4097)
THEN ('Bronze_Platinum')
ELSE IF([Processors] = 4 AND [Memory] &lt; 8193)
THEN ('Silver')
ELSE IF([Processors] = 4 AND [Memory] &gt; 8193)
THEN ('Silver_Platinum')
ELSE IF([Processors] = 6 AND [Memory] &gt; 8191)
THEN ('Gold')
ELSE IF([Processors] = 8 AND [Memory] &gt; 16383)
THEN ('Gold_Platinum')
ELSE ('tbd'))
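If it helps to sanity-check the bucketing logic, here is the same conditional translated to Python as a sketch. The inputs are hypothetical; the real evaluation happens inside Business Insight Advanced, not in this code:

```python
def vm_service_level(processors: int, memory_mb: int) -> str:
    """Bucket a VM into a service level by CPU count and configured memory (MB),
    mirroring the report's conditional expression."""
    if processors == 2 and memory_mb < 2049:
        return "Bronze"
    elif processors == 2 and memory_mb < 4097:
        return "Bronze_Platinum"
    elif processors == 4 and memory_mb < 8193:
        return "Silver"
    elif processors == 4 and memory_mb > 8193:
        return "Silver_Platinum"
    elif processors == 6 and memory_mb > 8191:
        return "Gold"
    elif processors == 8 and memory_mb > 16383:
        return "Gold_Platinum"
    return "tbd"  # the expression's final ELSE branch

print(vm_service_level(2, 2048))   # Bronze
print(vm_service_level(4, 16384))  # Silver_Platinum
```

Note that the boundaries leave a few gaps (for example, a 4-CPU VM with exactly 8193 MB of memory falls through to 'tbd'), which is worth checking if you adapt the expression to your own service levels.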
Business Insight Advanced will validate the conditional expression (nice to know if you got it right), create the column called VM Service Level, and populate it based on the query. (If you get an error, your conditional expression probably has a syntax or other error.)
You will see a new column added called VM Service Level with the various Service Levels for each VM based on the number of CPUs and the memory each has. (At this point there may be duplicates in the list, but not to worry; we are not finished formatting or grouping the report.)
Next, let's add a column that calculates the cost per VM based on the Service Levels we just established.
Select the Toolbox tab in the lower right corner and double-click the Query Calculation ICON.
In the Create Calculation dialog box, name the column Cost Per VM, select Other Expression, and click OK.
In the Data Item Expression dialog box, paste the conditional expression for Cost of VM (below) into the Expression Definition box and select OK.
Example of the conditional expression for Cost per VM:
IF([VM Service Level] = 'Bronze') THEN (10) ELSE (IF([VM Service Level] = 'Bronze_Platinum') THEN (15) ELSE IF([VM Service Level] = 'Silver') THEN (20) ELSE IF([VM Service Level] = 'Silver_Platinum') THEN (25) ELSE IF([VM Service Level] = 'Gold') THEN (40) ELSE IF([VM Service Level] = 'Gold_Platinum') THEN (55) ELSE (30))
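In effect, this expression is just a lookup table from Service Level to a rate. Here is the same mapping sketched in Python, using the example rates from the expression above (your actual rates would differ):

```python
# Rates taken from the lab's example expression; substitute your own.
COST_PER_SERVICE_LEVEL = {
    "Bronze": 10, "Bronze_Platinum": 15,
    "Silver": 20, "Silver_Platinum": 25,
    "Gold": 40, "Gold_Platinum": 55,
}

def cost_per_vm(service_level: str) -> int:
    # The expression's final ELSE (30) catches any level not in the table,
    # e.g. the "tbd" bucket.
    return COST_PER_SERVICE_LEVEL.get(service_level, 30)

print(cost_per_vm("Silver"))  # 20
print(cost_per_vm("tbd"))     # 30
```

Thinking of the cascade of IF/ELSE branches as a table makes it easier to review the rates with the customer before pasting them into the expression.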
You will see a new column added called Cost Per VM, with variable costs for each VM based on its Service Level.
Next, format the data in the Cost Per VM column to USD currency as you did above.
ADDING FIXED OVERHEAD COSTS TO YOUR CHARGEBACK REPORT
Let's say the customer has determined that the total cost of overhead (including items like heat/AC, floor space, power, rent, operations personnel, helpdesk, etc.) is $24 per VM. Let's create a column called Cost of Overhead and apply this fixed cost. (Note: you can do this for any fixed costs as well, rather than using SQL.)
Select the Toolbox tab in the lower right corner and double-click the Query Calculation ICON as above.
In the Create Calculation dialog box, name the column Cost of Overhead, select Other Expression, and click OK.
In the Data Item Expression dialog box, enter a cost of 24 in the Expression Definition box and select OK.
You will see a new column added called Cost of Overhead with 24 for each VM. (Note: at this point there may be duplicates in the list, but not to worry; we are not finished formatting or grouping the report.)
Next, format the data in the Cost of Overhead column to USD currency as you did above. Then drag the column header and drop it to the right of the Storage Cost column, as shown below.
Subtotaling, naming, and saving the report
Now that we have a cost per VM, overhead, and the cost of storage usage by Tenant, Application, and VM, let's sum the total costs and finish formatting the report by Tenant and Application.
Hold the control key down and select a numeric cell in each of the Cost Per VM, Storage Cost, and Cost of Overhead columns. Right-click one of the numeric cells, select Calculate, and select the add function for the three columns.
This creates a new column called "Cost Per VM + Storage Cost + Cost of Overhead" and calculates each row.
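The row-level arithmetic the report is now doing can be verified by hand. This Python sketch reproduces one row using hypothetical rates and sizes (not data from the lab environment):

```python
def total_cost(cost_per_vm: float, tier_cost_per_gb: float,
               provisioned_gb: float, overhead: float = 24.0) -> float:
    """Total Cost of Services for one VM row:
    VM service-level cost + storage cost + fixed overhead."""
    storage_cost = tier_cost_per_gb * provisioned_gb  # the Storage Cost column
    return cost_per_vm + storage_cost + overhead

# A hypothetical Silver VM ($20) on a tier charged $0.50/GB with 100 GB provisioned:
print(total_cost(20, 0.50, 100))  # 94.0
```

Every row of the report is this same sum; the grouping and totaling steps below then roll the rows up by Tenant and Application.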
Now format the column for USD currency and retitle it "Total Cost of Services".
Name the report "Total Storage, VM, and Overhead Cost by Tenant and Application Chargeback (Showback)" by double-clicking the title area.
Now save it to the Customer Report folder using the same name.
FORMATTING AND GROUPING THE REPORT BY APPLICATION AND TENANT
We are not done yet. Now we need to format the report by grouping, subtotaling, and totaling by Tenant and Application.
Hold the control key down and select the "Cost Per VM, Provisioned Capacity, Storage Cost, Cost of Overhead, and Total Cost of Services" columns.
Select the Total ICON from the Summary dropdown ICON.
If you page down to the bottom of the report, you will see total columns. We'll clean up the summary rows in a minute.
Let's group the report by Tenant and Application.
Hold the control key down and select the Tenant and Application columns.
Select the Grouping ICON from the top toolbar.
CLEANING UP THE REPORT AND RUNNING IT
To clean up the report, right-click and delete the summary ROWS (not columns).
Then go to the bottom of the report, hold the control key, select both summary rows, and right-click and delete them. (Leave the TOTAL rows.)
Save the report.
Now let's run the report to see how it looks.
Select the Run ICON from the toolbar and run the report as HTML. (Note the other formats you can run it in if you want…)
The report will display like this in its final format. I've paged down in the report below to show you the subtotals; you can page to the bottom to see the totals by company and the total of all resources charged.
These reports are extremely flexible to do what you need. Notice the drill-down link in the Tenant column (pictured above in the red circle). If you click the LINK, you will drill down from Tenant to Line of Business, then to Business Unit, etc. If you right-click the link, you can drill up as well.
You can now schedule this report to run and distribute in various formats, like any other OnCommand Insight Data Warehouse report.
Remember, now that you have created this report, every time you run it, it will provide the latest usage information. You can automate this report by scheduling it to run and emailing it to recipients, etc. Lots of flexibility…
7.2 OTHER OPTIONS FOR AD-HOC REPORTS USING QUERY STUDIO
You can also create simple ad-hoc reports by using Query Studio. A very simple example is shown here.
Log on to the data warehouse using admin/admin123. (You must be logged on as Admin to use Query Studio.)
From Public Folders, select the Chargeback Datamart.
Select the Launch menu in the upper right corner of the view and select Query Studio.
The Datamart is set up as a "simple Datamart" and an "advanced Datamart". The simple DM contains the elements that most users use for reports. The advanced DM contains all the Facts and Dimensions for all the elements. At this point, we'll create this report using the simple DM to show you how easy it is.
Expand the Simple DM and do the following:
Click and drag Business Unit to the palette.
Click and drag the Application element to the palette. You see the applications line up with their proper Business Units.
Click and drag Tier over to the palette to organize the storage usage by tier.
Click and drag "Provisioned Raw by GB". (You can select megabytes or terabytes as well as gigabytes; I've selected GB because this is from a volume perspective and an application perspective.)
To calculate cost, we need to add the "Tier Cost" to the report.
Click and drag the "Tier Cost" element over and place it between the Provisioned Raw and the Tier columns.
To filter out any storage without a tier cost associated, right-click the heading of the Tier Cost column and select Filter (see below for reference):
o Select "Show only the following" (default)
o Select "Missing values" to expand it
o Select "Leave out missing values"
o Select OK
See the results below.
Now let's calculate the total cost of usage by GB per application.
Hold the control key and highlight the "Provisioned Capacity" and Tier Cost columns until they show yellow.
Select the green Calculation ICON at the top of the edit icons above, or right-click on the columns and select "Calculate".
In the calculation window, select multiplication, title the new column "Cost for Storage", and click Insert. It creates a new column and completes the calculation.
To format the column, right-click on the new column and select Format Data.
Select currency, the number of decimal places (usually 0), and the 1000s separator, then click OK. See how the column is formatted now.
Double-click the "Title" on the report and re-title the report "Chargeback by Application and BU".
Now you don't really need the Tier Cost column, so you can delete it by right-clicking on the column and selecting Delete.
This is a good raw report, but now let's make it more useful.
To group storage cost by Business Unit and Application:
Select the Business Unit column (it turns yellow) and select the Group By ICON on the top line.
You see the report reformats itself into cost by application by business unit.
Click the "Save As" icon and save the report to the Public Folders.
Further editing
You can go back and further edit the report like this.
Let's filter out all the N/A values in the BU and Application columns. You have to do this one column at a time.
Right-click the BU column and select Filter.
In the filter dialog window, select "Do not show the following (NOT)" from the "Condition" dropdown.
Select N/A and click OK.
Do the same for the Application column.
Then save the report again.
As you see, you now have a better-quality report.
To exit Query Studio, click the "Return" icon at the top right corner of the screen.
8 SCHEDULING REPORTS FOR DISTRIBUTION
OK, now that the report is saved, let's schedule it for running and distribution. You can schedule all the built-in reports in OnCommand Insight.
Go to the chargeback report we just created (you should be looking at where you saved it…).
Select the schedule ICON on the right-hand side, where you can set the properties.
As you see on the right, you can schedule the start and finish dates.
You can send this report just one time by clicking Disable.
Set the schedule options for weekly, daily, monthly, etc. Schedule this report to run and send it to yourself at 3 p.m. every Tuesday until Feb 1, 2012. As you can see, you can schedule biweekly, several times a week, or several times a day, or you can set it up by month, year, and even by trigger. As you see, lots of options.
There are a lot of options for report format. The default format is HTML, but we can override that default by clicking and choosing from PDF, Excel, XML, CSV, etc.
For delivery, we can email it, save it, or print the report to a specific printer. You can send the report via e-mail to users, distribution lists, etc. We can include a link to the report or attach it directly to the email as well. NOTE: Recipients must be able to log in to the OnCommand DWH to access the link.
When you are done, click OK, and the schedule is set.
9 ENDING COMMENTS AND FEEDBACK
I hope this lab was of value to you. Your feedback is important to the quality of this lab document. Please provide feedback to Dave Collins at davec@netapp.com.
NetApp provides no representations or warranties regarding the accuracy, reliability, or serviceability of any information or recommendations provided in this publication, or with respect to any results that may be obtained by the use of the information or observance of any recommendations provided herein. The information in this document is distributed AS IS, and the use of this information or the implementation of any recommendations or techniques herein is a customer's responsibility and depends on the customer's ability to evaluate and integrate them into the customer's operational environment. This document and the information contained herein may be used solely in connection with the NetApp products discussed in this document.
© 2012 NetApp, Inc. All rights reserved. No portions of this document may be reproduced without prior written consent of NetApp, Inc. Specifications are subject to change without notice. NetApp, the NetApp logo, Go further, faster, xxx, and xxx are trademarks or registered trademarks of NetApp, Inc. in the United States and/or other countries. All other brands or products are trademarks or registered trademarks of their respective holders and should be treated as such. TR-XXX-XX
Select exchange_ny1 from the main menu above. View the performance and distribution of both HBAs. If you select one or the other, the performance and distribution charts change to show you the details of what you've selected. This function is the same throughout the OnCommand Insight GUI.
4.3 CANDIDATES FOR HOST VIRTUALIZATION BASED ON ACTUAL PERFORMANCE
This performance view, from the host SAN perspective, shows you which are the busiest servers and which are candidates for virtualization.
Toggle off the 2 performance charts.
Collapse the expanded columns using the Collapse All Groups ICON on top of the main view.
Expand Hosts again. Notice your busiest servers are at the top of the list.
Use the vertical slide bar to go to the bottom of the host list to see your least busy hosts. As you see here, there are many hosts down near the bottom that have hardly any traffic. Note: If you have a virtualization project going on, you can very quickly isolate which physical hosts don't have much traffic to the applications, and conduct your due diligence on those applications for possible relocation to a VM environment.
You can also use the same information here to choose which ESX hosts are good candidates to move those applications to, based on how much traffic they are generating on the SAN.
4.4 STORAGE ARRAY PERFORMANCE BASED ON SAN TRAFFIC
We use the same logic and methods to optimize the traffic across the storage ports of the arrays.
Collapse the Hosts section and expand the Storage section.
You can see the busiest arrays at the top.
Expand storage array XP 1024 to see the traffic flow through the storage ports. In this case, over 80% of the traffic is going across two of the six ports on the storage array. Not very well balanced. You can rebalance this traffic, OR, using this information, you can select a lesser-used storage port to provision your NEXT Tier 1 application to. This helps you intelligently provision and optimize your environment using real traffic analysis.
4.5 STORAGE TIERING AND ANALYSIS
Just as you did with the hosts, you can see the storage arrays that are NOT so busy.
Scroll to the bottom of the Storage Array list.
There are several expensive Tier 1 Symmetrix and other arrays at the bottom of this list that have very little traffic accessing them. These arrays may have lots of data on them, but nobody's using it. Armed with this information, you could take a look at the application data on these expensive Tier 1 arrays and move the applications to less expensive Tier 2 or Tier 3 arrays, OR archive the data. Then you can decommission or repurpose these expensive arrays. (LOTS of ROI potential here.)
4.6 SWITCH ISL TRAFFIC VISIBILITY AND OPTIMIZATION
OnCommand Insight shows you only the ISLs (Inter-Switch Links) under the Switches category.
Collapse the Storage category in the main view and expand the Switch category.
Expand Switch 78 and hcis300.
As we saw with hosts and arrays, we can see exactly how well balanced the traffic is across the ISLs. Switch hcis300 is well balanced, but on Switch 78 we see that 90% of the traffic is going across one switch link and only 9% across the other. If this is a trunk, it is severely out of balance.
We also see which are the busiest and least busy switches. This allows us to balance out (optimize) our environment as well as weed out the least busy switches.
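The imbalance figures are just each link's share of the total trunk traffic. A quick Python sketch with hypothetical throughput numbers (Insight computes these from real SAN telemetry):

```python
def link_shares(throughputs):
    """Percent of total ISL traffic carried by each link in a trunk."""
    total = sum(throughputs)
    return [round(100 * t / total, 1) for t in throughputs]

# Hypothetical MB/s figures for a two-link trunk like Switch 78's:
print(link_shares([450, 45]))  # [90.9, 9.1]
```

A healthy trunk should show shares close to equal; a 90/9 split like the one above is the visual cue to rebalance.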
4.7 VIRTUAL MACHINE AND DATA STORE PERFORMANCE:
TROUBLESHOOTING END-TO-END PERFORMANCE ISSUES USING "ANALYZE
PERFORMANCE"
Let's put all this performance information to good use.
USE CASE: I may have gotten a call from a user complaining that the application on VM-70 is running slow, or I may have received an alert from a threshold being breached. Let's troubleshoot the problem.
Select Virtual Machine Performance.
Select Custom from the "Timeframe" dropdown menu next to the grouping menu. Enter the dates January 1, 2012 through now.
Then hit the green recycle button next to the dropdown.
Sort the VM Disk Top Latency column in descending order to get the longest latency at the top.
Here we see that VM-70 does not, in fact, appear to have any performance issues, but we do see very high CPU, memory, and datastore latency on VM-60 and VM-61.
Look at column 2. The common factor between VM-70 (the user complaint) and VM-60 is DS-30.
Open the Datastore Performance microview to validate the high latency time.
Right-click on VM-60 and select Analyze Performance.
This opens an analysis of everything associated with VM-60 and DS-30.
See the tabs across the top of the window. Each of these tabs provides in-depth visibility into performance within each category.
Selecting the Disk tab, I see that although I have a few high "top" utilizations, overall utilization and IOPS are relatively low, so I can rule out a hot-disk issue.
Select the Volumes tab and the Internal Volumes tab. I see there are some relatively high top response times but still very low IOPS, which tells me other factors are affecting response time and the slowness of the application on VM-70.
Select Backend Volumes. We see the storage is virtualized, and we can see the performance of the backend volumes here. I see some possibly higher IOPS, but still no glaring performance issues.
To make sure I don't have a SAN problem, I select the Switch Performance tab. It shows an imbalance between the 2Gb HBAs on ESX1, where VM-60 and VM-70 are, and a potential optimization or outage issue, but no gridlock.
Select the Hosts tab. This tab shows me that host ESX 1 is the same host that holds VM-60 and VM-70. VM-60 appears to be causing very high CPU and memory usage, which is causing contention with time sharing during disk access, thus creating high disk latency. But the disk IOPS are still very low.
Deduce that VM-60 is probably not sized right for the application that is driving it hard. This is probably what's causing the disk latency issue, so the chances of a disk issue are slim.
4.8 VM PERFORMANCE
VM Performance helps you troubleshoot the same scenarios. Here you can understand what's going on in the whole environment.
Select Virtual Machine Performance.
Sort the Top Disk Latency column in descending order so the largest latency rises to the top. In this case, VM-61 here is chewing up a lot of memory and a lot of CPU time but using low disk IOPS. The VM appears to be causing the latency issues.
Select VM-61. You can open a microview and see the VMDK performance as well.
Add a chart microview.
You can also break it out by volume performance and datastore performance, thus giving you a more holistic picture of the environment and helping you troubleshoot to resolution.
The takeaway is that you can troubleshoot performance issues from many different angles and go in many different directions to quickly narrow down the problem.
4.9 APPLICATION AND HOST PERFORMANCE
You can add your applications and hosts to any of these performance views to help you understand how your performance is affecting your applications. That is important to the business customer. You can drill down and understand where the performance issue is, with visibility from the application all the way to the disks.
Scroll down to ESX1.
Use the horizontal slide bars in the main and microviews to see the performance info.
OnCommand Insight shows you performance from the host perspective all the way back to the storage, but remember, it does not have agents on the hosts, so it cannot show you the details of the performance on the host itself.
Review questions:
What is the value of Analyze Performance?
In what areas can we view performance metrics under Analyze Performance?
5 PLANNING TOOLS
5.1 TASK AND ACTION PLANNING AND VALIDATION
OnCommand Insight has 2 planning tools to help you plan, validate, and monitor changes in your environment. One is a change management tool, and the other is a migration tool for switches only.
The change management tool (or What-If) helps you create tasks, and actions within those tasks, using a wizard. It helps you logically configure the changes that you need to make, test and validate those changes before you make them, and monitor the progress of changes as you make them. This significantly reduces your risk when making changes, because you can pretest them before you make any actual changes in your environment.
NOTE: Remember, OnCommand Insight is a READ-ONLY tool, so it does not perform any active tasks. Use it in the planning, validation, and execution monitoring of your change management.
Select the Planning menu.
Select Plans to access the tool.
Select the task ID oadmin 01082007 – Replace HBA Clearcase1.
Notice the Actions list for the task. These are generated by you to help you logically and accurately list out the tasks.
To add more actions, simply right-click in the action area and select "Add Action".
In the new action window, scroll down and select the action you want to perform. You can add a description and other parameters, then select OK.
Then you can pre-validate the actions to ensure you know the results of each action BEFORE you actually perform the task. To do this, right-click the task and select Validate Task.
As you see below, OnCommand Insight validates each action against the current configuration in your environment, showing what has been completed correctly (GREEN CHECKMARK), what is not completed (BLANK BOX), and what is not completed correctly (RED X).
When you build the action list, OnCommand Insight automatically compares your planned changes to your existing environment and anticipates any future violations that could occur if you made these changes without correcting the planned actions, OR violations that already exist in your environment.
Once you have completed your list of action items, you can right-click and validate the actions as many times as you want until they are completed. OnCommand Insight validates every one of these actions. It will show you whether the actions are completed correctly, done wrong, or not completed at all. It gives you a preview of potential issues before you make the changes, thus lowering your risk.
52 SWITCH MIGRATION TOOL
The migration tool provides you with instantaneous visibility into all of the environment and business entities that will be affected by a migration to new or updated switches Say you want to just update the firmware on a switch What-ifhellip it goes down in the middle of the upgrade What does it affect in your environment Knowing this ahead of time can reduce your risk by giving you the complete picture of whom and what will be affected by the interruption
The Migration tool allows you to tell OnCommand Insight which switches that you want to upgrade or replace Because OnCommand Insight knows all hosts storage arrays volumes business units applications that are affected by this change it can provide you with the current violations as well future violations that will occur when the switches pulled out This enables you to validate the total impact of the changes you want to make BEFORE you make them so you can reduce your risk by fixing issues before they occur
NOTE Remember OnCommand Insight is a READ ONLY tool so it does not perform any migration tasks Use it in the planning and execution monitoring of your migration
Under the Planning menu, select Migrations. This shows you the migration tasks already created and the impact of the existing proposed changes on your business entities.
To add a new task, right-click in the task area and select Add Task.
Complete the task details above and click Next to select the switch(es) to migrate.
Select the switches to be updated or replaced and click Finish.
Select the new task in the main screen and use the microviews to see the affected paths, impact, and quality assurance views.
Using this information, you can migrate switches faster because it cuts due-diligence time, and you lower your risk because you know the impacts before you take any action.
NOTE: Remember, OnCommand Insight is a READ-ONLY tool, so it does not perform any migration tasks. Use it in the planning and execution monitoring of your migration.
6 DATA WAREHOUSE
6.1 INTRODUCTION AND OVERVIEW
Let's introduce you to the data warehouse. We'll talk about the Datamarts and navigation, then we'll go into the reports, and we'll finish by showing you how to create ad-hoc reports using Query Studio.
The data warehouse is made up of several Datamarts. Datamarts are sets of data that relate to each other.
Open a browser and go to http://localhost:8080/reporting.
Log on using admin/admin123.
If you receive this page, uncheck the "show this page…" option and select My Home.
Data warehouse (DWH) Home Page Public Folders
The data warehouse (DWH) has several built-in Datamarts. Above you see the three primary Datamarts: the Chargeback Datamart, the Inventory Datamart, and the Storage Efficiency Datamart. Additionally, there are two folders that contain other Datamarts for Capacity and Performance.
Select the Capacity 6.3 folder.
As you can see, there are other capacity-related Datamarts here, including the Internal Volume, Volume, Storage and Storage Pool, and VM Capacity Datamarts. Datamarts provide you with easy-to-use data elements related to those specific categories, making it easier for you to use the existing reports and, more importantly, helping you create your own custom reports using the drag-and-drop technology we'll show later in this lab.
Select the Storage Capacity Datamart
There are four folders located within EVERY Datamart. Most built-in reports are in the Reports folder. Any custom reports you create MUST be saved in the Customer Report or Customer Dashboard folders in order to preserve them during upgrades.
Select Dashboards (notice the BREADCRUMBS to help you navigate)
Which dashboards are located in the folder?
6.2 PLAN - CAPACITY FORECAST DASHBOARD
The data warehouse has over 200 built-in dashboards and reports. Let's take a look at a few.
The capacity forecast dashboard provides a history of how storage has been used, as well as trends and forecasts into the future. It shows usage by data center and by tier.
Select the Capacity Dashboard. This may take a bit of time to paint, so be patient.
The capacity forecast dashboard provides you with trending and forecasting of your capacity across your entire environment. NOTE: your data may vary from the picture depending on the demo database you are using and the date (because it's a trending chart).
While we are at it, let's also stage the tiering dashboard in a new window by holding the Shift key and selecting the Tiering Dashboard, so we can discuss it as well in a few minutes.
When it first opens, you see in the upper left the Capacity Consumption Forecast report by datacenter and tier. The initial view shows how much storage is left in each datacenter, by tier, before it reaches 80% (adjustable by the user) of capacity. The graph on the right depicts the usage trending and forecasting over time. The "Reset Selection" button resets the graphic to show storage trending across the entire enterprise.
Select the Tokyo Gold-Fast block on the matrix. Notice that the graph at the right changes to reflect the storage consumption trending and forecasting for that tier at that datacenter.
Reset the Storage Capacity Trend chart by clicking Reset Selection next to the matrix chart. The chart on the right will then show the trending and forecasting for the entire enterprise.
Scroll down the dashboard to view the list of reports on the right side. Each of the dashboards has a list of related reports on the lower right-hand side. You can select from any number of different reports to provide the detailed information that you need.
The dashboard also contains some dial graphics showing you storage consumption and capacity in your enterprise and in each datacenter.
Continuing down the left side of the dashboard, these charts show you business-level storage consumption by business entity. Here we can drill down to see usage by Tenant, Line of Business, Business Unit, and Project.
Right-click in this graphic and you can drill down to view storage usage by line of business; drill again to see it by business unit and by project.
As you can see, you quickly get very detailed information on consumption by your business entities: Tenant, LOB, Business Unit, Project, and Application.
6.3 TIER DASHBOARD
Let's take a look at the Tier Dashboard that we opened a few minutes ago by selecting it from the taskbar at the bottom of your Windows screen.
Note: Your data may vary depending on the database used for this demo.
This dashboard gives us a different perspective on how storage is growing and how it is being used. As you see, the Gold tier has remained relatively stable over the past few months, while Gold-Fast storage, which is more expensive, has grown considerably over the past couple of months. This tells you how your tiering initiatives are progressing. Bronze, which has hardly grown at all, could be an indication that we're spending too much money on storage. You might want to review your storage usage using OnCommand Insight to see how the storage is being consumed and by whom.
Scroll down and let's look a little closer. OnCommand Insight shows storage usage by business unit, application, and tier. This enables you to understand how storage is being used. You can also view usage by data center, tier, and business entity.
As in the last report, you can right-click and drill down to look at consumption by tenant, line of business, business unit, project, and application. You can understand how your data is being consumed at multiple levels and from multiple aspects.
Select the "Return" icon at the top right of the Tier Dashboard to return to the folder.
There is a new storage tier report located in the Storage and Storage Pool Datamart. Let's take a quick look at it.
Use the breadcrumbs to navigate back to the Capacity 6.3 folder.
Then select the Storage and Storage Pool Capacity Datamart and the Reports folder.
Next, select the Storage Capacity By Tier report to view the report below. This report shows your capacity by tier and how it trends over time. It also provides detail and summary sections at the bottom showing each array's tiers, how much capacity is used, and the percentage (lots of information on a single report).
6.4 ACCOUNTABILITY AND COST AWARENESS
The standard data warehouse chargeback reports are more about accountability than about full chargeback. We'll show you this now. We'll also show you how to create your own powerful "custom" chargeback/showback reports using Business Insight Advanced later in this lab.
Select Public Folders in the breadcrumbs at the top left of the data warehouse window.
Select the Chargeback Datamart.
In the Chargeback Datamart, select the Reports folder to access various reports that show capacity and accountability.
Select Capacity Accountability by Business Entity and Service Level Detail. Here you have the option to customize this report to your needs by selecting service levels, resource types, applications, and host and storage names. You also have the option of selecting the business entity by using the dropdown to select any or all of the business entities and projects.
Select all in each category to get a good representation of the in-depth reporting. Then click Finish.
The report provides a very detailed view of capacity utilization by business entity and application, including the host it's running on, the storage array, the volume, and the actual provisioned and used storage. The report is grouped by business unit as well as by application, which gives you a good representation of who's using what storage.
Note the scroll bar for scrolling on page 1; you can also use the Page Up/Page Down links at the bottom to go to page 2 and so on.
Select the Return icon in the upper right to return to the folder of reports.
6.5 UNCHARGED STORAGE
You can also generate reports that help you understand what storage is NOT being accounted for.
Select "Capacity Accountability by Uncharged Capacity per Internal Volume." This provides you with a complete listing, by array and volume, of how much storage is not being charged or accounted for.
You get FULL accountability of which storage is being accounted for and which storage is NOT, across the entire enterprise, regardless of storage vendor.
6.6 IOPS VS CAPACITY REPORTING IN THE DATA WAREHOUSE
Let's look at performance versus capacity and at orphaned storage by last access. This adds another dimension to how your storage is being used.
Open the Performance Datamart (hint: use the breadcrumbs to select Public Folders and then select the Performance Datamart).
Select the Internal Volume Daily Performance folder. This provides a really good pictorial view of how your storage is being used.
Select Reports and select Allocated used internal volume Count by IOPS Ranges. This provides a very interesting capacity-versus-IOPS report.
Select the Last Year time period.
Select all storage models and tiers and click Finish.
Selecting all arrays and all tiers gives you a full view of how your storage is being used (or not being used…).
Look at the results. Remember, this is storage accessed over the past year. The resulting report shows you all the storage that has (or has not) been accessed over the past year.
As you can see from the first bar, there are over 7,300 volumes that have not been accessed in the past year. If we look at it in terms of size, over 3.4 PB has had zero access in the past year. Note: this is actual customer data, but the names have been sanitized.
You can see how impactful this is. There is over 3.4 PB of storage that has had zero use for a year. This information enables you to start making business decisions about the storage and to better understand how it's being used, so you can reclaim and repurpose some of that storage. (Talk about ROI!)
6.7 DIGGING INTO THE DETAILS
These charts are really nice, but you need the details to work effectively on identification and recovery. OK, let's go look at the underlying details.
Go back to the Volume Daily Performance 6.3 folder and drill down to Reports (hint: it's in the Performance Datamart).
Select the Array Performance report. This gives you a complete breakdown of the performance for all storage, from the arrays all the way down to the volumes.
Select one year and set the IOPS parameter you want to filter on (I usually start with the default).
This report starts with the Orphan Summary.
Page down to view the storage array summary.
As you see, this is pretty high-level. It shows the total amount of raw and allocated capacity in each storage device versus the total IOPS and the max IOPS actually used over the past year. This tells a very compelling story, but it's still high level.
Page down a few pages to reach the bottom of this section. You will see a glossary of terms explaining the column headings.
Now continue to page down to the Host tables. These show you the hostname, the raw and allocated capacity by host, and the IOPS over the past year. This is more detail than the Storage tables above.
Page down past the host tables and you get the orphaned-volumes perspective. Here is a great deal of detail that you can use. These are all the volumes that have not been accessed in a full year. It shows you the array name, volume capacities, and hostname, as well as the applications and tiers, for everything that has not been accessed in the last year.
Page down to the "Volume by IOPS" tables (they may be several pages down). These show you the storage array, volume, capacity, host, application, tier, and the max and total IOPS. So we can say it's a pretty well-rounded report that shows you actual usage (or lack thereof) so you can go reclaim the storage that is not used.
6.8 VM CAPACITY REPORTING
There are several different reports in the VM Capacity Datamart.
Navigate to the VM Capacity 6.3 Datamart.
As you see, we have several reports built in here already.
Select VM Capacity 6.3 and then navigate into the Reports folder.
Select VM Capacity Summary.
Select all so we see the VM capacity across the entire enterprise (spanning multiple vCenters).
The results show all the VMs, their capacity, the datastore, the actual capacity, the VM names, the provisioned storage, and the commit ratio of each VM across your entire environment. NOTE: I paged down to the bottom so you can see the total storage and commitment across your whole enterprise, plus a glossary of terms.
Select the "return" button in the upper right corner of the report (it looks like a left-turn arrow).
Next, select the Inactive VMs report to show VMs that have not been accessed in a defined period of time (default 60 days).
Set this time threshold and click Finish.
This is an excellent report showing you which VMs are powered off and how long they have been powered off, as well as how much capacity each one of them is holding that nobody else can use. It gives you all the details, including the datacenter, VM, OS, ESX host, cluster, VMDK, and how long it's been powered off. Armed with this information, you can go recover these VMs and reclaim the storage.
7 CREATE AD-HOC REPORT
Let's show you how easy it is to create custom reports in the data warehouse.
7.1 HOW TO CREATE A CUSTOM SHOWBACK/CHARGEBACK REPORT USING BUSINESS INSIGHT ADVANCED
Below is a great example of the custom chargeback or showback report that you will create. It shows usage by business entity and application, including the variable cost of each VM based on its configuration, fixed overhead, and storage usage.
STEPS TO CREATE THIS REPORT
Watch a video on how to create this report. Note: You need a user name and password for this community. To obtain them, click the Become a Member link.
The OnCommand Insight Reporting Portal is accessed through http://<reporting-server>:8080/reporting.
Enter your user name and password credentials.
From the Welcome page, select My home.
From the Launch menu (at the top right corner of the OnCommand Insight Reporting portal), select Business Insight Advanced.
From the list of all packages that appears, click the Capacity <version> folder and then click VM Capacity <version>.
Create a new report by selecting New from the dropdown in the upper left corner, or Create New if you are on the Business Insight Advanced landing page.
From the predefined report layouts in the New pop-up, choose List and click OK.
In the lower right pane, select the Source tab and expand the Advanced Data Mart from the VM Capacity package.
From the Advanced Data Mart, expand Business Entity Hierarchy and Business Entity, then drag Tenant onto the report work area.
Collapse the Advanced Data Mart and expand the Simple Data Mart.
From the Simple Data Mart, drag Application and place it on the report work area to the right of the Tenant column. (TIP: Make sure you place it on the blinking gray BAR to the right of the previous column, or it will give you an error.)
Now we are going to drag multiple columns to the palette to save time building the report.
We will be reporting on the total number of processors (cores) and the memory configured for each VM. So let's grab the following elements from the VM Dimension under the Advanced Data Mart.
From the Advanced Data Mart, expand the VM Dimension.
Select the next columns IN THE FOLLOWING ORDER.
From Advanced Data Mart > VM Dimension, hold the Control key and select the following columns (in order):
o VM Name
o Processors
o Memory
Click and drag VM Name and place it on the report work area to the right of the Application column. NOTE: All the columns should follow in the order you selected them, similar to the screenshot below (your data will differ, but the columns will be the same).
Now let's bring capacity information onto the report.
From the Simple Data Mart, hold the Control key and select the following columns (in order):
o Tier
o Tier Cost
o Provisioned Capacity (GB)
Click and drag the Tier column and place it on the report work area to the right of the Application column. NOTE: All the columns should follow in the order you selected them, similar to the screenshot below (your data will differ, but the columns will be the same).
To create a summary of cost per GB, hold the Control key and select Tier Cost and Provisioned Capacity (GB).
Then right-click the Provisioned Capacity column, select Calculate, and choose the multiplication calculation.
Business Insight Advanced creates a new column for you, completes the calculation, and puts it in the report.
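Conceptually, the new column is just Tier Cost (per GB) multiplied by Provisioned Capacity (GB) for every row. As a minimal sketch of the same calculation outside the tool (the sample rows and values below are hypothetical, not from the demo database):

```python
# Each sample row pairs a tier's cost per GB with the capacity provisioned
# on that tier. These values are made up for illustration only.
rows = [
    {"tier": "Gold", "tier_cost": 4.0, "provisioned_gb": 500.0},
    {"tier": "Bronze", "tier_cost": 1.0, "provisioned_gb": 1200.0},
]

# Mirrors the multiplication column Business Insight Advanced creates.
for row in rows:
    row["storage_cost"] = row["tier_cost"] * row["provisioned_gb"]

print([row["storage_cost"] for row in rows])  # [2000.0, 1200.0]
```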
Next, let's format and retitle the column.
Right-click the new column heading and select Show Properties.
In the lower right corner, scroll down to the bottom of the properties box and select the ellipsis on the Data item name box. Change the name to Storage Cost and click OK.
Note that the column heading is now Storage Cost.
Now select one of the numeric values in that column and select the Data Format ellipsis from the properties box in the lower right corner.
From the Data Format dialog box, select Currency from the Format type dropdown.
As you see from the properties dialog box, there are lots of options you can set to format the currency numbers in this column. The default is USD, so let's just click OK to accept it. You will see the column reformat to USD.
Here is our current report. Let's filter out storage that is NOT being charged.
Select any BLANK cell in the Tier Cost column and click the filter icon in the top toolbar.
Select Exclude Null.
Notice that all the rows whose tiers had NO cost associated with them are removed, leaving you with only the storage that has charges associated with it. (TIP: in another report, you can actually reverse the logic and show only the storage that is NOT being charged…)
You can also format the Tier Cost column as USD currency if you want.
OK, that was easy, but it's not complete. Let's add other cost factors into your chargeback report: the cost of VM service levels by configuration, and the fixed overhead costs used by each application.
ADDING VARIABLE COSTS PER VM TO YOUR CHARGEBACK REPORT
Let's say the customer wants to charge per VM based on the number of CPUs and the memory it's configured with. To do that, we first need to create a VM Service Level based on the number of CPUs and the memory configured for each VM, then allocate a cost per service level.
To create a VM Service Level, we are going to drop in a small conditional expression that builds the service level for each VM. This is an easy example of the flexibility of Business Insight Advanced in creating reports. (DON'T panic: you can skip the conditional expression and just put a fixed cost on each VM if you want; see the overhead example later on… but humor me here in this lab.)
Select the Tier column to mark where we want to insert the new columns.
Select the Toolbox tab in the lower right corner and double-click the Query Calculation icon.
In the Create Calculation dialog box, name the column VM Service Level, select Other Expression, and click OK.
In the Data Item Expression dialog box, copy and paste the following VM Service Level conditional expression into the Expression Definition box and select OK. (Note: if you are remoted into the OnCommand Insight server, you may have to create a text document on the OnCommand server desktop to paste this into first, before pasting it into the Expression box.)
Below is an example of the conditional expression that gives you the if-else logic for VM Service Level:
IF ([Processors] = 2 AND [Memory] < 2049)
THEN ('Bronze')
ELSE (IF ([Processors] = 2 AND [Memory] < 4097)
THEN ('Bronze_Platinum')
ELSE IF ([Processors] = 4 AND [Memory] < 8193)
THEN ('Silver')
ELSE IF ([Processors] = 4 AND [Memory] > 8193)
THEN ('Silver_Platinum')
ELSE IF ([Processors] = 6 AND [Memory] > 8191)
THEN ('Gold')
ELSE IF ([Processors] = 8 AND [Memory] > 16383)
THEN ('Gold_Platinum')
ELSE ('tbd'))
Business Insight Advanced validates the conditional expression (nice to know if you got it right), creates the column called VM Service Level, and populates it based on the query. (If you get an error, your conditional expression probably has a syntax or other problem.)
You will see a new column added called VM Service Level, with the service level for each VM based on the number of CPUs and the memory each has. (At this point there may be duplicates in the list, but not to worry; we are not finished formatting or grouping the report.)
Next, let's add a column that calculates the cost per VM based on the service levels we just established.
Select the Toolbox tab in the lower right corner and double-click the Query Calculation icon.
In the Create Calculation dialog box, name the column Cost Per VM, select Other Expression, and click OK.
In the Data Item Expression dialog box, paste the conditional expression for Cost of VM (below) into the Expression Definition box and select OK.
Example of the conditional expression for Cost per VM:
IF ([VM Service Level] = 'Bronze') THEN (10)
ELSE (IF ([VM Service Level] = 'Bronze_Platinum') THEN (15)
ELSE IF ([VM Service Level] = 'Silver') THEN (20)
ELSE IF ([VM Service Level] = 'Silver_Platinum') THEN (25)
ELSE IF ([VM Service Level] = 'Gold') THEN (40)
ELSE IF ([VM Service Level] = 'Gold_Platinum') THEN (55)
ELSE (30))
You will see a new column added called Cost Per VM, with a variable cost for each VM based on its service level.
Next, format the data in the Cost Per VM column as USD currency, as you did above.
ADDING FIXED OVERHEAD COSTS TO YOUR CHARGEBACK REPORT
Let's say the customer has determined that the total cost for overhead (including items like heat/AC, floor space, power, rent, operations personnel, helpdesk, etc.) is $24 per VM. Let's create a column called Cost of Overhead and apply this fixed cost. (Note: you can do this for any fixed cost, rather than using SQL, as well.)
Select the Toolbox tab in the lower right corner and double-click the Query Calculation icon, as above.
In the Create Calculation dialog box, name the column Cost of Overhead, select Other Expression, and click OK.
In the Data Item Expression dialog box, enter a cost of 24 in the Expression Definition box and select OK.
You will see a new column added called Cost of Overhead, with 24 for each VM. (Note: at this point there may be duplicates in the list, but not to worry; we are not finished formatting or grouping the report.)
Next, format the data in the Cost of Overhead column as USD currency, as you did above. Then drag the column header and drop it to the right of the Storage Cost column, as shown below.
Subtotaling, naming, and saving the report
Now that we have the cost per VM, the overhead, and the cost of storage usage by tenant, application, and VM, let's sum the total costs and finish formatting the report by tenant and application.
Hold the Control key down and select a numeric cell in each of the Cost Per VM, Storage Cost, and Cost of Overhead columns. Right-click one of the numeric cells, select Calculate, and choose the add function for the three columns.
This creates a new column called "Cost per VM + Storage Costs + Cost of Overhead" and calculates each row.
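Row by row, the new column is just the sum of the three cost columns. A quick sketch using the lab's $24 fixed overhead and hypothetical VM and storage costs:

```python
def total_cost_of_services(cost_per_vm: float, storage_cost: float,
                           cost_of_overhead: float = 24.0) -> float:
    """Mirrors the "Cost per VM + Storage Costs + Cost of Overhead" column.

    The 24.0 default is the fixed per-VM overhead used in this lab; the
    other two inputs come from the columns built earlier.
    """
    return cost_per_vm + storage_cost + cost_of_overhead

# Hypothetical row: a Gold VM ($40) carrying $2,000 of storage charges.
print(total_cost_of_services(40.0, 2000.0))  # 2064.0
```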
Now format the column as USD currency and retitle it "Total Cost of Services."
Name the report "Total Storage, VM, and Overhead Cost by Tenant and Application Chargeback (Showback)" by double-clicking the title area.
Now save the report to the Customer Report folder using the same name.
FORMATTING AND GROUPING THE REPORT BY APPLICATION AND TENANT
We are not done yet. Now we need to format the report by grouping, subtotaling, and totaling by tenant and application.
Hold the Control key down and select the Cost Per VM, Provisioned Capacity, Storage Cost, Cost of Overhead, and Total Cost of Services columns.
Select the Total icon from the Summary dropdown.
If you page down to the bottom of the report, you will see the totals. We'll clean up the summary rows in a minute.
Let's group the report by Tenant and Application.
Hold the Control key down and select the Tenant and Application columns.
Select the grouping icon from the top toolbar.
CLEANING UP THE REPORT AND RUNNING IT
To clean up the report, right-click and delete the summary ROWS (not columns).
Then go to the bottom of the report, hold the Control key, select both summary rows, then right-click and delete them. (Leave the TOTAL rows.)
Save the report
Now let's run the report to see how it looks.
Select the Run icon from the toolbar and run the report as HTML. (Note the other formats you can run it in if you want…)
The report will look like this in its final format. I've paged down in the report below to show you the subtotals; you can page to the bottom to see the totals by company and the total of all resources charged.
These reports are extremely flexible and can do what you need. Notice the drill-down link in the Tenant column (pictured above in the red circle). If you click the LINK, you will drill down from Tenant to Line of Business, then to Business Unit, and so on. If you right-click the link, you can drill up as well.
You can now schedule this report to run and be distributed in various formats, like any other OnCommand Insight data warehouse report.
Remember, now that you have created this report, every time you run it, it will provide the latest usage information. You can automate it by scheduling it to run and be emailed to recipients. Lots of flexibility…
7.2 OTHER OPTIONS FOR AD-HOC REPORTS USING QUERY STUDIO
You can also create simple ad-hoc reports by using Query Studio. A very simple example is shown here.
Log on to the data warehouse using admin/admin123 (you must be logged on as admin to use Query Studio).
From Public Folders, select the Chargeback Datamart.
Select the Launch menu in the upper right corner of the view and select Query Studio.
The Datamart is divided into a "simple Datamart" and an "advanced Datamart." The simple DM contains the elements that most users use for reports. The advanced DM contains all the facts and dimensions for all the elements. At this point we'll create this report using the simple DM to show you how easy it is.
Expand the simple DM and do the following:
Click and drag Business Unit to the palette.
Click and drag the Application element to the palette. You see the applications line up with their proper business units.
Click and drag Tier over to the palette to organize the storage usage by tier.
Click and drag "Provisioned Raw by GB" over. (You can select megabytes or terabytes as well as gigabytes; I've selected GB because this is from a volume and application perspective.)
To calculate cost, we need to add the "Tier Cost" element to the report.
Click and drag the "Tier Cost" element over and place it between the Provisioned Raw and Tier columns.
To filter out any storage without an associated tier cost, right-click the heading of the Tier Cost column and select Filter (see below for reference):
o Select "Show only the following" (the default)
o Select "Missing values" to expand it
o Select "Leave out missing values"
o Select OK
See the results below.
Now let's calculate the total cost of usage by GB per application.
Hold the Control key and highlight the "Provisioned Capacity" and "Tier Cost" columns until they show yellow.
Select the green calculation icon at the top of the edit icons, or right-click the columns and select "Calculate."
In the calculation window, select multiplication, title the new column "Cost for Storage," and click Insert. It creates a new column and completes the calculation.
To format the column, right-click the new column and select Format Data.
Select currency, the number of decimal places (usually 0), and the 1000s separator, then click OK. See how the column is formatted now.
Double-click the "Title" on the report and retitle the report "Chargeback by Application and BU."
Now you don't really need the Tier Cost column, so you can delete it by right-clicking the column and selecting Delete.
This is a good raw report, but now let's make it more useful.
To group storage cost by business unit and application:
Select the Business Unit column (it turns yellow) and select the Group By icon on the top line.
You see the report reformat itself into cost by application within business unit.
Click the "Save As" icon and save the report to the public folders.
Further Editing
You can go back and further edit the report like this:
Let's filter out all the N/A values in the BU and Application columns. You have to do this one column at a time.
Right-click the BU column and select Filter.
In the filter dialog window, select "Do not show the following (NOT)" from the "Condition" dropdown.
Select N/A and click OK.
Do the same for the Application column.
Then save the report again.
As you see, you now have a better-quality report.
To exit Query Studio, click the "Return" icon at the top right corner of the screen.
8 SCHEDULING REPORTS FOR DISTRIBUTION
OK, now that the report is saved, let's schedule it for running and distribution. You can schedule all the built-in reports in OnCommand.
Go to the chargeback report we just created (you should be looking at the folder where you saved it…).
Select the schedule icon on the right-hand side, where you can set the properties.
As you see on the right, you can schedule the start and end dates.
You can send this report one time only by clicking Disable.
Set the schedule options for weekly, daily, monthly, and so on. Schedule this report to run and be sent to yourself at 3 p.m. every Tuesday until Feb 1, 2012. As you can see, you can schedule it biweekly, several times a week, or several times a day, or you can set it up by month, by year, and even by trigger. Lots of options.
There are a lot of options for the report format. The default is HTML, but we can override it by clicking and choosing from PDF, Excel, XML, CSV, etc.
For delivery, we can email the report, save it, or print it to a specific printer. You can send the report via e-mail to users, distribution lists, and so on. We can include a link to the report
78 Insert Technical Report Title Here
or attached it directly to the email as well NOTE They must be able to log into
OnCommand DWH to access the link
When you are done, click OK and the schedule is set.
9 ENDING COMMENTS AND FEEDBACK
I hope this lab was of value to you. Your feedback is important to the quality of this lab document. Please provide feedback to Dave Collins at davec@netapp.com.
NetApp provides no representations or warranties regarding the accuracy, reliability, or serviceability of any information or recommendations provided in this publication, or with respect to any results that may be obtained by the use of the information or observance of any recommendations provided herein. The information in this document is distributed AS IS, and the use of this information or the implementation of any recommendations or techniques herein is a customer's responsibility and depends on the customer's ability to evaluate and integrate them into the customer's operational environment. This document and the information contained herein may be used solely in connection with the NetApp products discussed in this document.
© 2012 NetApp, Inc. All rights reserved. No portions of this document may be reproduced without prior written consent of NetApp, Inc. Specifications are subject to change without notice. NetApp, the NetApp logo, Go further, faster, xxx, and xxx are trademarks or registered trademarks of NetApp, Inc. in the United States and/or other countries. All other brands or products are trademarks or registered trademarks of their respective holders and should be treated as such. TR-XXX-XX
Expand Hosts again. Notice your busiest servers are at the top of the list.
Use the vertical scroll bar to go to the bottom of the host list to see your least busy hosts. As you see here, there are many hosts near the bottom that have hardly any traffic. Note: If you have a virtualization project going on, you can very quickly isolate which physical hosts don't have much traffic to their applications and conduct your due diligence on those applications for possible relocation to a VM environment.
You can also use the same information to choose which ESX hosts are good candidates to move those applications to, based on how much traffic they are generating on the SAN.
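The shortlisting described above amounts to sorting hosts by observed SAN traffic and keeping the quietest ones. A rough sketch, with hypothetical host names, throughput figures, and threshold:

```python
# Hypothetical average SAN traffic per physical host (MB/s)
host_traffic = {
    "host-a": 412.0,
    "host-b": 3.1,
    "host-c": 150.0,
    "host-d": 0.4,
}

# Hosts below a (user-chosen) traffic threshold are candidates for
# due diligence before relocating their applications to VMs.
THRESHOLD_MBPS = 10.0
candidates = sorted(
    (h for h, t in host_traffic.items() if t < THRESHOLD_MBPS),
    key=lambda h: host_traffic[h],
)
print(candidates)
```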
4.4 STORAGE ARRAY PERFORMANCE BASED ON SAN TRAFFIC
We use the same logic and methods to optimize the traffic across the storage ports of the arrays.
Collapse the Hosts section and expand the Storage section.
You can see the busiest arrays at the top.
Expand storage array XP 1024 to see the traffic flow through the storage ports. In this case, over 80% of the traffic is going across two of the six ports on the storage array. Not very well balanced. You can rebalance this traffic, OR, using this information, you can select a lesser-used storage port to provision your NEXT Tier 1 application to. This helps you intelligently provision and optimize your environment using real traffic analysis.
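The imbalance called out above can be quantified as each port's share of the array's total traffic. A small sketch with hypothetical per-port numbers chosen to mirror the "two of six ports" situation:

```python
# Hypothetical traffic per storage port on one array (MB/s)
port_traffic = {"P1": 310, "P2": 290, "P3": 40, "P4": 30, "P5": 20, "P6": 10}

total = sum(port_traffic.values())
shares = {p: t / total for p, t in port_traffic.items()}

# Share carried by the two busiest ports -- here over 80%,
# the same kind of skew the lab points out on the XP 1024.
top_two = sum(sorted(shares.values(), reverse=True)[:2])
print(f"top two ports carry {top_two:.0%} of traffic")
```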
4.5 STORAGE TIERING AND ANALYSIS
Just as you did with the hosts, you can see the storage arrays that are NOT so busy.
Scroll to the bottom of the Storage Array list.
There are several expensive Tier 1 Symmetrix and other arrays at the bottom of this list that have very little traffic accessing them. These arrays may have lots of data on them, but nobody's using it. Armed with this information, you could take a look at the application data on these expensive Tier 1 arrays and move the applications to less expensive Tier 2 or Tier 3 arrays, OR archive the data. Then you can decommission or repurpose these expensive arrays (LOTS of ROI potential here).
4.6 SWITCH ISL TRAFFIC VISIBILITY AND OPTIMIZATION
OnCommand Insight shows you only the ISLs (Inter-Switch Links) under the switches category.
Collapse the Storage category in the main view and expand the Switch category.
Expand Switch 78 and hcis300.
As we saw with hosts and arrays, we can see exactly how well balanced the traffic is across the ISLs. Switch hcis300 is well balanced, but on Switch 78 we see that 90% of the traffic is going across one switch link and only 9% across the other. If this is a trunk, it is severely out of balance.
We also see which are the busiest and least busy switches. This allows us to balance out (optimize) our environment as well as weed out the least busy switches.
4.7 VIRTUAL MACHINE AND DATA STORE PERFORMANCE: TROUBLESHOOTING END-TO-END PERFORMANCE ISSUES USING "ANALYZE PERFORMANCE"
Let's put all this performance information to good use.
USE CASE: I may have gotten a call from a user complaining that the application on VM-70 is running slow, or I may have received an alert from a threshold being breached. Let's troubleshoot the problem.
Select Virtual Machine Performance.
Select Custom from the "Timeframe" dropdown menu next to the grouping menu. Enter the dates January 1, 2012 through now.
Then click the green recycle button next to the dropdown.
Sort the VM Disk Top Latency column in descending order to get the longest latency at the top.
Here we see that VM-70 does not, in fact, appear to have any performance issues, but we do see very high CPU, memory, and data store latency on VM-60 and VM-61.
Look at column 2. The common factor between VM-70 (the user complaint) and VM-60 is DS-30.
Open the Datastore Performance microview to validate the high latency.
Right-click VM-60 and select Analyze Performance.
This opens an analysis of everything associated with VM-60 and DS-30.
See the tabs across the top of the window. Each of these tabs provides in-depth visibility into performance within its category.
Selecting the Disk tab, I see that although there are a few high "top" utilization values, overall utilization and IOPS are relatively low, so we can rule out a hot-disk issue.
Select the Volumes tab and then the Internal Volumes tab. I see there are some relatively high top response times but still very low IOPS, which tells me other factors are affecting response time and the slowness of the application on VM-70.
Select Backend Volumes. We see the storage is virtualized, and we can see the performance of the backend volumes here. I see some possibly higher IOPS but still no glaring performance issues.
To make sure I don't have a SAN problem, I select the Switch Performance tab. It shows an imbalance between the 2Gb HBAs on ESX1 (where VM-60 and VM-70 are) and a potential optimization or outage issue, but no gridlock.
Select the Hosts tab. This tab shows me that ESX1 is the same host that holds VM-60 and VM-70. VM-60 appears to be causing very high CPU and memory usage, which is causing contention with time sharing during disk access, thus creating high disk latency. But the disk IOPS are still very low.
We can deduce that VM-60 is probably not sized right for the application that is driving it so hard. This is probably what's causing the disk latency issue, so the chances of an actual disk problem are slim.
4.8 VM PERFORMANCE
VM Performance helps you troubleshoot the same scenarios. Here you can understand what's going on in the whole environment.
Select Virtual Machine Performance.
Sort the Top Disk Latency column in descending order so the largest latency rises to the top. In this case, VM-61 is chewing up a lot of memory and a lot of CPU time but using low disk IOPS. The VM appears to be causing the latency issues.
Select VM-61. You can open a microview and see the VMDK performance as well.
Add a chart microview.
You can also break it out by volume performance and data store performance, giving you a more holistic picture of the environment and helping you troubleshoot to resolution.
The takeaway is that you can troubleshoot performance issues from many different angles and go in many different directions to quickly narrow down the problem.
4.9 APPLICATION AND HOST PERFORMANCE
You can add your applications and hosts to any of these performance views to help you understand how performance is affecting your applications. That is important to the business customer. You can drill down and understand where the performance issue is, with visibility from the application all the way to the disks.
Scroll down to ESX1.
Use the horizontal scroll bars in the main and microviews to see performance info.
OnCommand Insight shows you performance from the host perspective all the way back to the storage, but remember it does not have agents on the host, so it cannot show you the details of performance on the host itself.
Review questions:
What is the value of Analyze Performance?
In which areas can we view performance metrics under Analyze Performance?
5 PLANNING TOOLS
5.1 TASK AND ACTION PLANNING AND VALIDATION
OnCommand Insight has 2 planning tools to help you plan, validate, and monitor changes in your environment. One is a change management tool, and the other is a migration tool for switches only.
The change management (or what-if) tool helps you create tasks, and actions within those tasks, using a wizard. It helps you logically configure the changes that you need to make, test and validate those changes before you make them, and monitor the progress of the changes as you make them. This significantly reduces your risk when making changes, because you can pretest them before you make any actual changes in your environment.
NOTE: Remember, OnCommand Insight is a READ-ONLY tool, so it does not perform any active tasks. Use it in the planning, validating, and execution monitoring of your change management.
Select the Planning menu.
Select Plans to access the tool.
Select the task ID oadmin 01082007 – Replace HBA Clearcase1.
Notice the Actions list for the task. These are generated by you to help you logically and accurately list out the tasks.
To add more actions, simply right-click in the action area and select "Add Action".
In the new action window, scroll down and select the action you want to perform. You can add a description and other parameters, then select OK.
Then you can pre-validate the actions to ensure you know the results of each action BEFORE you actually perform the task. To do this, right-click the task and select Validate Task.
As you see below, OnCommand Insight validates each action against the current configuration in your environment, showing what has been completed correctly (GREEN CHECK MARK), what is not completed (BLANK BOX), and what is not completed correctly (RED X).
When you build the action list, OnCommand Insight automatically compares your planned changes to your existing environment and anticipates any future violations that could occur if you made these changes without correcting the planned actions, OR violations that already exist in your environment.
Once you complete your list of action items, you can right-click and validate the actions as many times as you want until they are completed. OnCommand Insight validates every one of these actions. It shows you whether the actions are complete, done incorrectly, or not completed at all. It gives you a preview of potential issues before you make the changes, thus lowering your risk.
5.2 SWITCH MIGRATION TOOL
The migration tool provides you with instantaneous visibility into all of the environment and business entities that will be affected by a migration to new or updated switches. Say you want to just update the firmware on a switch. What if it goes down in the middle of the upgrade? What does that affect in your environment? Knowing this ahead of time can reduce your risk by giving you the complete picture of who and what will be affected by the interruption.
The migration tool allows you to tell OnCommand Insight which switches you want to upgrade or replace. Because OnCommand Insight knows all the hosts, storage arrays, volumes, business units, and applications that are affected by this change, it can provide you with the current violations as well as the future violations that will occur when the switches are pulled out. This enables you to validate the total impact of the changes you want to make BEFORE you make them, so you can reduce your risk by fixing issues before they occur.
NOTE: Remember, OnCommand Insight is a READ-ONLY tool, so it does not perform any migration tasks. Use it in the planning and execution monitoring of your migration.
Under the Planning menu, select Migrations. This shows you the migration tasks already created and the existing impact of proposed changes on your business entities.
To add a new task, right-click in the task area and select Add Task.
Complete the task details above and click Next to select the switch(es) to migrate.
Select the switches to be updated or replaced and click Finish.
Select the new task in the main screen and use the microviews to see the affected paths, impact, and quality assurance views.
Using this information, you can speed up switch migrations, because it cuts the due diligence time and lowers your risk: you know the impacts before you take any actions.
6 DATA WAREHOUSE
6.1 INTRODUCTION AND OVERVIEW
Let's introduce the data warehouse. We'll talk about the datamarts and navigation, then go into the reports, and finish by showing you how to create ad hoc reports using Query Studio.
The data warehouse is made up of several datamarts. Datamarts are sets of data that relate to each other.
Open a browser and go to http://localhost:8080/reporting.
Log on using admin/admin123.
If you receive this page, uncheck "show this page…" and select My Home.
Data warehouse (DWH) home page: Public Folders
The data warehouse has several built-in datamarts. Above you see the 3 primary datamarts: the Chargeback Datamart, the Inventory Datamart, and the Storage Efficiency Datamart. Additionally, we have two folders which contain other datamarts, for Capacity and Performance.
Select the Capacity 6.3 folder.
As you can see, there are other capacity-related datamarts, including the Internal Volume, Volume, Storage and Storage Pool, and VM Capacity datamarts. Datamarts provide you with easy-to-use data elements related to those specific categories, making it easier for you to use the existing reports and, more importantly, helping you create your own custom reports using the drag-and-drop approach we'll show later in this lab.
Select the Storage Capacity Datamart.
There are 4 folders located within EVERY datamart. Most built-in reports are in the Reports folder. Any custom reports you create MUST be saved in the Customer Report or Customer Dashboard folders in order to preserve them during upgrades.
Select Dashboards (notice the BREADCRUMBS to help you navigate).
Which dashboards are located in the folder?
6.2 PLAN - CAPACITY FORECAST DASHBOARD
The data warehouse has over 200 built-in dashboards and reports. Let's take a look at a few.
The capacity forecast dashboard provides a history of how storage has been used, as well as trends and forecasts into the future. It shows this by data center and by tier.
Select the Capacity Dashboard. This may take a bit of time to paint, so be patient.
The capacity forecast dashboard provides you with trending and forecasting of your capacity across your entire environment. NOTE: Your data in the picture may vary depending on the demo DB you are using and the date (because it's a trending chart).
While we are at it, let's also stage the tiering dashboard in a new window, by holding the Shift key and selecting the Tiering Dashboard, so we can discuss it as well in a few minutes.
When it first opens, you see in the upper left the Capacity Consumption Forecast report by datacenter and tier. The initial view shows how much storage is left in each datacenter, by tier, before it reaches 80% of capacity (adjustable by the user). The graph on the right depicts the usage trending and forecasting over time. The "Reset Selection" button resets the graphic to show storage trending across the entire enterprise.
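The forecast behind the dashboard is essentially a trend line projected out to the 80% line. A simplified sketch using a plain linear trend and hypothetical monthly usage figures (the dashboard's actual forecasting model is not documented here):

```python
# Hypothetical used capacity per month for one datacenter/tier (TB)
usage = [50.0, 53.0, 56.0, 59.0, 62.0]   # months 0..4
capacity_tb = 100.0
threshold = 0.8 * capacity_tb            # the 80% line on the dashboard

# Simple linear trend: average growth per month
growth = (usage[-1] - usage[0]) / (len(usage) - 1)
months_left = (threshold - usage[-1]) / growth
print(f"~{months_left:.0f} months until the {threshold:.0f} TB threshold")
```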
Select the Tokyo/Gold-Fast block on the matrix. Notice the graph at the right changes to reflect the storage consumption trending and forecasting for that tier at that datacenter.
Reset the Storage Capacity Trend chart by clicking Reset Selection next to the matrix chart. The chart on the right will show the trending and forecasting for the entire enterprise.
Scroll down the dashboard to view the list of reports on the right side. Each of the dashboards has a list of related reports on the lower right-hand side. You can select from any number of different reports to provide the detailed information that you need.
The dashboard also contains some dial graphics showing you storage consumption and capacity in your enterprise and in each datacenter.
Continuing down the left side of the dashboard, these charts show you business-level storage consumption by business entities. Here we can drill down to see usage by tenant, line of business, business unit, and project.
Right-click in this graphic and you can drill down to view storage usage by line of business, drill again to business unit, and then by project.
As you can see, you get really detailed information on consumption by your business entities, from tenant, LOB, business unit, and project to application, in a very quick form.
6.3 TIER DASHBOARD
Let's take a look at the Tier Dashboard that we opened a few minutes ago by selecting it from the tabs at the bottom of your Windows screen.
Note: Your data may vary depending on the database used for this demo.
This dashboard gives us a different perspective on how storage is growing and how it is being used. As you see, it looks like the Gold tier has remained relatively stable over the past few months, while Gold-Fast storage, which is more expensive, has grown considerably over the past couple of months. This tells you how your tiering initiatives are progressing. Bronze, which has hardly grown at all, could be an indication that we're spending too much money on storage. You might want to review your storage usage using OnCommand Insight to see how the storage is being consumed and by whom.
Scroll down. Let's look a little closer. OnCommand Insight shows storage usage by business units, applications, and tier. This enables you to understand how storage is being used. You can also view by data center, tier, and business entity.
As we did in the last report, you can right-click and drill down to look at consumption by tenant, line of business, business unit, project, and application. You can understand how your data is being consumed at multiple levels and from multiple aspects.
Select the "Return" icon at the top right of the Tier Dashboard to return to the folder.
There is a new storage tier report located in the Storage and Storage Pool Datamart. Let's take a quick look at it.
Use the breadcrumbs to navigate back to the Capacity 6.3 folder.
Then select the Storage and Storage Pool Capacity Datamart and the Reports folder.
Next, select the Storage Capacity By Tier report to view the report below. This report shows your capacity by tier and how it trends over time. It also provides detail and summary sections at the bottom, showing each array's tiers, how much capacity is used, and the percentages (lots of information on a single report).
6.4 ACCOUNTABILITY AND COST AWARENESS
The standard data warehouse chargeback reports are more about accountability than about chargeback. We'll show you this now. We'll also show you how to create your own powerful "custom" chargeback/showback reports using Business Insight Advanced later in this lab.
Select Public Folders in the breadcrumbs at the top left of the data warehouse window.
Select the Chargeback Datamart.
In the Chargeback Datamart, select the Reports folder to access various reports that show capacity and accountability.
Select Capacity Accountability by Business Entity and Service Level Detail. Here you have the option to customize this report to your needs by selecting service levels, resource types, applications, and host and storage names. You also have the option of selecting the business entity, using the dropdown to select any or all of the business entities and projects.
Select all in each category to give you a good representation of the in-depth reporting.
Then click Finish.
The report provides a very detailed view of capacity utilization by business entity and application, including the host it's running on, the storage array, the volume, and the actual provisioned and used storage. This report is grouped by business unit as well as application; this gives you a good representation of who's using what storage.
Note the scroll bar for scrolling on page 1; you can also use the Page Up/Page Down links at the bottom to go to page 2, and so on.
Select the Return icon in the upper right to return to the folder of reports.
6.5 UNCHARGED STORAGE
You can also generate reports that help you understand what storage is NOT being accounted for.
Select "Capacity Accountability by Uncharged Capacity per Internal Volume". This provides you with a complete listing, by array and volume, of how much storage is not being charged or accounted for.
You get FULL accountability of which storage is being accounted for and which storage is NOT, across the entire enterprise, regardless of storage vendor.
6.6 IOPS VS CAPACITY REPORTING IN THE DATA WAREHOUSE
Let's look at performance versus capacity and orphaned storage by last access. This adds another dimension to how your storage is being used.
Open the Performance Datamart (hint: use the breadcrumbs to select Public Folders and then select the Performance Datamart).
Select the Internal Volume Daily Performance folder. This provides a really good pictorial view of how your storage is being used.
Select Reports and select Allocated/Used Internal Volume Count by IOPS Ranges. This provides a capacity-versus-IOPS report, which is very interesting.
Select the Last Year time period.
Select all storage models and tiers and click Finish.
Selecting all arrays and all tiers gives you a full view of how your storage is being used (or not being used…).
Looking at the results, remember this is storage accessed over the past year. The resulting report shows you all the storage that has (or has not) been accessed over the past year.
As you see from the first bar, there are over 7300 volumes that have not been accessed in the past year. If we look at it in terms of size, over 3.4 PB has had zero access in the past year. Note: this is actual customer data, but the names have been sanitized.
You can see how impactful this is. There is over 3.4 PB of storage that has had zero use for a year. This information enables you to start making business decisions on that storage and to better understand how it's being used, so you can reclaim and repurpose some of it (talk about ROI!).
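The reclaim math behind this chart is just counting and summing the volumes with no access in the reporting window. A sketch with a hypothetical volume list:

```python
# Hypothetical volumes: (name, capacity_gb, iops_over_past_year)
volumes = [
    ("vol01", 2048, 0),
    ("vol02", 512, 120_000),
    ("vol03", 4096, 0),
    ("vol04", 1024, 5),
]

# Volumes with zero IOPS over the year are reclaim candidates.
idle = [(name, gb) for name, gb, iops in volumes if iops == 0]
idle_gb = sum(gb for _, gb in idle)
print(f"{len(idle)} idle volumes, {idle_gb / 1024:.1f} TB reclaimable")
```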
6.7 DIGGING INTO THE DETAILS
These charts are really nice, but you need the details to effectively work on identification and recovery. OK, let's go look at the underlying details.
Go back to the Volume Daily Performance 6.3 folder and drill down to Reports (hint: it's in the Performance Datamart).
Select the Array Performance report. This gives you a complete breakdown of the performance for all storage, from the arrays all the way down to the volumes.
Select one year and set the IOPS parameter you want to filter on (I usually start at the default).
This report starts with the orphan summary.
Page down to view the storage array summary.
As you see, this is pretty high level. It shows the total amount of raw and allocated capacity in each storage device versus the total IOPS and the max IOPS actually used over the past year. This tells a very compelling story, but it's still high level.
Page down a few pages to reach the bottom of this section. You see a glossary of terms explaining the column headings.
Now continue to page down to the host tables. These show you the hostname, the raw and allocated capacity by host, and the IOPS accessed over the past year. This is more detail than the storage tables above.
Page down past the host tables and you are looking at the orphaned volumes perspective. Here is a great deal of detail that you can use. These are all the volumes that have not been accessed in a full year. It shows you the array name, volume capacities, and hostname, as well as the applications and tiers, for everything that has not been accessed in the last year.
Page down to the "Volume by IOPS" tables (they may be several pages down). These show you the storage array, volume, capacity, host, application, tier, and the max and total IOPS. So we can say it's a pretty well-rounded report that shows you actual usage (or lack thereof), so you can go reclaim the storage that is not used.
6.8 VM CAPACITY REPORTING
There are several different reports in the VM Capacity Datamart.
Navigate to the VM Capacity 6.3 Datamart.
As you see, we have several reports built in here already.
Select VM Capacity 6.3 and then navigate into the Reports folder.
Select VM Capacity Summary.
Select all so we see the VM capacity across the entire enterprise (spanning multiple vCenters).
The results show all the VMs, their capacity, the data store, the actual capacity, the VM names, the provisioned storage, and the commit ratio of each VM across your entire environment. NOTE: I paged down to the bottom so you can see the total storage and commitment across your whole enterprise, plus a glossary of terms.
Select the "Return" button in the upper right corner of the report (it looks like a left-turn arrow).
Next, select the Inactive VMs report to show you VMs that have not been accessed in a defined period of time (the default is 60 days).
Set this time threshold and click Finish.
This is an excellent report showing you which VMs are powered off and how long they have been powered off, as well as how much capacity each one of those is holding that nobody else can use. It gives you all the details, including the datacenter, VM OS, ESX host, cluster, VMDK, and how long it's been powered off. Armed with this information, you can go recover these VMs and reclaim the storage.
7 CREATE AD-HOC REPORT
Let's show you how easy it is to create custom reports in the data warehouse.
7.1 HOW TO CREATE A CUSTOM SHOWBACK/CHARGEBACK REPORT USING BUSINESS INSIGHT ADVANCED
Below is a great example of the custom chargeback or showback report that you will create. It shows usage by business entity and application, including variable cost per VM based on configuration, fixed overhead, and storage usage.
STEPS TO CREATE THIS REPORT
Watch a video on how to create this report. Note: You need a user name and password for this community. To obtain them, click the Become a Member link.
The OnCommand Insight Reporting Portal is accessed through http://<reporting-server>:8080/reporting.
Enter your user name and password credentials.
From the Welcome page, select My home.
From the Launch menu (at the top right corner of the OnCommand Insight Reporting portal), select Business Insight Advanced.
From the list of all packages that appears, click the Capacity <version> folder and then click VM Capacity <version>.
Create a new report by selecting New from the dropdown in the upper left corner, or Create New if you are on the Business Insight Advanced landing page.
From the pre-defined report layouts in the New pop-up, choose List and click OK.
In the lower right pane, select the Source tab and expand Advanced Data Mart from the VM Capacity package.
From the Advanced Data Mart, expand Business Entity Hierarchy and Business Entity, and drag Tenant onto the report work area.
Collapse Advanced Data Mart and expand Simple Data Mart.
From Simple Data Mart, drag Application onto the report work area to the right of the Tenant column (TIP: Make sure you place it on the blinking gray BAR to the right of the previous column, or it will give you an error).
Now we are going to drag multiple columns to the palette to save time building the report.
We will be reporting on the total number of processors (cores) and the memory that is configured for each VM, so let's grab the following elements from the VM Dimension under the Advanced Data Mart.
From Advanced Data Mart, expand VM Dimension.
Select the next columns IN THE FOLLOWING ORDER.
From Advanced Data Mart > VM Dimension, hold the Control key and select the following columns (in order):
o VM Name
o Processors
o Memory
Click and drag VM Name onto the report work area to the right of the Application column. NOTE: All the columns should follow in the order you selected them, similar to the screenshot below (your data will differ, but the columns will be the same).
Now let's bring capacity information onto the report.
From Simple Data Mart, hold the Control key and select the following columns (in order):
o Tier
o Tier Cost
o Provisioned Capacity (GB)
Click and drag the Tier column onto the report work area to the right of the Application column. NOTE: All the columns should follow in the order you selected them, similar to the screenshot below (your data will differ, but the columns will be the same).
To create a summary of cost per GB, hold the Control key and select Tier Cost and Provisioned Capacity (GB).
Then right-click the Provisioned Capacity column, select Calculate, and select the multiplication calculation.
Business Insight Advanced creates a new column for you, completes the calculation, and puts it in the report.
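The calculated column is a per-row product: Storage Cost = Tier Cost × Provisioned Capacity (GB). A quick sketch of the same arithmetic with hypothetical rows:

```python
# Hypothetical report rows: (tier, tier_cost_per_gb, provisioned_gb)
rows = [
    ("Gold", 3.00, 500.0),
    ("Silver", 1.50, 1200.0),
    ("Bronze", 0.50, 2000.0),
]

# Same per-row multiplication Business Insight Advanced adds
# when you select Calculate on the two columns.
report = [(tier, cost * gb) for tier, cost, gb in rows]
for tier, storage_cost in report:
    print(f"{tier}: ${storage_cost:,.2f}")
```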
Next, let's format and re-title the column.
Right-click the new column header and select Show Properties.
In the lower right corner, scroll down to the bottom of the Properties box and select the ellipsis in the Data Item Name box. Change the name to Storage Cost and click OK.
Note the column heading is now Storage Cost.
Now select one of the numeric values in that column and select the Data Format ellipsis from the Properties box in the lower right corner.
From the Data Format dialog box, select Currency from the Format Type dropdown.
As you see from the Properties dialog box, there are lots of options you can set to format the currency numbers in this column. The default is USD, so let's just click OK to accept the default. You will see the column reformat to USD.
Here is our current report Letrsquos filter out storage that is NOT being charged
Select any BLANK cell in the Tier Cost Column and click on the filter ICON in the
top toolbar
Select Exclude Null
Here is our current report Notice all the cells that had NO cost associated with those tiers are deleted leaving you with only the storage that has charges associated with it (TIP in another report you can actually reverse the logic and show only storage that is NOT being charged as wellhellip)
You can also format the Tier Cost column as USD currency if you want.
OK, that was easy, but not complete. Let's add other cost factors into your chargeback report: the cost of VM Service Levels by configuration, and the fixed overhead costs used by each application.
ADDING VARIABLE COSTS PER VM TO YOUR CHARGEBACK REPORT
Let's say the customer wants to charge per VM based on the number of CPUs and the amount of memory it's configured with. To do that, we first need to create a VM Service Level composed of the number of CPUs and the memory configured for each VM, then allocate a cost per Service Level.
To create a VM Service Level, we are going to drop in a small conditional expression to build the Service Levels per VM. This is an easy example of the flexibility of Business Insight Advanced in creating reports. (DON'T panic: you can skip the conditional expression and just put a fixed cost on each VM if you want; see the Overhead example later on… but humor me here in this lab.)
Select the Tier column to mark where we want to insert the new columns.
Select the Toolbox tab in the lower right corner and double-click the Query Calculation icon.
In the Create Calculation dialog box, name the column VM Service Level, select Other Expression, and click OK.
In the Data Item Expression dialog box, copy and paste the following VM Service Level conditional expression into the Expression Definition box and select OK. (Note: if you are remoted into the OnCommand Insight server, you may have to create a text document on the OnCommand server desktop to paste this into before pasting it into the Expression box.)
Below is an example of the conditional expression that gives you the if-else condition for VM Service Level:
IF ([Processors] = 2 AND [Memory] < 2049)
THEN ('Bronze')
ELSE (IF ([Processors] = 2 AND [Memory] < 4097)
THEN ('Bronze_Platinum')
ELSE IF ([Processors] = 4 AND [Memory] < 8193)
THEN ('Silver')
ELSE IF ([Processors] = 4 AND [Memory] > 8193)
THEN ('Silver_Platinum')
ELSE IF ([Processors] = 6 AND [Memory] > 8191)
THEN ('Gold')
ELSE IF ([Processors] = 8 AND [Memory] > 16383)
THEN ('Gold_Platinum')
ELSE ('tbd'))
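For reference, the same bucketing logic reads as follows in Python (thresholds copied from the expression above; note that, as written, a 4-CPU VM with exactly 8193 MB falls through to "tbd"):

```python
def vm_service_level(processors, memory_mb):
    """Python rendering of the VM Service Level conditional expression."""
    if processors == 2 and memory_mb < 2049:
        return "Bronze"
    if processors == 2 and memory_mb < 4097:
        return "Bronze_Platinum"
    if processors == 4 and memory_mb < 8193:
        return "Silver"
    if processors == 4 and memory_mb > 8193:
        return "Silver_Platinum"
    if processors == 6 and memory_mb > 8191:
        return "Gold"
    if processors == 8 and memory_mb > 16383:
        return "Gold_Platinum"
    return "tbd"  # the expression's final ELSE branch

print(vm_service_level(2, 1024))   # Bronze
print(vm_service_level(4, 16384))  # Silver_Platinum
```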
Business Insight Advanced will validate the conditional expression (nice to know if you got it right), create the column called VM Service Level, and populate it based on the query. (If you get an error, your conditional expression probably has a syntax or other error.)
You will see a new column added called VM Service Level, with the various Service Levels for each VM based on the number of CPUs and the memory each has. (At this point there may be duplicates in the list, but not to worry; we are not finished formatting or grouping the report.)
Next, let's add a column that calculates the cost per VM based on the Service Levels we just established.
Select the Toolbox tab in the lower right corner and double-click the Query Calculation icon.
In the Create Calculation dialog box, name the column Cost Per VM, select Other Expression, and click OK.
In the Data Item Expression dialog box, paste the conditional expression for Cost per VM (below) into the Expression Definition box and select OK.
Example of the conditional expression for Cost per VM:
IF ([VM Service Level] = 'Bronze') THEN (10)
ELSE (IF ([VM Service Level] = 'Bronze_Platinum') THEN (15)
ELSE IF ([VM Service Level] = 'Silver') THEN (20)
ELSE IF ([VM Service Level] = 'Silver_Platinum') THEN (25)
ELSE IF ([VM Service Level] = 'Gold') THEN (40)
ELSE IF ([VM Service Level] = 'Gold_Platinum') THEN (55)
ELSE (30))
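The expression amounts to a lookup table with a default. A Python sketch using the sample dollar figures from the expression above:

```python
# Sample costs per Service Level, taken from the expression above.
COST_PER_VM = {
    "Bronze": 10, "Bronze_Platinum": 15,
    "Silver": 20, "Silver_Platinum": 25,
    "Gold": 40, "Gold_Platinum": 55,
}

def cost_per_vm(service_level):
    # Any unmatched level (e.g. "tbd") gets the final ELSE value of 30.
    return COST_PER_VM.get(service_level, 30)

print(cost_per_vm("Silver"))  # 20
print(cost_per_vm("tbd"))     # 30
```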
You will see a new column added called Cost Per VM, with variable costs for each VM based on its Service Level.
Next, format the data in the Cost Per VM column to USD currency as you did above.
ADDING FIXED OVERHEAD COSTS TO YOUR CHARGEBACK REPORT
Let's say the customer has determined that the total cost for overhead (including items like heat/AC, floor space, power, rent, operations personnel, helpdesk, etc.) is $24 per VM. Let's create a column called Cost of Overhead and apply this fixed cost. (Note: you can do this for any fixed costs as well, rather than using SQL.)
Select the Toolbox tab in the lower right corner and double-click the Query Calculation icon, as above.
In the Create Calculation dialog box, name the column Cost of Overhead, select Other Expression, and click OK.
In the Data Item Expression dialog box, enter a cost of 24 in the Expression Definition box and select OK.
You will see a new column added called Cost of Overhead, with 24 for each VM. (Note: at this point there may be duplicates in the list, but not to worry; we are not finished formatting or grouping the report.)
Next, format the data in the Cost of Overhead column to USD currency as you did above. Then drag the column header and drop it to the right of the Storage Cost column, as shown below.
Subtotaling, naming, and saving the report
Now that we have a cost per VM, overhead, and the cost of storage usage by Tenant, Application, and VM, let's sum the total costs and finish formatting the report by Tenant and Application.
Hold the Control key down and select a numeric cell in each of the Cost Per VM, Storage Cost, and Cost of Overhead columns. Right-click one of the numeric cells, select Calculate, and choose the add function for the three columns.
This will create a new column called "Cost Per VM + Storage Cost + Cost of Overhead" and calculate each row.
Now format the column for USD currency and retitle the column "Total Cost of Services".
Name the report "Total Storage VM and Overhead Cost by Tenant and Application Chargeback (Showback)" by double-clicking the title area.
Now save it to the Customer Report folder using the same name
FORMATTING AND GROUPING THE REPORT BY APPLICATION AND TENANT
We are not done yet. Now we need to format the report by grouping, subtotaling, and totaling by Tenant and Application.
Hold the Control key down and select the Cost Per VM, Provisioned Capacity, Storage Cost, Cost of Overhead, and Total Cost of Services columns.
Select the Total icon from the Summary dropdown.
If you page down to the bottom of the report, you will see total columns. We'll clean up the summary rows in a minute.
Let's group the report by Tenant and Application.
Hold the Control key down and select the Tenant and Application columns.
Select the Grouping icon from the top toolbar.
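Conceptually, the grouping and summary steps collapse to a keyed sum: one subtotal per Tenant/Application pair plus a grand total. A sketch with invented rows (the values are placeholders, not demo-database data):

```python
from collections import defaultdict

# Invented rows with the report's grouping keys and the Total Cost column.
rows = [
    {"tenant": "ACME", "application": "ERP",  "total_cost": 2044.0},
    {"tenant": "ACME", "application": "ERP",  "total_cost": 530.0},
    {"tenant": "ACME", "application": "Mail", "total_cost": 74.0},
]

subtotals = defaultdict(float)        # one subtotal per Tenant/Application
for r in rows:
    subtotals[(r["tenant"], r["application"])] += r["total_cost"]
grand_total = sum(subtotals.values())  # the report's bottom Total row

print(subtotals[("ACME", "ERP")], grand_total)  # 2574.0 2648.0
```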
CLEANING UP THE REPORT AND RUNNING IT
To clean up the report, right-click and delete the summary ROWS (not columns).
Then go to the bottom of the report, hold the Control key and select both summary rows, then right-click and delete them. (Leave the TOTAL rows.)
Save the report.
Now let's run the report to see how it looks.
Select the Run icon from the toolbar and run the report as HTML. (Note the other formats you can run it in if you want…)
The report will show like this in its final format. I've paged down in the report below to show you subtotals; you can page to the bottom and see the totals by company and the total of all resources charged.
These reports are extremely flexible to do what you need. Notice the drill-down link in the Tenant column (pictured above in the red circle). If you click the LINK, you will drill down from Tenant to Line of Business, then to Business Unit, etc. If you right-click the link, you can drill up as well.
You can now schedule this report to run and distribute in various formats, like any other OnCommand Insight Data Warehouse report.
Remember, now that you have created this report, every time you run it, it will provide the latest usage information. You can automate this report by scheduling it to run and emailing it to recipients, etc. Lots of flexibility…
7.2 OTHER OPTIONS FOR AD-HOC REPORTS USING QUERY STUDIO
You can also create simple ad-hoc reports by using Query Studio. A very simple example is shown here.
Log on to the data warehouse using Admin/admin123 (you must be logged on as Admin to use Query Studio).
From Public Folders, select the Chargeback Datamart.
Select the Launch menu in the upper right corner of the view and select Query Studio.
The Datamart is set up as a "simple Datamart" and an "advanced Datamart". The simple DM contains the elements that most users use for reports; the advanced DM contains all the facts and dimensions for all the elements. At this point we'll create this report using the simple DM to show you how easy it is.
Expand the simple DM and do the following:
Click and drag Business Unit to the palette.
Click and drag the Application element to the palette. You see the applications line up with their proper Business Units.
Click and drag Tier over to the palette to organize the storage usage by tier.
Click and drag "Provisioned raw by GB". (You can select megabytes or terabytes as well as gigabytes; I've selected GB because this is from a volume perspective and an application perspective.)
To calculate cost, we need to add the "Tier cost" to the report.
Click and drag the "Tier cost" element over and place it between the Provisioned Raw and Tier columns.
To filter out any storage without a tier cost associated, right-click the heading of the Tier Cost column and select Filter (see below for reference):
o Select "Show only following" (default)
o Select "Missing values" to expand it
o Select "Leave out missing values"
o Select OK
See Results below
Now let's calculate the total cost of usage by GB per application.
Hold the Control key and highlight the "Provisioned Capacity" and Tier Cost columns until they show yellow.
Select the green Calculation icon at the top of the edit icons above, or right-click the columns and select "Calculate".
In the calculation window, select multiplication, title the new column "Cost for Storage", and click Insert. It creates a new column and completes the calculation.
To format the column, right-click the new column and select Format Data.
Select currency, the number of decimal places (usually 0), and the 1000s separator, and click OK. See how the column is formatted now.
Double-click the "Title" on the report and re-title the report "Chargeback by Application and BU".
Now you don't really need the Tier Cost column, so you can delete it by right-clicking the column and selecting Delete.
This is a good raw report, but now let's make it more useful.
To group storage cost by Business Unit and Application:
Select the Business Unit column (it turns yellow) and select the Group By icon on the top line.
You see the report reformat itself into cost by application by business unit.
Click the "Save As" icon and save the report to the public folders.
Further Editing
You can go back and further edit the report like this:
Let's filter out all the N/A and N/As values in the BU and Application columns. You have to do this one column at a time.
Right-click the BU column and select Filter.
In the filter dialog window, select "Do not show the following (NOT)" from the "Condition" dropdown.
Select N/A and click OK.
Do the same for the Application column.
Then save the report again.
As you see, you now have a better-quality report.
To exit Query Studio, click the "Return" icon at the top right corner of the screen.
8 SCHEDULING REPORTS FOR DISTRIBUTION
OK, now that the report is saved, let's schedule it for running and distribution. You can schedule all the built-in reports in OnCommand.
Go to the chargeback report we just created (you should be looking at where you saved it…).
Select the schedule icon on the right-hand side, where you can set the properties.
As you see on the right, you can schedule the start and finish dates.
You can send this report just one time by clicking Disable.
Set the schedule options for weekly, daily, monthly, etc. Schedule this report to run and send it to yourself at 3 p.m. every Tuesday until Feb 1, 2012. As you can see, you can schedule biweekly, several times a week, or several times a day, or you can also set it up by month, year, and even by trigger. As you see, lots of options.
There are a lot of options for report format. The default format is HTML, but we can override that default by clicking and choosing from PDF, Excel, XML, CSV, etc.
For delivery, we can email it, save it, or print the report to a specific printer. You can send the report via e-mail to users, distribution lists, etc. We can include a link to the report
or attach it directly to the email as well. NOTE: Recipients must be able to log in to the OnCommand DWH to access the link.
When you are done, click OK, and the schedule is set.
9 ENDING COMMENTS AND FEEDBACK
I hope this lab was of value to you. Your feedback is important to the quality of this lab document. Please provide feedback to Dave Collins at davec@netapp.com.
NetApp provides no representations or warranties regarding the accuracy, reliability, or serviceability of any information or recommendations provided in this publication, or with respect to any results that may be obtained by the use of the information or observance of any recommendations provided herein. The information in this document is distributed AS IS, and the use of this information or the implementation of any recommendations or techniques herein is a customer's responsibility and depends on the customer's ability to evaluate and integrate them into the customer's operational environment. This document and the information contained herein may be used solely in connection with the NetApp products discussed in this document.
© 2012 NetApp, Inc. All rights reserved. No portions of this document may be reproduced without prior written consent of NetApp, Inc. Specifications are subject to change without notice. NetApp, the NetApp logo, Go further, faster, xxx, and xxx are trademarks or registered trademarks of NetApp, Inc. in the United States and/or other countries. All other brands or products are trademarks or registered trademarks of their respective holders and should be treated as such. TR-XXX-XX
There are several expensive tier 1 Symmetrix and other arrays at the bottom of this list that have very little traffic accessing them. These arrays may have lots of data on them, but nobody's using it. Armed with this information, you could take a look at the application data on these expensive tier 1 arrays and move the applications to less expensive tier 2 or tier 3 arrays, OR archive it. Then you can decommission or repurpose these expensive arrays. (LOTS of ROI potential here.)
4.6 SWITCH ISL TRAFFIC VISIBILITY AND OPTIMIZATION
OnCommand Insight shows you only the ISLs (Inter-Switch Links) under the Switches category.
Collapse the Storage category in the main view and expand the Switch category
Expand Switch 78 and hcis300
As we saw with hosts and arrays, we can see exactly how well balanced the traffic is across the ISLs. Switch hcis300 is well balanced, but on Switch 78 we see that 90% of the traffic is going across one switch link and only 9% across the other. If this is a trunk, it is severely out of balance.
We also see which switches are the busiest and least busy. This allows us to balance out (optimize) our environment as well as weed out the least busy switches.
4.7 VIRTUAL MACHINE AND DATA STORE PERFORMANCE: TROUBLESHOOTING END-TO-END PERFORMANCE ISSUES USING "ANALYZE PERFORMANCE"
Let's put all this performance information to good use.
USE CASE: I may have gotten a call from a user complaining that the application on VM-70 is running slow, or I may have received an alert from a threshold being breached. Let's troubleshoot the problem.
Select Virtual Machine Performance
Select Custom from the "Timeframe" dropdown menu next to the grouping menu. Enter the dates January 1, 2012 through now.
Then hit the green recycle (refresh) button next to the dropdown.
Sort the VM Disk Top Latency column descending to get the longest latency at the top
Here we see that, in fact, VM-70 does not appear to have any performance issues, but we do see very high CPU, memory, and data store latency on VM-60 and VM-61.
Look at column 2. The common factor between VM-70 (the user complaint) and VM-60 is DS-30.
Open the Datastore Performance microview to validate the high latency time
Right-click VM-60 and select Analyze Performance.
This opens an analysis of everything associated with VM-60 and DS-30.
See the tabs across the top of the window. Each of these tabs provides in-depth visibility into performance within each category.
Selecting the Disk tab, I see that although there are a few high "top" utilization values, overall utilization and IOPS are relatively low, so we can rule out a hot-disk issue.
Select the Volumes tab and then the Internal Volumes tab. I see there are some relatively high top response times but still very low IOPS, which tells me other factors are affecting response time and the slowness of the application on VM-70.
Select Backend Volumes. We see the storage is virtualized, and we can see the performance of the backend volumes here. I see some possibly higher IOPS, but still no glaring issues in performance.
To make sure I don't have a SAN problem, I select the Switch Performance tab. It shows an imbalance between the 2Gb HBAs on ESX1 (where VM-60 and VM-70 are) and a potential optimization or outage issue, but no gridlock.
Select the Hosts tab. This tab shows me that host ESX1 is the same host that holds VM-60 and VM-70. VM-60 appears to be driving very high CPU and memory usage, which is causing contention with time sharing during disk access, thus creating high disk latency. But the disk IOPS are still very low.
We can deduce that VM-60 is probably not sized right for the application that is driving it hard. This is probably what's causing the disk latency issue, so the chances of a disk issue are slim.
4.8 VM PERFORMANCE
VM Performance helps you troubleshoot the same scenarios. Here you can understand what's going on in the whole environment.
Select Virtual Machine Performance
Sort the Top Disk Latency column in descending order so the largest latency rises to the top. In this case, VM-61 is chewing up a lot of memory and a lot of CPU time but using low disk IOPS. The VM appears to be causing the latency issues.
Select VM-61. You can open a microview and see the VMDK performance as well.
Add a chart microview.
You can also break it out by volume performance and data store performance, giving you a more holistic picture of the environment and helping you troubleshoot to resolution.
The takeaway is you can troubleshoot performance issues from many different angles and go in many different directions to quickly narrow down the problem
4.9 APPLICATION AND HOST PERFORMANCE
You can add your applications and hosts to any of these performance views to help you understand how performance is affecting your applications. That is important to the business customer. You can drill down and understand where the performance issue is, with visibility from the application all the way to the disks.
Scroll down to ESX1
Use the horizontal slide bars in the main and microviews to see performance info
OnCommand Insight shows you performance from the host perspective all the way back to the storage, but remember, it does not have agents on the host, so it cannot show you the details of the performance on the host itself.
Review questions:
What is the value of Analyze Performance?
In what areas can we view performance metrics under Analyze Performance?
5 PLANNING TOOLS
5.1 TASK AND ACTION PLANNING AND VALIDATION
OnCommand Insight has two planning tools to help you plan, validate, and monitor changes in your environment. One is a change management tool, and the other is a migration tool for switches only.
The change management tool (or "what-if" tool) helps you create tasks, and actions within those tasks, using a wizard. It helps you logically configure the changes that you need to make, test and validate those changes before you make them, and monitor the progress of changes as you make them. This significantly reduces your risk when making changes, because you can pretest them before you make any actual changes in your environment.
NOTE: Remember, OnCommand Insight is a READ-ONLY tool, so it does not perform any active tasks. Use it in the planning, validation, and execution monitoring of your change management.
Select Planning Menu
Select Plans to access the tool
Select the task ID oadmin 01082007 – Replace HBA Clearcase1.
Notice the Actions list for the task. These are generated by you to help you logically and accurately list out the tasks.
To add more actions, simply right-click in the action area and select "Add Action".
In the new action window, scroll down and select the action you want to perform. You can add a description and other parameters, then select OK.
Then you can pre-validate the actions to ensure you know the results of each action BEFORE you actually perform the task. To do this, right-click the task and select Validate Task.
As you see below, OnCommand Insight validates each action against the current configuration in your environment, showing what has been completed correctly (GREEN CHECKMARK), what is not completed (BLANK BOX), and what is not completed correctly (RED X).
When you build the action list, OnCommand Insight automatically compares your planned changes to your existing environment and anticipates any future violations that could occur if you made these changes without correcting the planned actions, OR violations that already exist in your environment.
Once you finish creating your list of action items, you can right-click and validate the actions as many times as you want until they are completed. OnCommand Insight validates every one of these actions. It will show you whether the actions are complete, done incorrectly, or not completed at all. It gives you a preview of potential issues before you make the changes, thus lowering your risk.
5.2 SWITCH MIGRATION TOOL
The migration tool provides you with instantaneous visibility into all of the environment and business entities that will be affected by a migration to new or updated switches. Say you want to just update the firmware on a switch. What if… it goes down in the middle of the upgrade? What does it affect in your environment? Knowing this ahead of time can reduce your risk by giving you the complete picture of who and what will be affected by the interruption.
The migration tool allows you to tell OnCommand Insight which switches you want to upgrade or replace. Because OnCommand Insight knows all the hosts, storage arrays, volumes, business units, and applications that are affected by this change, it can provide you with the current violations as well as the future violations that will occur when the switches are pulled out. This enables you to validate the total impact of the changes you want to make BEFORE you make them, so you can reduce your risk by fixing issues before they occur.
NOTE: Remember, OnCommand Insight is a READ-ONLY tool, so it does not perform any migration tasks. Use it in the planning and execution monitoring of your migration.
Under the Planning menu, select Migrations. This shows you the migration tasks already created and the existing impact on your business entities of the proposed changes.
To add a new task, right-click in the task area and select Add Task.
Complete the task details above and click Next to select the switch(es) to migrate.
Select the switches to be updated or replaced and click Finish
Select the new task in the main screen and use the microviews to see the affected paths, impact, and quality assurance views.
Using this information, you can speed up switch migrations, because it cuts the due-diligence time and lowers your risk, since you know the impacts before you take any action.
6 DATA WAREHOUSE
6.1 INTRODUCTION AND OVERVIEW
Let's introduce you to the data warehouse. We'll talk about the Datamarts and navigation, then we'll go into the reports, and we'll finish by showing you how to create ad-hoc reports using Query Studio.
The data warehouse is made up of several Datamarts. Datamarts are sets of data that relate to each other.
Open a browser and go to http://localhost:8080/reporting
Log on using admin/admin123
If you receive this page, uncheck "Show this page…" and select My Home.
Data warehouse (DWH) home page – Public Folders
The data warehouse (DWH) has several built-in Datamarts. Above you see the three primary Datamarts: the Chargeback Datamart, the Inventory Datamart, and the Storage Efficiency Datamart. Additionally, we have two folders that contain other Datamarts, for Capacity and Performance.
Select the Capacity 6.3 folder.
As you can see, there are other capacity-related Datamarts, including the Internal Volume, Volume, Storage and Storage Pool, and VM Capacity Datamarts. Datamarts provide you with easy-to-use data elements related to those specific categories, making it easier for you to use the existing reports and, more importantly, helping you create your own custom reports using the drag-and-drop approach we'll show later in this lab.
Select the Storage Capacity Datamart
There are four folders located within EVERY Datamart. Most built-in reports are in the Reports folder. Any custom reports you create MUST be saved in the Customer Report or Customer Dashboard folders in order to preserve them during upgrades.
Select Dashboards (notice the BREADCRUMBS to help you navigate).
Which dashboards are located in the folder?
6.2 PLAN – CAPACITY FORECAST DASHBOARD
The data warehouse has over 200 built-in dashboards and reports. Let's take a look at a few.
The capacity forecast dashboard provides a history of how storage has been used, as well as trends and forecasts into the future. It shows this by data center and by tier.
Select the Capacity Dashboard. This may take a bit of time to paint, so be patient.
The capacity forecast dashboard provides you with trending and forecasting of your capacity across your entire environment. NOTE: your data in the picture may vary depending on the demo db you are using and the date (because it's a trending chart).
While we are at it, let's also stage the tiering dashboard in a new window, by holding the Shift key and selecting the Tiering Dashboard, so we can discuss it as well in a few minutes.
When it first opens, you see in the upper left the Capacity Consumption Forecast report by datacenter and tier. The initial view shows how much storage is left in each datacenter, by tier, before it reaches 80% (adjustable by the user) of capacity. The graph on the right depicts the usage trending and forecasting over time. The "Reset Selection" button resets the graphic to show storage trending across the entire enterprise.
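Under the hood, this kind of forecast is essentially a trend line projected out to the threshold. A minimal sketch (a least-squares fit over monthly samples; the capacities are invented, and Insight's actual forecasting model may differ):

```python
def months_until_threshold(history_tb, capacity_tb, threshold=0.80):
    """Fit a linear trend to monthly used-capacity samples and estimate
    how many months until usage crosses threshold * capacity."""
    n = len(history_tb)
    xm = (n - 1) / 2                      # mean of month indexes 0..n-1
    ym = sum(history_tb) / n              # mean usage
    slope = sum((i - xm) * (y - ym) for i, y in enumerate(history_tb)) / \
            sum((i - xm) ** 2 for i in range(n))
    if slope <= 0:
        return None  # flat or shrinking usage never reaches the threshold
    current = ym + slope * (n - 1 - xm)   # fitted value for the latest month
    return max(0.0, (threshold * capacity_tb - current) / slope)

# 4 TB/month growth against 100 TB raw -> the 80 TB limit in 7 months
print(months_until_threshold([40, 44, 48, 52], 100))  # 7.0
```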
Select the Tokyo/Gold-Fast block on the matrix. Notice that the graph at the right changes to reflect the storage consumption trending and forecasting for that tier at that datacenter.
Reset the Storage Capacity Trend chart by clicking Reset Selection next to the matrix chart. The chart on the right will show the trending and forecasting for the entire enterprise.
Scroll down the dashboard to view the list of reports on the right side. Each of the dashboards has a list of related reports on the lower right-hand side. You can select from any number of different reports to provide the detailed information that you need.
The dashboard also contains some dial graphics showing you storage consumption and capacity in your enterprise and in each datacenter.
Continuing down the left side of the dashboard, these charts show you business-level storage consumption by business entities. Here we can drill down to see usage by Tenant, Line of Business, Business Unit, and Project.
Right-click in this graphic and you can drill down to view storage usage by line of business, drill again to business unit, and then by project.
As you can see, you get really detailed information on consumption by your business entities, from Tenant, LOB, Business Unit, and Project to Application, in a very quick form.
6.3 TIER DASHBOARD
Let's take a look at the Tier Dashboard that we opened up a few minutes ago, by selecting it from the tabs at the bottom of your Windows screen.
Note: Your data may vary depending on the database used for this demo.
This dashboard gives us a different perspective on how storage is growing and how it is being used. As you see, it looks like the gold tier has remained relatively stable over the past few months, while gold-fast storage, which is more expensive, has grown considerably over the past couple of months. This tells you how your tiering initiatives are progressing. Bronze, which has hardly grown at all, could be an indication that we're spending too much money on storage. You might want to review your storage usage using OnCommand Insight to see how the storage is being consumed, and by whom.
Scroll down. Let's look a little closer. OnCommand Insight shows storage usage by business unit, application, and tier. This enables you to understand how storage is being used. You can also view it by data center, tier, and business entity.
As we did in the last report, you can right-click and drill down to look at consumption by tenant, line of business, business unit, project, and application. You can understand how your data is being consumed at multiple levels and from multiple aspects.
Select the "Return" icon at the top right of the Tier Dashboard to return to the folder.
There is a new storage tier report located in the Storage and Storage Pool Datamart. Let's take a quick look at it.
Use the breadcrumbs to navigate back to the Capacity 6.3 folder.
Then select the Storage and Storage Pool Capacity Datamart and the Reports folder.
Next, select the Storage Capacity By Tier report to view the report below. This report shows your capacity by tier and how it trends over time. It also provides a great detail and summary report at the bottom, showing each array, its tiers, how much capacity is used, and the percentage (lots of information in a single report).
6.4 ACCOUNTABILITY AND COST AWARENESS
The standard data warehouse chargeback reports are more about accountability than actual chargeback. We'll show you this now. We'll also show you how to create your own powerful "custom" chargeback/showback reports using Business Insight Advanced later in this lab.
Select Public Folders in the breadcrumbs at the top left of the data warehouse window.
Select the Chargeback Datamart.
In the Chargeback Datamart, select the Reports folder to access various reports that show capacity and accountability.
Select Capacity Accountability by Business Entity and Service Level Detail. Here you have the option to customize this report to your needs by selecting service levels, resource types, applications, and host and storage names. You also have the option of selecting the business entity, by using the dropdown to select any or all of the business entities and projects.
Select All in each category to get a good representation of the in-depth reporting. Then click Finish.
The report provides a very detailed view of capacity utilization by business entity and application, including the host it's running on, the storage array, the volume, and the actual provisioned and used storage. The report is grouped by business unit as well as by application; this gives you a good representation of who's using what storage.
Note the scroll bar for scrolling on page 1; you can also use the Page Up/Page Down links at the bottom to go to page 2, etc.
Select the Return icon in the upper right to return to the folder of reports.
6.5 UNCHARGED STORAGE
You can also generate reports that help you understand what storage is NOT being accounted for.
Select "Capacity Accountability by Uncharged Capacity per Internal Volume". This provides you with a complete listing, by array and volume, of how much storage is not being charged or accounted for.
You get FULL accountability of which storage is being accounted for and which storage is NOT, across the entire enterprise, regardless of storage vendor.
6.6 IOPS VS CAPACITY REPORTING IN THE DATA WAREHOUSE
Let's look at performance versus capacity and orphaned storage by last access. This adds another dimension to how your storage is being used.
Open the Performance Datamart (hint: use the breadcrumbs to select Public Folders and then select the Performance Datamart).
Select the Internal Volume Daily Performance folder. This provides a really good pictorial view of how your storage is being used.
Select Reports and select Allocated used internal volume Count by IOPS Ranges. This provides a capacity-versus-IOPS report, which is very interesting.
Select the Last Year time period.
Select all storage models and tiers and click Finish.
Selecting all arrays and all tiers gives you a full view of how your storage is being used (or not being used…).
Looking at the results, remember this is storage accessed over the past year. The resulting report shows you all the storage that has (or has not) been accessed over the past year.
As you see from the first bar, there are over 7,300 volumes that have not been accessed in the past year. If we look at it in terms of size, over 3.4 PB has had zero access in the past year. Note: this is actual customer data, but the names have been sanitized.
You can see how impactful this is. There is over 3.4 PB of storage that has had zero use for a year. This information enables you to start making business decisions about the storage and to better understand how it's being used, so you can reclaim and re-purpose some of that storage. (Talk about ROI!)
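The zero-access analysis behind this screen can be reasoned about outside the tool as well. A minimal sketch, assuming a hypothetical list of volumes with their capacity and total IOPS over the period (the names and numbers are illustrative, not taken from the report):

```python
# Hypothetical inventory: (volume name, capacity in GB, total IOPS over the past year)
volumes = [
    ("array1:vol001", 500, 0),
    ("array1:vol002", 1200, 84000),
    ("array2:vol003", 2048, 0),
    ("array2:vol004", 750, 120),
]

# Volumes with zero IOPS over the whole period are reclaim candidates.
reclaim = [(name, gb) for name, gb, iops in volumes if iops == 0]
reclaim_tb = sum(gb for _, gb in reclaim) / 1024  # GB -> TB

print(f"{len(reclaim)} candidate volumes, {reclaim_tb:.2f} TB reclaimable")
```

At real scale this is exactly the business case the report makes: the zero-access rows, summed up, are capacity you can repurpose.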
6.7 DIGGING INTO THE DETAILS
These charts are really nice, but you need the details to work effectively on identification and recovery. OK, let's go look at the underlying details.
Go back to the Volume Daily Performance 6.3 folder and drill down to Reports (hint: it's in the Performance Datamart).
Select the Array Performance report. This gives you a complete breakdown of the performance for all storage, from the arrays all the way down to the volumes.
Select one year and set the IOPS parameter you want to filter on (I usually start at the default).
This report starts with the Orphan Summary.
Page down to view the storage array summary.
As you see, this is pretty high level. It shows the total amount of raw and allocated capacity in each storage device versus the total IOPS and the max IOPS actually used over the past year. This tells a very compelling story, but it's still high level.
Page down a few pages to reach the bottom of this section. You see a glossary of terms explaining the column headings.
Now continue to page down to the host tables. These show you the hostname, the raw and allocated capacity by host, and the IOPS accessed over the past year. This is more detail than the storage tables above.
Page down past the host tables and you reach the orphaned-volumes perspective. Here is a great deal of detail that you can use. These are all the volumes that have not been accessed in a full year. It shows you the array name, volume capacities, and hostname, as well as the applications and tiers, for everything that has not been accessed in the last year.
Page down to the "Volume by IOPS" tables (they may be several pages down). These show you the storage array, volume capacity, host, application, tier, and the max and total IOPS. So we can say it's a pretty well-rounded report that shows you actual usage (or lack thereof) so you can go reclaim the storage that is not used.
6.8 VM CAPACITY REPORTING
There are several different reports in the VM Capacity Datamart.
Navigate to the VM Capacity 6.3 Datamart.
As you see, we have several reports built in here already.
Select VM Capacity 6.3 and then navigate into the Reports folder.
Select VM Capacity Summary.
Select all so we see the VM capacity across the entire enterprise (spanning multiple vCenters).
The results show all the VMs, their capacity, the datastore, the actual capacity, the VM names, the provisioned storage, and the commit ratio of each VM across your entire environment. NOTE: I paged down to the bottom so you can see the total storage and commitment across your whole enterprise, plus a glossary of terms.
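The lab doesn't spell out how the commit ratio is computed, but a common over-commitment calculation divides the capacity promised to VMs by the capacity the datastore physically has. A hypothetical sketch (the formula and numbers are assumptions for illustration, not OnCommand Insight's documented definition):

```python
def commit_ratio(provisioned_gb: float, datastore_capacity_gb: float) -> float:
    """Over-commitment of a datastore: capacity promised to VMs divided by
    physical capacity. Above 1.0 means thin provisioning has over-committed.
    (Hypothetical formula, for illustration only.)"""
    return provisioned_gb / datastore_capacity_gb

# Example: 3 TB promised to VMs on a 2 TB datastore
ratio = commit_ratio(3072, 2048)  # 1.5x over-committed
```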
Select the "return" button in the upper right corner of the report (it looks like a left-turn arrow).
Next, select the Inactive VMs report, which shows you VMs that have not been accessed in a defined period of time (default 60 days).
Set this time threshold and click Finish.
This is an excellent report showing you which VMs are powered off, how long they have been powered off, and how much capacity each one is holding that nobody else can use. It gives you all the details, including the data center, VM, OS, ESX host, cluster, and VMDK, and how long it's been powered off. Armed with this information, you can go recover these VMs and reclaim the storage.
7 CREATE AD-HOC REPORT
Let's show you how easy it is to create custom reports in the data warehouse.
7.1 HOW TO CREATE A CUSTOM SHOWBACK/CHARGEBACK REPORT USING BUSINESS INSIGHT ADVANCED
Below is a great example of the custom chargeback or showback report that you will create. It shows usage by business entity and application, including the variable cost of each VM based on its configuration, fixed overhead, and storage usage.
STEPS TO CREATE THIS REPORT
Watch a video on how to create this report. Note: you need a user name and password for this community. To obtain them, click the Become a Member link.
The OnCommand Insight Reporting Portal is accessed through http://<reporting-server>:8080/reporting
Enter your user name and password credentials.
From the Welcome page, select My home.
From the Launch menu (at the top right corner of the OnCommand Insight Reporting portal), select Business Insight Advanced.
From the list of all packages that appears, click the Capacity <version> folder and then click VM Capacity <version>.
Create a new report by selecting New from the dropdown in the upper left corner, or Create New if you are on the Business Insight Advanced landing page.
From the pre-defined report layouts in the New pop-up, choose List and click OK.
In the lower right pane, select the Source tab and expand the Advanced Data Mart from the VM Capacity package.
From the Advanced Data Mart, expand Business Entity Hierarchy and Business Entity, then drag Tenant and place it on the report work area.
Collapse the Advanced Data Mart and expand the Simple Data Mart.
From the Simple Data Mart, drag Application and place it on the report work area to the right of the Tenant column. (TIP: make sure you place it on the blinking gray bar to the right of the previous column, or it will give you an error.)
Now we are going to drag multiple columns to the palette to save time building the report. We will be reporting on the total number of processors (cores) and the memory configured for each VM, so let's grab the following elements from the VM Dimension under the Advanced Data Mart.
From the Advanced Data Mart, expand VM Dimension.
Select the next columns IN THE FOLLOWING ORDER:
From Advanced Data Mart > VM Dimension, hold the Control key and select the following columns (in order):
o VM Name
o Processors
o Memory
Click and drag VM Name and place it on the report work area to the right of the Application column. NOTE: All the columns should follow in the order you selected them, similar to the screenshot below (your data will differ, but the columns will be the same).
Now let's bring capacity information onto the report.
From the Simple Data Mart, hold the Control key and select the following columns (in order):
o Tier
o Tier Cost
o Provisioned Capacity (GB)
Click and drag the Tier column and place it on the report work area to the right of the Application column. NOTE: All the columns should follow in the order you selected them, similar to the screenshot below (your data will differ, but the columns will be the same).
To calculate the storage cost (tier cost per GB multiplied by provisioned GB), hold the Control key and select Tier Cost and Provisioned Capacity (GB).
Then right-click the Provisioned Capacity column, select Calculate, and select the multiplication calculation.
Business Insight Advanced has created a new column for you, completed the calculations, and put it in the report.
Next, let's format and re-title the column.
Right-click the new column header and select Show Properties.
In the lower right corner, scroll down to the bottom of the Properties box and select the ellipsis on the Data Item Name box. Change the name to Storage Cost and click OK.
Note that the column heading is now Storage Cost.
Now select one of the numeric values in that column and select the Data Format ellipsis from the Properties box in the lower right corner.
From the Data Format dialog box, select Currency from the Format Type dropdown.
As you see from the Properties dialog box, there are lots of options you can set to format the currency numbers in this column. The default is USD, so let's just click OK to accept the default. You will see the column reformat to USD.
Here is our current report. Let's filter out storage that is NOT being charged.
Select any BLANK cell in the Tier Cost column and click the Filter icon in the top toolbar.
Select Exclude Null.
Notice that all the rows that had NO cost associated with their tiers are removed, leaving you with only the storage that has charges associated with it. (TIP: in another report you can reverse the logic and show only the storage that is NOT being charged.)
You can also format the Tier Cost column as USD currency if you want.
OK, that was easy, but not complete. Let's add other cost factors into your chargeback report: the cost of VM service levels by configuration, and the fixed overhead costs used by each application.
ADDING VARIABLE COSTS PER VM TO YOUR CHARGEBACK REPORT
Let's say the customer wants to charge per VM based on the number of CPUs and the amount of memory it's configured with. To do that, we first need to create a VM service level based on the number of CPUs and the memory configured for each VM, and then allocate a cost per service level.
To create a VM service level, we are going to drop in a small conditional expression to build the service levels per VM. This is an easy example of the flexibility of Business Insight Advanced in creating reports. (DON'T panic: you can skip the conditional expression and just put a fixed cost on each VM if you want; see the Overhead example later on… but humor me here in this lab.)
Select the Tier column to mark where we want to insert the new columns.
Select the Toolbox tab in the lower right corner and double-click the Query Calculation icon.
In the Create Calculation dialog box, name the column VM Service Level, select Other Expression, and click OK.
In the Data Item Expression dialog box, copy and paste the following VM Service Level conditional expression into the Expression Definition box and select OK. (Note: if you are remoted into the OnCommand Insight server, you may have to create a text document on the OnCommand server desktop to paste this into before pasting it into the Expression box.)
Below is an example of the conditional expression that gives you the if-else logic for the VM Service Level:
IF ([Processors] = 2 AND [Memory] < 2049)
THEN ('Bronze')
ELSE (IF ([Processors] = 2 AND [Memory] < 4097)
THEN ('Bronze_Platinum')
ELSE IF ([Processors] = 4 AND [Memory] < 8193)
THEN ('Silver')
ELSE IF ([Processors] = 4 AND [Memory] > 8193)
THEN ('Silver_Platinum')
ELSE IF ([Processors] = 6 AND [Memory] > 8191)
THEN ('Gold')
ELSE IF ([Processors] = 8 AND [Memory] > 16383)
THEN ('Gold_Platinum')
ELSE ('tbd'))
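To sanity-check the expression's boundaries before pasting it in, the same logic can be mirrored in plain Python (an illustrative aid, not part of the lab steps):

```python
def vm_service_level(processors: int, memory_mb: int) -> str:
    """Mirror of the VM Service Level conditional expression above."""
    if processors == 2 and memory_mb < 2049:
        return "Bronze"
    elif processors == 2 and memory_mb < 4097:
        return "Bronze_Platinum"
    elif processors == 4 and memory_mb < 8193:
        return "Silver"
    elif processors == 4 and memory_mb > 8193:
        return "Silver_Platinum"
    elif processors == 6 and memory_mb > 8191:
        return "Gold"
    elif processors == 8 and memory_mb > 16383:
        return "Gold_Platinum"
    # Anything unmatched (including 4 CPUs with exactly 8193 MB, a boundary
    # the expression leaves open) falls through to 'tbd'.
    return "tbd"
```

Walking a few configurations through a mirror like this makes it easy to spot gaps such as that exact-boundary case before the report goes into production.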
Business Insight Advanced will validate the conditional expression (nice to know if you got it right), create the column called VM Service Level, and populate it based on the query. (If you get an error, your conditional expression probably has a syntax or other error.)
You will see a new column added called VM Service Level, with the various service levels for each VM based on the number of CPUs and the memory each has. (At this point there may be duplicates in the list, but not to worry; we are not finished formatting or grouping the report.)
Next, let's add a column that calculates the cost per VM based on the service levels we just established.
Select the Toolbox tab in the lower right corner and double-click the Query Calculation icon.
In the Create Calculation dialog box, name the column Cost Per VM, select Other Expression, and click OK.
In the Data Item Expression dialog box, paste the conditional expression for Cost of VM (below) into the Expression Definition box and select OK.
Example of the conditional expression for Cost per VM:
IF ([VM Service Level] = 'Bronze') THEN (10)
ELSE (IF ([VM Service Level] = 'Bronze_Platinum') THEN (15)
ELSE IF ([VM Service Level] = 'Silver') THEN (20)
ELSE IF ([VM Service Level] = 'Silver_Platinum') THEN (25)
ELSE IF ([VM Service Level] = 'Gold') THEN (40)
ELSE IF ([VM Service Level] = 'Gold_Platinum') THEN (55)
ELSE (30))
You will see a new column added called Cost Per VM, with variable costs for each VM based on its service level.
Next, format the data in the Cost Per VM column as USD currency, as you did above.
ADDING FIXED OVERHEAD COSTS TO YOUR CHARGEBACK REPORT
Let's say the customer has determined that the total cost of overhead (including items like heat/AC, floor space, power, rent, operations personnel, helpdesk, etc.) is $24 per VM. Let's create a column called Cost of Overhead and apply this fixed cost. (Note: you can do this for any other fixed costs as well.)
Select the Toolbox tab in the lower right corner and double-click the Query Calculation icon as above.
In the Create Calculation dialog box, name the column Cost of Overhead, select Other Expression, and click OK.
In the Data Item Expression dialog box, enter a cost of 24 in the Expression Definition box and select OK.
You will see a new column added called Cost of Overhead, with 24 for each VM. (Note: at this point there may be duplicates in the list, but not to worry; we are not finished formatting or grouping the report.)
Next, format the data in the Cost of Overhead column as USD currency as you did above. Then drag the column header and drop it to the right of the Storage Cost column, as shown below.
Subtotaling, naming, and saving the report
Now that we have a cost per VM, overhead, and the cost of storage usage by tenant, application, and VM, let's sum the total costs and finish formatting the report by tenant and application.
Hold the Control key down and select a numeric cell in each of the Cost Per VM, Storage Cost, and Cost of Overhead columns. Right-click one of the numeric cells, select Calculate, then select the Add function for the three columns.
This creates a new column called "Cost Per VM + Storage Cost + Cost of Overhead" and calculates each row.
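The per-row arithmetic the report now performs can be sketched end to end. A minimal example using the illustrative service-level prices from the expressions above and a made-up tier cost (none of these numbers come from your environment):

```python
# Illustrative per-row chargeback calculation mirroring the report's columns.
SERVICE_LEVEL_COST = {
    "Bronze": 10, "Bronze_Platinum": 15, "Silver": 20,
    "Silver_Platinum": 25, "Gold": 40, "Gold_Platinum": 55,
}
OVERHEAD_PER_VM = 24  # the fixed overhead cost used in the lab

def total_cost_of_services(service_level: str,
                           tier_cost_per_gb: float,
                           provisioned_gb: float) -> float:
    """Cost Per VM + Storage Cost + Cost of Overhead for one report row."""
    cost_per_vm = SERVICE_LEVEL_COST.get(service_level, 30)  # 30 is the ELSE default
    storage_cost = tier_cost_per_gb * provisioned_gb         # the Storage Cost column
    return cost_per_vm + storage_cost + OVERHEAD_PER_VM

# A Silver VM on a $0.50/GB tier with 100 GB provisioned: 20 + 50 + 24
example = total_cost_of_services("Silver", 0.50, 100)
```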
Now format the column as USD currency and retitle the column "Total Cost of Services".
Name the report "Total Storage, VM, and Overhead Cost by Tenant and Application Chargeback (Showback)" by double-clicking the title area.
Now save it to the Customer Report folder using the same name
FORMATTING AND GROUPING THE REPORT BY APPLICATION AND TENANT
We are not done yet. Now we need to format the report by grouping, subtotaling, and totaling by tenant and application.
Hold the Control key down and select the Cost Per VM, Provisioned Capacity, Storage Cost, Cost of Overhead, and Total Cost of Services columns.
Select the Total icon from the Summary dropdown.
If you page down to the bottom of the report, you will see the total columns. We'll clean up the summary rows in a minute.
Let's group the report by Tenant and Application:
Hold the Control key down and select the Tenant and Application columns.
Select the Grouping icon from the top toolbar.
CLEANING UP THE REPORT AND RUNNING IT
To clean up the report, right-click and delete the summary ROWS (not columns).
Then go to the bottom of the report, hold the Control key, select both summary rows, right-click, and delete them. (Leave the TOTAL rows.)
Save the report
Now let's run the report to see how it looks.
Select the Run icon from the toolbar and run the report as HTML. (Note the other formats you can run it in if you want…)
The report will look like this in its final format. I've paged down in the report below to show you the subtotals; you can page to the bottom and see the totals by company and the total of all resources charged.
These reports are extremely flexible. Notice the drill-down link in the Tenant column (pictured above in the red circle). If you click the link, you will drill down from Tenant to Line of Business, then to Business Unit, and so on. If you right-click the link, you can drill up as well.
You can now schedule this report to run and be distributed in various formats, like any other OnCommand Insight Data Warehouse report.
Remember, now that you have created this report, every time you run it, it will provide the latest usage information. You can automate this report by scheduling it to run and email itself to recipients. Lots of flexibility…
7.2 OTHER OPTIONS FOR AD-HOC REPORTS USING QUERY STUDIO
You can also create simple ad-hoc reports using Query Studio. A very simple example is shown here.
Log onto the data warehouse using Admin/admin123 (you must be logged on as Admin to use Query Studio).
From Public Folders, select the Chargeback Datamart.
Select the Launch menu in the upper right corner of the view and select Query Studio.
The Datamart is split into a "simple Datamart" and an "advanced Datamart". The simple DM contains the elements most users use for reports; the advanced DM contains all the facts and dimensions for all the elements. At this point we'll create this report using the simple DM to show you how easy it is.
Expand the Simple DM and do the following:
Click and drag Business Unit to the palette.
Click and drag the Application element to the palette. You see the applications line up with their proper business units.
Click and drag Tier over to the palette to organize the storage usage by tier.
Click and drag "Provisioned raw by GB". (You can select megabytes or terabytes as well as gigabytes; I've selected GB because this is from a volume and application perspective.)
To calculate cost, we need to add the "Tier cost" to the report.
Click and drag the "Tier cost" element over and place it between the Provisioned Raw and Tier columns.
To filter out any storage without an associated tier cost, right-click the heading of the Tier Cost column and select Filter (see below for reference):
o Select "Show only the following" (default)
o Select "Missing values" to expand it
o Select "Leave out missing values"
o Select OK
See Results below
Now let's calculate the total cost of usage by GB per application.
Hold the Control key and highlight the "Provisioned Capacity" and Tier Cost columns until they show yellow.
Select the green Calculation icon at the top of the edit icons, or right-click the columns and select "Calculate".
In the calculation window, select multiplication, title the new column "Cost for Storage", and click Insert. It creates a new column and completes the calculation.
To format the column, right-click the new column and select Format Data.
Select Currency, the number of decimal places (usually 0), and the 1000s separator, and click OK. See how the column is formatted now.
Double-click the "Title" on the report and re-title the report "Chargeback by Application and BU".
Now you don't really need the Tier Cost column, so you can delete it by right-clicking the column and selecting Delete.
This is a good raw report, but now let's make it more useful.
To group storage cost by Business Unit and Application:
Select the Business Unit column (it turns yellow) and select the Group By icon on the top line.
You see the report reformat itself into cost by application by business unit.
Click the "Save As" icon and save the report to the Public Folders.
Further Editing
You can go back and further edit the report like this
Let's filter out all the N/A entries in the BU and Application columns. You have to do this one column at a time.
Right-click the BU column and select Filter.
In the filter dialog window, select "Do not show the following (NOT)" from the "Condition" dropdown.
Select N/A and click OK.
Do the same for the Application column.
Then save the report again.
As you see, you now have a better-quality report.
To exit Query Studio, click the "Return" icon at the top right corner of the screen.
8 SCHEDULING REPORTS FOR DISTRIBUTION
OK, now that the report is saved, let's schedule it for running and distribution. You can schedule all the built-in reports in OnCommand Insight.
Navigate to the chargeback report we just created (you should be looking at where you saved it…).
Select the Schedule icon on the right-hand side, where you can set the properties.
As you see on the right, you can schedule a start and finish date.
You can send this report just one time by clicking Disable.
Set the schedule options for weekly, daily, monthly, etc. Schedule this report to run and send it to yourself at 3 p.m. every Tuesday until Feb 1, 2012. As you can see, you can schedule it biweekly, several times a week, or several times a day, or you can also set it up by month, year, and even by trigger. As you see, lots of options.
There are a lot of options for the report format. The default format is HTML, but we can override that by clicking and choosing from PDF, Excel, XML, CSV, etc.
For delivery, we can email it, save it, or print the report to a specific printer. You can send the report via e-mail to users, distribution lists, etc. We can include a link to the report or attach it directly to the email. NOTE: recipients must be able to log into the OnCommand DWH to access the link.
When you are done, click OK, and the schedule is set.
9 ENDING COMMENTS AND FEEDBACK
I hope this lab was of value to you. Your feedback is important to the quality of this lab document. Please provide feedback to Dave Collins at davec@netapp.com.
NetApp provides no representations or warranties regarding the accuracy, reliability, or serviceability of any information or recommendations provided in this publication, or with respect to any results that may be obtained by the use of the information or observance of any recommendations provided herein. The information in this document is distributed AS IS, and the use of this information or the implementation of any recommendations or techniques herein is a customer's responsibility and depends on the customer's ability to evaluate and integrate them into the customer's operational environment. This document and the information contained herein may be used solely in connection with the NetApp products discussed in this document.
© 2012 NetApp, Inc. All rights reserved. No portions of this document may be reproduced without prior written consent of NetApp, Inc. Specifications are subject to change without notice. NetApp, the NetApp logo, Go further, faster, xxx, and xxx are trademarks or registered trademarks of NetApp, Inc. in the United States and/or other countries. All other brands or products are trademarks or registered trademarks of their respective holders and should be treated as such. TR-XXX-XX
As we saw with hosts and arrays, we can see exactly how well balanced the traffic is across the ISLs. Switch hcis300 is well balanced, but on switch 78 we see that 90% of the traffic is going across one switch link and only 9% across the other. If this is a trunk, it is severely out of balance.
We also see which are the busiest and least busy switches. This allows us to balance out (optimize) our environment as well as weed out the least busy switches.
4.7 VIRTUAL MACHINE AND DATA STORE PERFORMANCE: TROUBLESHOOTING END-TO-END PERFORMANCE ISSUES USING "ANALYZE PERFORMANCE"
Let's put all this performance information to good use.
USE CASE: I may have gotten a call from a user complaining that the application on VM-70 is running slow, or I may have received an alert from a threshold being breached. Let's troubleshoot the problem.
Select Virtual Machine Performance
Select Custom from the "Timeframe" dropdown menu next to the grouping menu. Enter the dates January 1, 2012 through now.
Then hit the green recycle (refresh) button next to the dropdown.
Sort the VM Disk Top Latency column in descending order to get the longest latency at the top.
Here we see that, in fact, VM-70 does not appear to have any performance issues, but we do see very high CPU, memory, and datastore latency on VM-60 and VM-61.
Look at column 2: the common factor between VM-70 (the user complaint) and VM-60 is DS-30.
Open the Datastore Performance microview to validate the high latency time
Right-click VM-60 and select Analyze Performance.
This opens an analysis of everything associated with VM-60 and DS-30.
See the tabs across the top of the window. Each of these tabs provides in-depth visibility into performance within its category.
Selecting the Disk tab, I see that although there are a few high "top" utilization values, overall utilization and IOPS are relatively low, so we can rule out a hot-disk issue.
Select the Volumes tab and the Internal Volumes tab. I see there are some relatively high top response times but still very low IOPS, which tells me other factors are affecting response time and the slowness of the application on VM-70.
Select Backend Volumes: we see the storage is virtualized, and we can see the performance of the backend volumes here. I see some possibly higher IOPS, but still no glaring performance issues.
To make sure I don't have a SAN problem, I select the Switch Performance tab. It shows an imbalance between the 2-Gb HBAs on ESX1, where VM-60 and VM-70 are, and a potential optimization or outage issue, but no gridlock.
Select the Hosts tab. This tab shows me that host ESX1 is the same host that holds VM-60 and VM-70. VM-60 appears to be causing very high CPU and memory usage, which is causing contention with time sharing during disk access, thus creating high disk latency. But the disk IOPS are still very low.
We can deduce that VM-60 is probably not sized right for the application that is driving it so hard. This is probably what's causing the disk latency issue, so the chances of a disk issue are slim.
4.8 VM PERFORMANCE
VM Performance helps you troubleshoot the same scenarios. Here you can understand what's going on in the whole environment.
Select Virtual Machine Performance
Sort the Top Disk Latency column in descending order so the largest latency rises to the top. In this case, VM-61 is chewing up a lot of memory and a lot of CPU time but using low disk IOPS. The VM appears to be causing the latency issues.
Select VM-61. You can open a microview and see the VMDK performance as well.
Add a chart microview.
You can also break it out by volume performance and datastore performance, giving you a more holistic picture of the environment and helping you troubleshoot to resolution.
The takeaway is that you can troubleshoot performance issues from many different angles and go in many different directions to quickly narrow down the problem.
4.9 APPLICATION AND HOST PERFORMANCE
You can add your applications and hosts to any of these performance views to help you understand how performance is affecting your applications. That is important to the business customer. You can drill down and understand where the performance issue is, with visibility from the application all the way to the disks.
Scroll down to ESX1
Use the horizontal slide bars in the main and microviews to see performance info
OnCommand Insight shows you performance from the host perspective all the way back to the storage, but remember, it does not have agents on the hosts, so it cannot show you the details of performance on the host itself.
Review questions:
What is the value of Analyze Performance?
What areas can we view performance metrics for under Analyze Performance?
5 PLANNING TOOLS
5.1 TASK AND ACTION PLANNING AND VALIDATION
OnCommand Insight has two planning tools to help you plan, validate, and monitor changes in your environment. One is a change management tool; the other is a migration tool for switches only.
The change management tool (or what-if tool) helps you create tasks, and actions within those tasks, using a wizard. It helps you logically configure the changes you need to make, test and validate those changes before you make them, and monitor the progress of changes as you make them. This significantly reduces your risk when making changes, because you can pre-test them before you make any actual changes in your environment.
NOTE: Remember, OnCommand Insight is a READ-ONLY tool, so it does not perform any active tasks. Use it for the planning, validating, and execution monitoring of your change management.
Select the Planning menu.
Select Plans to access the tool.
Select the task ID oadmin 01082007 - Replace HBA Clearcase1.
Notice the Actions list for the task. These are generated by you to help you logically and accurately list out the tasks.
To add more actions, simply right-click in the action area and select "Add Action".
In the new action window, scroll down and select the action you want to perform. You can add a description and other parameters, then select OK.
Then you can pre-validate the actions to ensure you know the results of each action BEFORE you actually perform the task. To do this, right-click the task and select Validate Task.
As you see below, OnCommand Insight validates each action against the current configuration in your environment to show what has been completed correctly (GREEN CHECKMARK), what is not completed (BLANK BOX), and what is not completed correctly (RED X).
When you build the action list, OnCommand Insight automatically compares your planned changes to your existing environment and anticipates any future violations that could occur if you made these changes without correcting planned actions, as well as violations that already exist in your environment.
Once you finish creating your list of action items, you can right-click and validate the actions as many times as you want until they are completed. OnCommand Insight validates every one of these actions and shows you whether each is complete, done incorrectly, or not completed at all. It gives you a preview of potential issues before you make the changes, thus lowering your risk.
5.2 SWITCH MIGRATION TOOL
The migration tool provides instantaneous visibility into all of the environment and business entities that will be affected by a migration to new or updated switches. Say you just want to update the firmware on a switch. What if it goes down in the middle of the upgrade? What does that affect in your environment? Knowing this ahead of time reduces your risk by giving you the complete picture of who and what will be affected by the interruption.
The migration tool lets you tell OnCommand Insight which switches you want to upgrade or replace. Because OnCommand Insight knows all the hosts, storage arrays, volumes, business units, and applications affected by this change, it can show you the current violations as well as the future violations that will occur when the switches are pulled out. This enables you to validate the total impact of the changes you want to make BEFORE you make them, so you can reduce your risk by fixing issues before they occur.
NOTE: Remember, OnCommand Insight is a READ-ONLY tool, so it does not perform any migration tasks. Use it in the planning and execution monitoring of your migration.
Under the Planning menu, select Migrations. This shows you the migration tasks already created and the existing impact of proposed changes on your business entities.
To add a new task, right-click in the task area and select Add Task.
Complete the task details above and click Next to select the switch(es) to migrate.
Select the switches to be updated or replaced and click Finish.
Select the new task in the main screen and use the microviews to see affected paths, impact, and quality assurance views.
Using this information, you can speed up switch migrations, because it cuts the due-diligence time, and lower your risk, because you know the impacts before you take any action.
6 DATA WAREHOUSE
6.1 INTRODUCTION AND OVERVIEW
Let's introduce you to the data warehouse. We'll talk about the datamarts and navigation, then go into the reports, and finish by showing you how to create ad-hoc reports using Query Studio.
The data warehouse is made up of several datamarts. Datamarts are sets of data that relate to each other.
Open a browser and go to http://localhost:8080/reporting
Log on using admin/admin123
If you receive this page, uncheck "show this page…" and select My Home.
Data Warehouse (DWH) Home Page, Public Folders
The data warehouse has several built-in datamarts. Above you see the three primary datamarts: the Chargeback Datamart, the Inventory Datamart, and the Storage Efficiency Datamart. Additionally, there are two folders that contain other datamarts for Capacity and Performance.
Select the Capacity 6.3 folder.
As you can see, there are other capacity-related datamarts, including the Internal Volume, Volume, Storage and Storage Pool, and VM Capacity Datamarts. Datamarts provide you with easy-to-use data elements related to those specific categories, making it easier to use the existing reports and, more importantly, helping you create your own custom reports using the drag-and-drop technology we'll show later in this lab.
Select the Storage Capacity Datamart.
There are four folders located within EVERY datamart. Most built-in reports are in the Reports folder. Any custom reports you create MUST be saved in the Customer Report or Customer Dashboard folders so that they are preserved during upgrades.
Select Dashboards (notice the BREADCRUMBS to help you navigate).
Which dashboards are located in the folder?
6.2 PLAN - CAPACITY FORECAST DASHBOARD
The data warehouse has over 200 built-in dashboards and reports. Let's take a look at a few.
The Capacity Forecast dashboard provides a history of how storage has been used, as well as trends and forecasts into the future, broken out by data center and by tier.
Select the Capacity Dashboard. This may take a bit of time to paint, so be patient.
The Capacity Forecast dashboard provides you with trending and forecasting of your capacity across your entire environment. NOTE: your data in the picture may vary depending on the demo DB you are using and the date (because it's a trending chart).
While we are at it, let's also stage the Tiering Dashboard in a new window, by holding the Shift key and selecting the Tiering Dashboard, so we can discuss it as well in a few minutes.
When it first opens, you see in the upper left the Capacity Consumption Forecast report by data center and tier. The initial view shows how much storage is left in each data center, by tier, before it reaches 80% (adjustable by the user) of capacity. The graph on the right depicts the usage trending and forecasting over time. The "Reset Selection" button resets the graphic to show storage trending across the entire enterprise.
Select the Tokyo / Gold-Fast block on the matrix. Notice that the graph at the right changes to reflect the storage consumption trending and forecasting for that tier at that data center.
Reset the Storage Capacity Trend chart by clicking Reset Selection next to the matrix chart. The chart on the right will then show the trending and forecasting for the entire enterprise.
Scroll down the dashboard to view the list of reports on the right side. Each of the dashboards has a list of related reports on the lower right-hand side. You can select from any number of different reports to get the detailed information that you need.
The dashboard also contains dial graphics showing you storage consumption and capacity in your enterprise and in each data center.
Continuing down the left side of the dashboard, these charts show you business-level storage consumption by business entity. Here we can drill down to see usage by Tenant, Line of Business, Business Unit, and Project.
Right-click in this graphic and you can drill down to view storage usage by Line of Business, drill again to Business Unit, and then by Project.
As you can see, you very quickly get really detailed information on consumption by your business entities, from Tenant, LOB, Business Unit, and Project down to Application.
6.3 TIER DASHBOARD
Let's take a look at the Tiering Dashboard that we opened a few minutes ago, by selecting it from the tabs at the bottom of your Windows screen.
Note: Your data may vary depending on the database used for this demo.
This dashboard gives us a different perspective on how storage is growing and how it is being used. As you see, it looks like the Gold tier has remained relatively stable over the past few months, while Gold-Fast storage, which is more expensive, has grown considerably over the past couple of months. This tells you how your tiering initiatives are progressing. Bronze, which has hardly grown at all, could be an indication that we're spending too much money on storage. You might want to review your storage usage using OnCommand Insight to see how the storage is being consumed, and by whom.
Scroll down. Let's look a little closer. OnCommand Insight shows storage usage by business unit, application, and tier. This enables you to understand how storage is being used. You can also view by data center, tier, and business entity.
As in the last report, you can right-click and drill down to look at consumption by tenant, line of business, business unit, project, and application. You can understand how your data is being consumed at multiple levels and from multiple aspects.
Select the "Return" icon at the top right of the Tier Dashboard to return to the folder.
There is a new storage tier report located in the Storage and Storage Pool Datamart. Let's take a quick look at it.
Use the breadcrumbs to navigate back to the Capacity 6.3 folder.
Then select the Storage and Storage Pool Capacity Datamart and the Reports folder.
Next, select the Storage Capacity By Tier report to view the report below. This report shows your capacity by tier and how it trends over time. It also provides detail and summary tables at the bottom showing each array, its tiers, how much capacity is used, and the percentage (lots of information in a single report).
6.4 ACCOUNTABILITY AND COST AWARENESS
The standard data warehouse chargeback reports are more about accountability than about actual chargeback. We'll show you this now. We'll also show you how to create your own powerful "custom" chargeback/showback reports using Business Insight Advanced later in this lab.
Select Public Folders in the breadcrumbs at the top left of the data warehouse window.
Select the Chargeback Datamart.
In the Chargeback Datamart, select the Reports folder to access various reports that show capacity and accountability.
Select Capacity Accountability by Business Entity and Service Level Detail. Here you have the option to customize this report to your needs by selecting service levels, resource types, applications, and host and storage names. You also have the option of selecting the business entity, using the dropdown to select any or all of the business entities and projects.
Select All in each category to get a good representation of the in-depth reporting. Then click Finish.
The report provides a very detailed view of capacity utilization by business entity and application, including the host it's running on, the storage array, the volume, and the actual provisioned and used storage. The report is grouped by business unit as well as by application, giving you a good representation of who's using what storage.
Note the scroll bar for scrolling on page 1; you can also use the Page Up/Page Down links at the bottom to go to page 2, etc.
Select the Return icon in the upper right to return to the folder of reports.
6.5 UNCHARGED STORAGE
You can also generate reports that help you understand what storage is NOT being accounted for.
Select "Capacity Accountability by Uncharged Capacity per Internal Volume". This provides you with a complete listing, by array and volume, of how much storage is not being charged or accounted for.
You get FULL accountability of which storage is being accounted for and which storage is NOT, across the entire enterprise, regardless of storage vendor.
6.6 IOPS VS CAPACITY REPORTING IN THE DATA WAREHOUSE
Let's look at performance versus capacity, and at orphaned storage by last access. This adds another dimension to how your storage is being used.
Open the Performance Datamart (hint: use the breadcrumbs to select Public Folders and then select the Performance Datamart).
Select the Internal Volume Daily Performance folder. This provides a really good pictorial view of how your storage is being used.
Select Reports and select Allocated used internal volume Count by IOPS Ranges. This provides a capacity-versus-IOPS report, which is very interesting.
Select the Last Year time period.
Select all storage models and tiers and click Finish. Selecting all arrays and all tiers gives you a full view of how your storage is being used (or not being used…).
Looking at the results, remember this is storage accessed over the past year. The resulting report shows you all the storage that has (or has not) been accessed over the past year.
As you see from the first bar, there are over 7,300 volumes that have not been accessed in the past year. If we look at it in terms of size, over 3.4 PB has had zero accesses in the past year. Note: this is actual customer data, but the names have been sanitized.
You can see how impactful this is. There is over 3.4 PB of storage that has had zero use for a year. This information enables you to start making business decisions about the storage and to better understand how it's being used, so you can reclaim and repurpose some of that storage. (Talk about ROI!)
6.7 DIGGING INTO THE DETAILS
These charts are really nice, but you need the detail to effectively work on identification and recovery. OK, let's go look at the underlying details.
Go back to the Volume Daily Performance 6.3 folder and drill down to Reports (hint: it's in the Performance Datamart).
Select the Array Performance report. This gives you a complete breakdown of the performance for all storage, from the arrays all the way down to the volumes.
Select one year and set the IOPS parameter you want to filter on (I usually start at the default).
This report starts with the Orphan Summary.
Page down to view the storage array summary.
As you see, this is pretty high-level. It shows the total amount of raw and allocated capacity in each storage device versus the total IOPS and the max IOPS actually used over the past year. This tells a very compelling story, but it's still high level.
Page down a few pages to reach the bottom of this section. You will see a glossary of terms explaining the column headings.
Now continue to page down to the Host tables. These show you the hostname, the raw and allocated capacity by host, and the IOPS accessed over the past year. This is more detail than the Storage tables above.
Page down past the host tables and you get the orphaned-volumes perspective. Here is a great deal of detail that you can use. These are all the volumes that have not been accessed in a full year. It shows you the array name, volume capacities, and hostname, as well as the applications and tiers, for everything that has not been accessed in the last year.
Page down to the "Volume by IOPS" tables (maybe several pages down). These show you the storage array, volume, capacity, host, application, tier, and the max and total IOPS. So we can say it's a pretty well-rounded report that shows you actual usage (or lack thereof), so you can go reclaim the storage that is not used.
6.8 VM CAPACITY REPORTING
There are several different reports in the VM Capacity Datamart.
Navigate to the VM Capacity 6.3 Datamart.
As you see, we have several reports built in here already.
Select VM Capacity 6.3 and then navigate into the Reports folder.
Select VM Capacity Summary.
Select All so we see the VM capacity across the entire enterprise (spanning multiple vCenters).
The results show all the VMs, their capacity, the datastore, the actual capacity, the VM names, the provisioned storage, and the commit ratio of each VM across your entire environment. NOTE: I paged down to the bottom so you can see the total storage and commitment across your whole enterprise, plus a glossary of terms.
Select the "return" button in the upper right corner of the report (it looks like a left-turn arrow).
Next, select the Inactive VMs report to show VMs that have not been accessed in a defined period of time (default 60 days).
Set the time threshold and click Finish.
This is an excellent report showing you which VMs are powered off, how long they have been powered off, and how much capacity each one is holding that nobody else can use. It gives you all the details, including the data center, VM, OS, ESX host, cluster, VMDK, and how long it's been powered off. Armed with this information, you can go recover these VMs or reclaim the storage.
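Conceptually, the Inactive VMs report is a filter on power-off age. A minimal sketch of that filter, with hypothetical VM names and the report's default 60-day threshold (this is illustrative, not how the product stores its data):

```python
from datetime import date, timedelta

today = date(2012, 8, 1)  # fixed "today" so the example is repeatable

# Hypothetical inventory: VM name -> date it was powered off
powered_off = {
    "vm-old-build": today - timedelta(days=120),
    "vm-staging":   today - timedelta(days=10),
}

threshold_days = 60  # the report's default inactivity threshold
inactive = [vm for vm, off_date in powered_off.items()
            if (today - off_date).days > threshold_days]
print(inactive)  # ['vm-old-build']
```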
7 CREATE AD-HOC REPORT
Let's show you how easy it is to create custom reports in the data warehouse.
7.1 HOW TO CREATE A CUSTOM SHOWBACK/CHARGEBACK REPORT USING BUSINESS INSIGHT ADVANCED
Below is a great example of the custom chargeback or showback report that you will create. It shows usage by business entity and application, including the variable cost of each VM based on configuration, fixed overhead, and storage usage.
STEPS TO CREATE THIS REPORT
Watch a video on how to create this report. Note: You need a user name and password for this community. To obtain them, click the Become a Member link.
The OnCommand Insight Reporting Portal is accessed through http://<reporting-server>:8080/reporting
Enter your user name and password credentials.
From the Welcome page, select My home.
From the Launch menu (at the top right corner of the OnCommand Insight Reporting portal), select Business Insight Advanced.
From the list of packages that appears, click the Capacity <version> folder, and then click VM Capacity <version>.
Create a new report by selecting New from the dropdown in the upper left corner, or Create New if you are on the Business Insight Advanced landing page.
From the pre-defined report layouts in the New pop-up, choose List and click OK.
In the lower right pane, select the Source tab and expand Advanced Data Mart in the VM Capacity package.
From the Advanced Data Mart, expand Business Entity Hierarchy and Business Entity, then drag Tenant onto the report work area.
Collapse Advanced Data Mart and expand Simple Data Mart.
From Simple Data Mart, drag Application onto the report work area, to the right of the Tenant column. (TIP: Make sure you place it on the blinking gray BAR to the right of the previous column, or it will give you an error.)
Now we are going to drag multiple columns to the palette at once, to save time building the report. We will be reporting on the total number of processors (cores) and the memory configured for each VM, so let's grab the following elements from the VM Dimension under the Advanced Data Mart.
From Advanced Data Mart, expand VM Dimension.
Select the next columns IN THE FOLLOWING ORDER.
From Advanced Data Mart > VM Dimension, hold the Control key and select the following columns (in order):
o VM Name
o Processors
o Memory
Click and drag VM Name onto the report work area, to the right of the Application column. NOTE: All the columns should follow in the order you selected them, similar to the screenshot below (your data will differ, but the columns will be the same).
Now let's bring capacity information onto the report.
From Simple Data Mart, hold the Control key and select the following columns (in order):
o Tier
o Tier Cost
o Provisioned Capacity (GB)
Click and drag the Tier column onto the report work area, to the right of the Application column. NOTE: All the columns should follow in the order you selected them, similar to the screenshot below (your data will differ, but the columns will be the same).
To create a summary of cost per GB, hold the Control key and select the Tier Cost and Provisioned Capacity (GB) columns.
Then right-click the Provisioned Capacity column, select Calculate, and select the multiplication calculation.
Business Insight Advanced creates a new column for you, completes the calculations, and puts it in the report.
Next, let's format and re-title the column.
Right-click on the new column header and select Show Properties.
In the lower right corner, scroll down to the bottom of the properties box and select the ellipsis on the Data Item Name box. Change the name to Storage Cost and click OK.
Note that the column heading is now Storage Cost.
Now select one of the numeric values in that column and select the Data Format ellipsis from the properties box in the lower right corner.
From the Data Format dialog box, select Currency from the Format Type dropdown.
As you see from the properties dialog box, there are lots of options you can set to format the currency numbers in this column. The default is USD, so let's just click OK to accept the default. You will see the column reformat to USD.
Here is our current report. Let's filter out storage that is NOT being charged.
Select any BLANK cell in the Tier Cost column and click the filter icon in the top toolbar.
Select Exclude Null.
Notice that all the rows that had NO cost associated with their tiers are removed, leaving you with only the storage that has charges associated with it. (TIP: in another report you can reverse the logic and show only storage that is NOT being charged.)
You can also format the Tier Cost column as USD currency if you want.
OK, that was easy, but it's not complete. Let's add other cost factors to your chargeback report: the cost of VM service levels by configuration, and fixed overhead costs used by each application.
ADDING VARIABLE COSTS PER VM TO YOUR CHARGEBACK REPORT
Let's say the customer wants to charge per VM based on the number of CPUs and the memory it's configured with. To do that, we first need to create a VM service level based on the number of CPUs and the memory configured for each VM, and then allocate a cost per service level.
To create a VM service level, we are going to drop in a small conditional expression that builds the service levels per VM. This is an easy example of the flexibility of Business Insight Advanced in creating reports. (DON'T panic: you can skip the conditional expression and just put a fixed cost on each VM if you want; see the Overhead example later on. But humor me here in this lab.)
Select the Tier column to mark where we want to insert the new columns.
Select the Toolbox tab in the lower right corner and double-click the Query Calculation icon.
In the Create Calculation dialog box, name the column VM Service Level, select Other Expression, and click OK.
In the Data Item Expression dialog box, copy and paste the following VM Service Level conditional expression into the Expression Definition box and select OK. (Note: if you are remoted into the OnCommand Insight server, you may have to create a text document on the OnCommand server desktop to paste this into first, prior to pasting it into the Expression box.)
Below is an example of the conditional expression that gives you the if-else condition for VM Service Level:
IF ([Processors] = 2 AND [Memory] < 2049)
THEN ('Bronze')
ELSE (IF ([Processors] = 2 AND [Memory] < 4097)
THEN ('Bronze_Platinum')
ELSE IF ([Processors] = 4 AND [Memory] < 8193)
THEN ('Silver')
ELSE IF ([Processors] = 4 AND [Memory] > 8193)
THEN ('Silver_Platinum')
ELSE IF ([Processors] = 6 AND [Memory] > 8191)
THEN ('Gold')
ELSE IF ([Processors] = 8 AND [Memory] > 16383)
THEN ('Gold_Platinum')
ELSE ('tbd'))
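To make the thresholds easier to verify, here is the same logic as a Python sketch. Memory values are in MB, matching the expression above; this is just a readable mirror of the conditional expression, not product code:

```python
def vm_service_level(processors: int, memory_mb: int) -> str:
    """Mirror of the VM Service Level conditional expression above.
    Thresholds are copied straight from that expression."""
    if processors == 2 and memory_mb < 2049:
        return "Bronze"
    elif processors == 2 and memory_mb < 4097:
        return "Bronze_Platinum"
    elif processors == 4 and memory_mb < 8193:
        return "Silver"
    elif processors == 4 and memory_mb > 8193:
        return "Silver_Platinum"
    elif processors == 6 and memory_mb > 8191:
        return "Gold"
    elif processors == 8 and memory_mb > 16383:
        return "Gold_Platinum"
    return "tbd"  # anything that matches no rule falls through to 'tbd'

print(vm_service_level(2, 1024))   # Bronze
print(vm_service_level(4, 4096))   # Silver
```

Walking through it like this also makes edge cases visible (for example, a 4-CPU VM with exactly 8193 MB matches no rule and falls to 'tbd'), which is worth knowing before you put the expression in a customer report.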
Business Insight Advanced validates the conditional expression (nice to know if you got it right), creates the column called VM Service Level, and populates it based on the query. (If you get an error, your conditional expression probably has a syntax or other error.)
You will see a new column called VM Service Level, with the various service levels for each VM based on the number of CPUs and the memory each has. (At this point there may be duplicates in the list, but not to worry: we are not finished formatting or grouping the report.)
Next, let's add a column that calculates the cost per VM based on the service levels we just established.
Select the Toolbox tab in the lower right corner and double-click the Query Calculation icon.
In the Create Calculation dialog box, name the column Cost Per VM, select Other Expression, and click OK.
In the Data Item Expression dialog box, paste the conditional expression for Cost Per VM (below) into the Expression Definition box and select OK.
Example of the conditional expression for Cost Per VM:
IF ([VM Service Level] = 'Bronze') THEN (10)
ELSE (IF ([VM Service Level] = 'Bronze_Platinum') THEN (15)
ELSE IF ([VM Service Level] = 'Silver') THEN (20)
ELSE IF ([VM Service Level] = 'Silver_Platinum') THEN (25)
ELSE IF ([VM Service Level] = 'Gold') THEN (40)
ELSE IF ([VM Service Level] = 'Gold_Platinum') THEN (55)
ELSE (30))
You will see a new column called Cost Per VM, with a variable cost for each VM based on its service level.
Next, format the data in the Cost Per VM column as USD currency, as you did above.
ADDING FIXED OVERHEAD COSTS TO YOUR CHARGEBACK REPORT
Let's say the customer has determined that the total cost of overhead (including items like heat/AC, floor space, power, rent, operations personnel, helpdesk, etc.) is $24 per VM. Let's create a column called Cost of Overhead and apply this fixed cost. (Note: you can use this same approach for any other fixed costs as well, without resorting to SQL.)
Select the Toolbox tab in the lower right corner and double-click the Query Calculation icon, as above.
In the Create Calculation dialog box, name the column Cost of Overhead, select Other Expression, and click OK.
In the Data Item Expression dialog box, enter a cost of 24 in the Expression Definition box and select OK.
You will see a new column called Cost of Overhead, with 24 for each VM. (Note: at this point there may be duplicates in the list, but not to worry, we are not finished formatting or grouping the report.)
Next, format the data in the Cost of Overhead column as USD currency, as you did above. Then drag the column header and drop it to the right of the Storage Cost column, as shown below.
Subtotaling, naming, and saving the report
Now that we have a cost per VM, overhead, and the cost of storage usage by Tenant, Application, and VM, let's sum the total costs and finish formatting the report by Tenant and Application.
Hold the Control key down and select a numeric cell in each of the Cost Per VM, Storage Cost, and Cost of Overhead columns. Right-click one of the numeric cells, select Calculate, and choose the add function for the three columns.
This creates a new column called "Cost Per VM + Storage Cost + Cost of Overhead" and calculates each row.
Now format the column as USD currency and retitle it "Total Cost of Services".
Name the report "Total Storage VM and Overhead Cost by Tenant and Application Chargeback (Showback)" by double-clicking the title area.
Now save it to the Customer Report folder using the same name.
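Pulling the three cost columns together, the row-level arithmetic the report performs is straightforward. A Python sketch of it, with rates copied from the expressions above (the function names and example values are illustrative):

```python
def cost_per_vm(service_level: str) -> int:
    # Per-service-level charges from the Cost Per VM expression above;
    # anything unrecognized falls through to the ELSE value of 30.
    rates = {"Bronze": 10, "Bronze_Platinum": 15, "Silver": 20,
             "Silver_Platinum": 25, "Gold": 40, "Gold_Platinum": 55}
    return rates.get(service_level, 30)

def total_cost_of_services(service_level, tier_cost, provisioned_gb,
                           overhead=24):
    # Storage Cost = Tier Cost x Provisioned Capacity (GB), plus the fixed
    # $24 overhead and the variable per-VM charge, summed per row.
    storage_cost = tier_cost * provisioned_gb
    return cost_per_vm(service_level) + storage_cost + overhead

# A Silver VM on $2/GB storage with 100 GB provisioned:
print(total_cost_of_services("Silver", 2, 100))  # 20 + 200 + 24 = 244
```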
FORMATTING AND GROUPING THE REPORT BY APPLICATION AND TENANT
We are not done yet. Now we need to format the report by grouping, subtotaling, and totaling by Tenant and Application.
Hold the Control key down and select the "Cost Per VM", "Provisioned Capacity", "Storage Cost", "Cost of Overhead", and "Total Cost of Services" columns.
Select the Total icon from the Summary dropdown.
If you page down to the bottom of the report, you will see total columns. We'll clean up the summary rows in a minute.
Let's group the report by Tenant and Application.
Hold the Control key down and select the Tenant and Application columns.
Select the Grouping icon from the top toolbar.
CLEANING UP THE REPORT AND RUNNING IT
To clean up the report, right-click and delete the summary ROWS (not columns).
Then go to the bottom of the report, hold the Control key and select both summary rows, then right-click and delete them. (Leave the TOTAL rows.)
Save the report.
Now let's run the report to see how it looks.
Select the Run icon from the toolbar and run the report as HTML. (Note the other formats you can run it in if you want…)
The report will appear in its final format, as shown. I've paged down in the report below to show you subtotals; you can page to the bottom and see the totals by company and the total of all resources charged.
These reports are extremely flexible. Notice the drill-down link in the Tenant column (pictured above in the red circle). If you click on the LINK, you will drill down from Tenant to Line of Business, then to Business Unit, etc. If you right-click on the link, you can drill up as well.
You can now schedule this report to run and be distributed in various formats, like any other OnCommand Insight Data Warehouse report.
Remember, now that you have created this report, every time you run it, it will provide the latest usage information. You can automate it by scheduling it to run and email the results to recipients, etc. Lots of flexibility…
7.2 OTHER OPTIONS FOR AD-HOC REPORTS USING QUERY STUDIO
You can also create simple ad-hoc reports using Query Studio. A very simple example is shown here.
Log on to the data warehouse using admin/admin123 (you must be logged on as Admin to use Query Studio).
From Public Folders, select the Chargeback Datamart.
Select the Launch menu in the upper right corner of the view and select Query Studio.
The datamart is split into a "simple datamart" and an "advanced datamart". The simple DM contains the elements that most users use for reports; the advanced DM contains all the facts and dimensions for all the elements. At this point we'll create this report using the simple DM, to show you how easy it is.
Expand the Simple DM and do the following:
Click and drag Business Unit to the palette.
Click and drag the Application element to the palette. You see the applications line up with their proper business units.
Click and drag Tier over to the palette to organize the storage usage by tier.
Click and drag "Provisioned Raw by GB". (You can select megabytes or terabytes as well as gigabytes; I've selected GB because this is from a volume and application perspective.)
To calculate cost, we need to add the "Tier Cost" to the report.
Click and drag the "Tier Cost" element over and place it between the Provisioned Raw and Tier columns.
To filter out any storage without a tier cost associated, right-click the heading of the Tier Cost column and select Filter (see below for reference):
o Select "Show only the following" (default)
o Select "Missing values" to expand it
o Select "Leave out missing values"
o Select OK
See the results below.
Now let's calculate the total cost of usage by GB per application.
Hold the Control key and highlight the "Provisioned Capacity" and Tier Cost columns until they show yellow.
Select the green Calculation icon at the top of the edit icons above, or right-click on the columns and select "Calculate".
In the calculation window, select multiplication, title the new column "Cost for Storage," and click Insert. It creates a new column and completes the calculation.
To format the column, right-click on the new column and select Format Data.
Select currency, the number of decimal points (usually 0), and the 1000s separator, and click OK. See how the column is formatted now.
Double-click the "Title" on the report and re-title the report "Chargeback by Application and BU."
Now you don't really need the Tier Cost column, so you can delete it by right-clicking on the column and selecting Delete.
This is a good raw report, but now let's make it more useful.
To group storage cost by Business Unit and Application:
Select the Business Unit column (it turns yellow) and select the Group By icon on the top line.
You see the report reformat itself into cost by application by business unit.
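The arithmetic behind these steps is simple: Cost for Storage = Provisioned Capacity (GB) x Tier Cost, summed per group. A minimal Python sketch of the same chargeback logic, using made-up sample rows (the column names mirror the report; the data is hypothetical):

```python
from collections import defaultdict

# Hypothetical sample rows mirroring the Query Studio columns:
# (business_unit, application, tier, provisioned_gb, tier_cost_per_gb)
rows = [
    ("Finance", "SAP", "Gold", 500, 2.0),
    ("Finance", "Exchange", "Silver", 200, 1.0),
    ("Engineering", "Build", "Bronze", 1000, 0.5),
]

# Cost for Storage = Provisioned Capacity (GB) x Tier Cost, grouped by BU,
# just as the Group By step does in the report.
cost_by_bu = defaultdict(float)
for bu, app, tier, gb, cost_per_gb in rows:
    cost_by_bu[bu] += gb * cost_per_gb

print(dict(cost_by_bu))  # {'Finance': 1200.0, 'Engineering': 500.0}
```

Query Studio performs exactly this multiply-then-group step for you when you select the two columns and the Calculate and Group By icons.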
Click the "Save As" icon and save the report to the public folders.
Further Editing
You can go back and further edit the report like this:
Let's filter out all the N/A values in the BU and Application columns. You have to do this one column at a time.
Right-click the BU column and select Filter.
In the filter dialog window, select "Do not show the following (NOT)" from the "Condition" dropdown.
Select N/A and click OK.
Do the same for the Application column.
Then save the report again.
As you can see, you now have a better-quality report.
To exit Query Studio, click the "Return" icon at the top right corner of the screen.
8 SCHEDULING REPORTS FOR DISTRIBUTION
OK, now that the report is saved, let's schedule it for running and distribution. You can schedule all the built-in reports in OnCommand.
Navigate to the chargeback report we just created (you should be looking at where you saved it…).
Select the Schedule icon on the right-hand side, where you can set the properties.
As you see on the right, you can schedule start and finish dates.
You can send this report just one time by clicking Disable.
Set the schedule options for weekly, daily, monthly, etc. Schedule this report to run and send it to yourself at 3 p.m. every Tuesday until Feb 1, 2012. As you can see, you can schedule biweekly, several times a week, or several times a day, or you can also set it up by month, by year, and even by trigger. As you see, lots of options.
There are a lot of options for report format. The default format is HTML, but we can override that default by clicking and choosing from PDF, Excel, XML, CSV, etc.
For delivery, we can email it, save it, or print the report to a specific printer. You can send the report via e-mail to users, distribution lists, etc. We can include a link to the report
or attach it directly to the email as well. NOTE: Recipients must be able to log into the OnCommand DWH to access the link.
When you are done, click OK and the schedule is set.
9 ENDING COMMENTS AND FEEDBACK
I hope this lab was of value to you. Your feedback is important to the quality of this lab document. Please provide feedback to Dave Collins at davec@netapp.com.
NetApp provides no representations or warranties regarding the accuracy, reliability, or serviceability of any information or recommendations provided in this publication, or with respect to any results that may be obtained by the use of the information or observance of any recommendations provided herein. The information in this document is distributed AS IS, and the use of this information or the implementation of any recommendations or techniques herein is a customer's responsibility and depends on the customer's ability to evaluate and integrate them into the customer's operational environment. This document and the information contained herein may be used solely in connection with the NetApp products discussed in this document.
© 2012 NetApp, Inc. All rights reserved. No portions of this document may be reproduced without prior written consent of NetApp, Inc. Specifications are subject to change without notice. NetApp, the NetApp logo, Go further, faster, xxx, and xxx are trademarks or registered trademarks of NetApp, Inc. in the United States and/or other countries. All other brands or products are trademarks or registered trademarks of their respective holders and should be treated as such. TR-XXX-XX
Right-click on VM-60 and select Analyze Performance.
This opens an analysis of everything associated with VM-60 and DS30.
See the tabs across the top of the window. Each of these tabs provides in-depth visibility into performance within each category.
Selecting the Disk tab, I see that although I have a few high "top" utilization values, overall utilization and IOPS are relatively low, so I can rule out a hot-disk issue.
Select the Volumes tab and the Internal Volumes tab. I see there are some relatively high top response times but still very low IOPS, which tells me other factors are affecting response time and the slowness of the application on VM-70.
Select Backend Volumes. We see the storage is virtualized, and we can see the performance on the backend volumes here. I see some possibly higher IOPS but still no glaring issues in performance.
To make sure I don't have a SAN problem, I select the Switch Performance tab. It shows an imbalance between the 2Gb HBAs on ESX1, where VM-60 and VM-70 are, and a potential optimization or outage issue, but no gridlock.
Select the Hosts tab. This tab shows me that host ESX1 is the same host that holds VM-60 and VM-70. VM-60 appears to be causing very high CPU and memory usage, which is causing contention with time sharing during disk access, thus creating high disk latency. But the disk IOPS are still very low.
Deduce that VM-60 is probably not sized right for the application that is driving it so hard. This is probably what's causing the disk latency issue, so the chances of a disk issue are slim.
4.8 VM PERFORMANCE
VM Performance helps you troubleshoot the same scenarios. Here you can understand what's going on across the whole environment.
Select Virtual Machine Performance.
Sort the Top Disk Latency column in descending order so the largest latency rises to the top. In this case, VM-61 is chewing up a lot of memory and a lot of CPU time but using low disk IOPS. The VM appears to be causing the latency issues.
Select VM-61. You can open a microview and see the VMDK performance as well.
Add a chart microview.
You can also break it out by volume performance and datastore performance, giving you a more holistic picture of the environment and helping you troubleshoot to resolution.
The takeaway is that you can troubleshoot performance issues from many different angles and go in many different directions to quickly narrow down the problem.
4.9 APPLICATION AND HOST PERFORMANCE
You can add your applications and hosts to any of these performance views to help you understand how performance is affecting your applications. That is important to the business customer. You can drill down and understand where the performance issue is, with visibility from the application all the way to the disks.
Scroll down to ESX1.
Use the horizontal slide bars in the main view and microviews to see performance info.
OnCommand Insight shows you performance from the host perspective all the way back to the storage, but remember it does not have agents on the host, so it cannot show you the details of performance on the host itself.
Review questions:
What is the value of Analyze Performance?
What are the areas where we can view performance metrics under Analyze Performance?
5 PLANNING TOOLS
5.1 TASK AND ACTION PLANNING AND VALIDATION
OnCommand Insight has two planning tools to help you plan, validate, and monitor changes in your environment. One is a change management tool, and the other is a migration tool for switches only.
The change management tool (or "what-if") helps you create tasks, and actions within those tasks, using a wizard. It helps you logically configure the changes that you need to make, test and validate those changes before you make them, and monitor the progress of the changes as you make them. This significantly reduces your risk when making changes, because you can pretest them before you make any actual changes in your environment.
NOTE: Remember, OnCommand Insight is a READ-ONLY tool, so it does not perform any active tasks. Use it in the planning, validating, and execution monitoring of your change management.
Select the Planning menu.
Select Plans to access the tool.
Select the task ID oadmin 01082007 - Replace HBA Clearcase1.
Notice the Actions list for the task. These are generated by you to help you logically and accurately list out the tasks.
To add more actions, simply right-click in the action area and select "Add Action."
In the new action window, scroll down and select the action you want to perform. You can add a description and other parameters, then select OK.
Then you can pre-validate the actions to ensure you know the results of each action BEFORE you actually perform the task. To do this, right-click the task and select Validate Task.
As you see below, OnCommand Insight validates each action against the current configuration in your environment to show what has been completed correctly (GREEN CHECKMARK), what is not completed (BLANK BOX), and what is not completed correctly (RED X).
When you build the action list, OnCommand Insight automatically compares your planned changes to your existing environment and anticipates any future violations that could occur if you made these changes without correcting planned actions, OR violations that already exist in your environment.
Once you complete your list of action items, you can right-click and validate the actions as many times as you want until they are completed. OnCommand Insight validates every one of these actions. It shows you whether each action is complete, done incorrectly, or not completed at all. It gives you a preview of potential issues before you make the changes, thus lowering your risk.
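To make the three validation states concrete, here is a toy Python sketch of the idea. This is an illustration of the concept only, not Insight's internals or API; the action names and dictionaries are hypothetical:

```python
def validate(planned_actions, environment):
    """Compare each planned action's expected result to the live config.

    Returns one of three states per action, mirroring the task view:
    GREEN CHECKMARK (completed correctly), BLANK BOX (not completed),
    RED X (completed incorrectly).
    """
    results = {}
    for action, expected in planned_actions.items():
        actual = environment.get(action)
        if actual is None:
            results[action] = "BLANK BOX"
        elif actual == expected:
            results[action] = "GREEN CHECKMARK"
        else:
            results[action] = "RED X"
    return results

# Hypothetical task: one action done right, one done wrong.
planned = {"replace HBA": "new-hba-wwn", "rezone switch": "zone-42"}
live = {"replace HBA": "new-hba-wwn", "rezone switch": "zone-41"}
print(validate(planned, live))
```

Re-running the check as the task progresses is exactly the "validate as many times as you want" workflow described above.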
5.2 SWITCH MIGRATION TOOL
The migration tool provides you with instantaneous visibility into all of the environment and business entities that will be affected by a migration to new or updated switches. Say you want to just update the firmware on a switch. What if it goes down in the middle of the upgrade? What does it affect in your environment? Knowing this ahead of time can reduce your risk by giving you the complete picture of who and what will be affected by the interruption.
The migration tool allows you to tell OnCommand Insight which switches you want to upgrade or replace. Because OnCommand Insight knows all the hosts, storage arrays, volumes, business units, and applications that are affected by this change, it can provide you with the current violations as well as the future violations that will occur when the switches are pulled out. This enables you to validate the total impact of the changes you want to make BEFORE you make them, so you can reduce your risk by fixing issues before they occur.
NOTE: Remember, OnCommand Insight is a READ-ONLY tool, so it does not perform any migration tasks. Use it in the planning and execution monitoring of your migration.
Under the Planning menu, select Migrations. This shows you the migration tasks already created and the existing impact of proposed changes on your business entities.
To add a new task, right-click in the task area and select Add Task.
Complete the task details above and click Next to select the switch(es) to migrate.
Select the switches to be updated or replaced and click Finish.
Select the new task in the main screen and use the microviews to view affected paths, impact, and quality assurance.
Using this information, you can speed up switch migrations because it cuts the due-diligence time, and it lowers your risk because you know the impacts before you take any actions.
6 DATA WAREHOUSE
6.1 INTRODUCTION AND OVERVIEW
Let's introduce the data warehouse. We'll talk about the Datamarts and navigation, then we'll go into the reports, and we'll finish by showing you how to create ad-hoc reports using Query Studio.
The data warehouse is made up of several Datamarts. Datamarts are sets of data that relate to each other.
Open a browser and go to http://localhost:8080/reporting.
Log on using admin/admin123.
If you receive this page, uncheck "Show this page…" and select My Home.
Data warehouse (DWH) Home Page: Public Folders.
The data warehouse has several built-in Datamarts throughout. Above you see the three primary Datamarts, called the Chargeback Datamart, the Inventory Datamart, and the Storage Efficiency Datamart. Additionally, we have two folders which contain other Datamarts for Capacity and Performance.
Select the Capacity 6.3 folder.
As you can see, there are other capacity-related Datamarts, including the Internal Volume, Volume, Storage and Storage Pool, and VM Capacity Datamarts. Datamarts provide you with easy-to-use data elements related to those specific categories, making it easier for you to use the existing reports and, more importantly, helping you create your own custom reports using the drag-and-drop technology we'll show later in this lab.
Select the Storage Capacity Datamart.
There are four folders located within EVERY Datamart. Most built-in reports are in the Reports folder. Any custom reports you create MUST be saved in the Customer Report or Customer Dashboard folders in order to survive upgrades.
Select Dashboards (notice the BREADCRUMBS to help you navigate).
Which dashboards are located in the folder?
6.2 PLAN - CAPACITY FORECAST DASHBOARD
The data warehouse has over 200 built-in dashboards and reports. Let's take a look at a few.
The capacity forecast dashboard provides a history of how storage has been used, as well as trends and forecasts into the future. It shows by data center and by tier.
Select the Capacity Dashboard. This may take a bit of time to paint, so be patient.
The capacity forecast dashboard provides you with trending and forecasting of your capacity across your entire environment. NOTE: your data in the picture may vary depending on the demo DB you are using and the date (because it's a trending chart).
While we are at it, let's also stage the tiering dashboard in a new window by holding the Shift key and selecting the Tiering Dashboard, so we can discuss it as well in a few minutes.
When it first opens, you see in the upper left the Capacity Consumption Forecast report by Datacenter and Tiers. The initial view shows how much storage is left in each datacenter, by tier, before it reaches 80% of capacity (adjustable by the user). The graph on the right depicts the usage trending and forecasting over time. The "Reset Selection" button resets the graphic to show storage trending across the entire enterprise.
Select the Tokyo/Gold-Fast block on the matrix. Notice the graph at the right changes to reflect the storage consumption trending and forecasting for that tier at that datacenter.
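As a rough illustration of what a trend-and-threshold forecast like this does, here is a toy linear fit in Python. This is only a sketch of the concept with hypothetical monthly samples, not Insight's actual forecasting model:

```python
def months_until_threshold(usage_gb, capacity_gb, threshold=0.80):
    """Fit a straight line to monthly usage samples and estimate how many
    months until usage crosses `threshold` (e.g. 80%) of capacity."""
    n = len(usage_gb)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(usage_gb) / n
    # Least-squares slope: GB of growth per month.
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, usage_gb)) / \
            sum((x - mean_x) ** 2 for x in xs)
    if slope <= 0:
        return None  # usage flat or shrinking; threshold never reached
    target = threshold * capacity_gb
    return max((target - usage_gb[-1]) / slope, 0.0)

history = [100, 110, 120, 130]  # hypothetical monthly used GB
print(months_until_threshold(history, capacity_gb=200))  # 3.0 months to 80% of 200 GB
```

The dashboard's matrix is essentially this calculation run per datacenter/tier cell, with the user-adjustable threshold defaulting to 80%.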
Reset the Storage Capacity Trend chart by clicking Reset Selection next to the matrix chart. The chart on the right will then show the trending and forecasting for the entire enterprise.
Scroll down the dashboard to view the list of reports on the right side. Each of the dashboards has a list of related reports on the lower right-hand side. You can select from any number of different reports to provide the detailed information that you need.
The dashboard also contains some dial graphics showing you storage consumption and capacity in your enterprise and in each datacenter.
Continuing down the left side of the dashboard, these charts show you business-level storage consumption by business entities. Here we can drill down to see usage by Tenant, Line of Business, Business Unit, and Project.
Right-click in this graphic and you can drill down to view storage usage by line of business; drill again to business unit and then project.
As you can see, you get really detailed information on consumption by your business entities, from Tenant, LOB, Business Unit, and Project down to Application, in a very quick form.
6.3 TIER DASHBOARD
Let's take a look at the Tier dashboard that we opened up a few minutes ago by selecting it from the tabs at the bottom of your Windows screen.
Note: Your data may vary depending on the database used for this demo.
This dashboard gives us a different perspective on how storage is growing and how it is being used. As you see, it looks like the gold tier has remained relatively stable over the past few months, while gold-fast storage, which is more expensive, has grown considerably over the past couple of months. This tells you how your tiering initiatives are progressing. Bronze, which has hardly grown at all, could be an indication that we're spending too much money on storage. You might want to review your storage usage using OnCommand Insight to see how the storage is being consumed, and by whom.
Scroll down. Let's look a little closer. OnCommand Insight shows storage usage by business units, applications, and tier. This enables you to understand how storage is being used. You can also view it by data center, tier, and business entity.
As we did in the last report, you can right-click and drill down to look at consumption by tenant, line of business, business unit, project, and application. You can understand how your data is being consumed at multiple levels and from multiple aspects.
Select the "Return" icon at the top right of the Tier Dashboard to return to the folder.
There is a new Storage Tier report located in the Storage and Storage Pool Datamart. Let's take a quick look at it.
Use the breadcrumbs to navigate back to the Capacity 6.3 folder.
Then select the Storage and Storage Pool Capacity Datamart and the Reports folder.
Next, select the Storage Capacity by Tier report to view the report below. This report shows your capacity by tier and how it trends over time. It also provides detail and summary sections at the bottom showing each array's tiers, how much capacity is used, and the percentage (lots of information on a single report).
6.4 ACCOUNTABILITY AND COST AWARENESS
The standard data warehouse chargeback reports are more about accountability than full chargeback. We'll show you this now. We'll also show you how to create your own powerful "custom" chargeback/showback reports using Business Insight Advanced later in this lab.
Select Public Folders in the breadcrumbs at the top left of the data warehouse window.
Select the Chargeback Datamart.
In the Chargeback Datamart, select the Reports folder to access various reports that show capacity and accountability.
Select Capacity Accountability by Business Entity and Service Level Detail. Here you have the option to customize this report to your needs by selecting service levels, resource types, applications, and host and storage names. You also have the option of selecting the business entity, using the dropdown to select any or all of the business entities and projects.
Select all in each category to give you a good representation of the in-depth reporting.
Then click Finish.
The report provides a very detailed view of capacity utilization: the business entity, the application, the host it's running on, the storage array, the volume, and the actual provisioned and used storage. The report is grouped by business unit as well as application; this gives you a good representation of who's using what storage.
Note the scroll bar for scrolling on page 1; you can also use the Page Up/Page Down links at the bottom to go to page 2, etc.
Select the Return icon in the upper right to return to the folder of reports.
6.5 UNCHARGED STORAGE
You can also generate reports that help you understand what storage is NOT being accounted for.
Select "Capacity Accountability by Uncharged Capacity per Internal Volume." This provides you with a complete listing, by array and volume, of how much storage is not being charged or accounted for.
You get FULL accountability of which storage is being accounted for and which storage is NOT, across the entire enterprise, regardless of storage vendor.
6.6 IOPS VS CAPACITY REPORTING IN THE DATA WAREHOUSE
Let's look at performance versus capacity, and orphaned storage by last access. This adds another dimension to how your storage is being used.
Open the Performance Datamart (hint: use the breadcrumbs to select Public Folders and then select the Performance Datamart).
Select the Internal Volume Daily Performance folder. This provides a really good pictorial view of how your storage is being used.
Select Reports and select "Allocated Used Internal Volume Count by IOPS Ranges." This provides a capacity versus IOPS report, which is very interesting.
Select the Last Year time period.
Select all storage models and tiers and click Finish.
Selecting all arrays and all tiers gives you a full view of how your storage is being used (or not being used…).
Looking at the results, remember this is storage accessed over the past year. The resulting report shows you all the storage that has (or has not) been accessed over the past year.
As you see from the first bar, there are over 7,300 volumes that have not been accessed in the past year. If we look at it in terms of size, over 3.4 PB has had zero access in the past year. Note this is actual customer data, but the names have been sanitized.
You can see how impactful this is. There is over 3.4 PB of storage that has had zero use for a year. This information enables you to start making business decisions on the storage and to better understand how it's being used, so you can reclaim and repurpose some of that storage. (Talk about ROI!)
6.7 DIGGING INTO THE DETAILS
These charts are really nice, but you need the details to effectively work on identification and recovery. OK, let's go look at the underlying details.
Go back to the Volume Daily Performance 6.3 folder and drill down to Reports (hint: it's in the Performance Datamart).
Select the Array Performance report. This gives you a complete breakdown of the performance for all storage, from the arrays all the way down to the volumes.
Select one year and set the IOPS parameter you want to filter on (I usually start at the default).
This report starts with the Orphan Summary.
Page down to view the storage array summary.
As you see, this is pretty high level. It shows the total amount of raw and allocated capacity in each storage device vs. the total IOPS and the max IOPS actually used over the past year. This tells a very compelling story, but it's still high level.
Page down a few pages to reach the bottom of this section. You see a glossary of terms explaining the column headings.
Now continue to page down to the Host tables. These show you the hostname, the raw and allocated capacity by host, and the IOPS accessed over the past year. This is more detail than the Storage tables above.
Page down past the host tables and you get the orphaned-volumes perspective. Here is a great deal of detail that you can use. These are all the volumes that have not been accessed in a full year. It shows you the array name, volume capacities, and hostname, as well as the applications and tiers, for everything that has not been accessed in the last year.
Page down to the "Volume by IOPS" tables (they may be several pages down). These show you the storage array, volume capacity, host, application, tier, and the max and total IOPS. So we can say it's a pretty well-rounded report that shows you actual usage (or lack thereof) so you can go reclaim the storage that is not used.
6.8 VM CAPACITY REPORTING
There are several different reports in the VM Capacity Datamart.
Navigate to the VM Capacity 6.3 Datamart.
As you see, we have several reports built in here already.
Select VM Capacity 6.3 and then navigate into the Reports folder.
Select VM Capacity Summary.
Select all so we see the VM capacity across the entire enterprise (spanning multiple vCenters).
The results show all the VMs: their capacity, the datastore, the actual capacity, the VM names, the provisioned storage, and the commit ratio of each VM across your entire environment. NOTE: I paged down to the bottom so you can see the total storage and commitment across your whole enterprise, plus a glossary of terms.
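The commit ratio shown here is, in essence, provisioned capacity versus actual capacity. A hedged Python sketch of that arithmetic (the exact definition Insight uses may differ; the names and numbers here are illustrative):

```python
def commit_ratio(provisioned_gb_per_vm, datastore_capacity_gb):
    """Total VM provisioned capacity divided by the datastore's actual capacity.

    A ratio above 1.0 suggests the datastore is over-committed, as happens
    with thin provisioning.
    """
    return sum(provisioned_gb_per_vm) / datastore_capacity_gb

# Three hypothetical VMs on a 400 GB datastore.
print(commit_ratio([100, 200, 300], 400))  # 1.5 -> over-committed
```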
Select the "return" button in the upper right corner of the report (it looks like a left-turn arrow).
Next, select the Inactive VMs report to show VMs that have not been accessed in a defined period of time (default 60 days).
Set the time threshold and click Finish.
This is an excellent report showing which VMs are powered off and how long they have been powered off, as well as how much capacity each one is holding that nobody else can use. It gives you all the details, including the datacenter, VM OS, ESX host, cluster, and VMDK, and how long it's been powered off. Armed with this information, you can go recover these VMs and reclaim the storage.
7 CREATE AD-HOC REPORT
Let's show you how easy it is to create custom reports in the data warehouse.
7.1 HOW TO CREATE A CUSTOM SHOWBACK/CHARGEBACK REPORT USING BUSINESS INSIGHT ADVANCED
Below is a great example of the custom chargeback or showback report that you will create. It shows usage by business entity and application, including the variable cost of each VM based on configuration, fixed overhead, and storage usage.
STEPS TO CREATE THIS REPORT
Watch a video on how to create this report. Note: You need a user name and password for this community. To obtain them, click the Become a Member link.
The OnCommand Insight Reporting Portal is accessed through http://<reporting-server>:8080/reporting.
Enter your user name and password credentials.
From the Welcome page, select My Home.
From the Launch menu (at the top right corner of the OnCommand Insight Reporting portal), select Business Insight Advanced.
From the list of all packages that appears, click the Capacity <version> folder and then click VM Capacity <version>.
Create a new report by selecting New from the dropdown in the upper left corner, or Create New if you are on the Business Insight Advanced landing page.
From the pre-defined report layouts in the New pop-up, choose List and click OK.
In the lower right pane, select the Source tab and expand the Advanced Data Mart from the VM Capacity package.
From the Advanced Data Mart, expand Business Entity Hierarchy and Business Entity, and drag Tenant onto the report work area.
Collapse the Advanced Data Mart and expand the Simple Data Mart.
From the Simple Data Mart, drag Application onto the report work area to the right of the Tenant column. (TIP: Make sure you place it on the blinking gray BAR to the right of the previous column, or it will give you an error.)
Now we are going to drag multiple columns to the palette to save time building the report.
We will be reporting on the total number of processors (cores) and the memory configured for each VM, so let's grab the following elements from the VM Dimension under the Advanced Data Mart.
From the Advanced Data Mart, expand VM Dimension.
Select the next columns IN THE FOLLOWING ORDER.
From Advanced Data Mart > VM Dimension, hold the Control key and select the following columns (in order):
o VM Name
o Processors
o Memory
Click and drag VM Name onto the report work area to the right of the Application column. NOTE: All the columns should follow in the order you selected them, similar to the screenshot below (your data will differ, but the columns will be the same).
Now let's bring capacity information onto the report.
From the Simple Data Mart, hold the Control key and select the following columns (in order):
o Tier
o Tier Cost
o Provisioned Capacity (GB)
Click and drag the Tier column onto the report work area to the right of the Application column. NOTE: All the columns should follow in the order you selected them, similar to the screenshot below (your data will differ, but the columns will be the same).
To create a summary of cost per GB, hold the Control key and select Tier Cost and Provisioned Capacity (GB).
Then right-click the Provisioned Capacity column, select Calculate, and select the multiplication calculation.
Business Insight Advanced creates a new column for you, completes the calculations, and puts it in the report.
Next, let's format and re-title the column.
Right-click on the new column header and select Show Properties.
In the lower right corner, scroll down to the bottom of the properties box and select the ellipsis on the Data Item Name box. Change the name to Storage Cost and click OK.
Note the column heading is now Storage Cost.
Now select one of the numeric values in that column and select the Data Format ellipsis from the properties box in the lower right corner.
From the Data Format dialog box, select Currency from the Format Type dropdown.
As you see from the properties dialog box, there are lots of options you can set to format the currency numbers in this column. The default is USD, so let's just click OK to accept the default. You will see the column reformat to USD.
Here is our current report. Let's filter out storage that is NOT being charged.
Select any BLANK cell in the Tier Cost column and click the Filter icon in the top toolbar.
Select Exclude Null.
Notice all the rows that had NO cost associated with those tiers are removed, leaving you with only the storage that has charges associated with it. (TIP: in another report you can reverse the logic and show only the storage that is NOT being charged.)
You can also format the Tier Cost column as USD currency if you want.
OK, that was easy, but not complete. Let's add other cost factors into your chargeback report: the cost of VM service levels by configuration, and the fixed overhead costs used by each application.
ADDING VARIABLE COSTS PER VM TO YOUR CHARGEBACK REPORT
Let's say the customer wants to charge per VM based on the number of CPUs and the memory it's configured with. To do that, we first need to create a VM Service Level based on the number of CPUs and memory configured for each VM, and then allocate a cost per service level.
To create a VM Service Level, we are going to drop in a small conditional expression that builds the service levels per VM. This is an easy example of the flexibility of Business Insight Advanced in creating reports. (DON'T panic: you can skip the conditional expression and just put a fixed cost on each VM if you want; see the Overhead example later on. But humor me here in this lab.)
Select the Tier column to mark where we want to insert the new columns.
Select the Toolbox tab in the lower right corner and double-click the Query Calculation icon.
In the Create Calculation dialog box, name the column VM Service Level, select Other Expression, and click OK.
In the Data Item Expression dialog box, copy and paste the following VM Service Level conditional expression into the Expression Definition box and select OK. (Note: if you are remoted into the OnCommand Insight server, you may have to create a text document on the OnCommand server desktop to paste this into before pasting it into the Expression box.)
Below is an example of the conditional expression that gives you the if-else condition for the VM Service Level:
IF ([Processors] = 2 AND [Memory] < 2049)
THEN ('Bronze')
ELSE (IF ([Processors] = 2 AND [Memory] < 4097)
THEN ('Bronze_Platinum')
ELSE IF ([Processors] = 4 AND [Memory] < 8193)
THEN ('Silver')
ELSE IF ([Processors] = 4 AND [Memory] > 8193)
THEN ('Silver_Platinum')
ELSE IF ([Processors] = 6 AND [Memory] > 8191)
THEN ('Gold')
ELSE IF ([Processors] = 8 AND [Memory] > 16383)
THEN ('Gold_Platinum')
ELSE ('tbd'))
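To see how the expression buckets VMs, here is a hypothetical Python re-implementation of the same if-else logic, for illustration only (the report itself uses the Cognos expression above):

```python
# Hypothetical re-implementation of the lab's VM Service Level expression.
def vm_service_level(processors: int, memory_mb: int) -> str:
    """Map a VM's CPU count and memory (MB) to a service level."""
    if processors == 2 and memory_mb < 2049:
        return "Bronze"
    if processors == 2 and memory_mb < 4097:
        return "Bronze_Platinum"
    if processors == 4 and memory_mb < 8193:
        return "Silver"
    if processors == 4 and memory_mb > 8193:
        return "Silver_Platinum"
    if processors == 6 and memory_mb > 8191:
        return "Gold"
    if processors == 8 and memory_mb > 16383:
        return "Gold_Platinum"
    # Anything the conditions above miss (e.g. exactly 8193 MB on 4 CPUs,
    # or 1 CPU) falls through to 'tbd', just as in the expression.
    return "tbd"
```

This also makes the expression's gaps visible: the boundary cases that match no condition all land in "tbd".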
Business Insight Advanced will validate the conditional expression (nice to know if you got it right), create the column called VM Service Level, and populate it based on the query. (If you get an error, your conditional expression probably has a syntax or other error.)
You will see a new column added called VM Service Level with the various Service Levels for each VM, based on the number of CPUs and the memory each has. (At this point there may be duplicates in the list, but not to worry; we are not finished formatting or grouping the report.)
Next, let's add a column that calculates the cost per VM based on the Service Levels we just established.
Select the Toolbox tab at the lower right corner and double-click the Query Calculation ICON.
In the Create Calculation dialog box, name the column Cost Per VM, select Other Expression, and click OK.
In the Data Item Expression dialog box, paste the conditional expression for Cost per VM (below) into the Expression Definition box and select OK.
Example of the conditional expression for Cost per VM:
IF ([VM Service Level] = 'Bronze') THEN (10)
ELSE (IF ([VM Service Level] = 'Bronze_Platinum') THEN (15)
ELSE IF ([VM Service Level] = 'Silver') THEN (20)
ELSE IF ([VM Service Level] = 'Silver_Platinum') THEN (25)
ELSE IF ([VM Service Level] = 'Gold') THEN (40)
ELSE IF ([VM Service Level] = 'Gold_Platinum') THEN (55)
ELSE (30))
You will see a new column added called Cost Per VM, with variable costs for each VM based on its Service Level.
Next, format the data in the Cost per VM column as USD currency as you did above.
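Since the Cost per VM expression is just a lookup with a default, it can be sketched as a small table. This Python version mirrors the lab's example rates (illustration only; the report uses the Cognos expression):

```python
# Sketch of the Cost Per VM lookup; rates mirror the lab's example expression.
VM_RATES = {
    "Bronze": 10, "Bronze_Platinum": 15,
    "Silver": 20, "Silver_Platinum": 25,
    "Gold": 40, "Gold_Platinum": 55,
}

def cost_per_vm(service_level: str) -> int:
    # The trailing ELSE (30) in the expression becomes the dict default.
    return VM_RATES.get(service_level, 30)
```

Keeping the rates in one table makes it easy to change pricing without touching the branching logic.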
ADDING FIXED OVERHEAD COSTS TO YOUR CHARGEBACK REPORT
Let's say the customer has determined that the total cost for overhead (including items like heat/AC, floor space, power, rent, operations personnel, helpdesk, etc.) is $24 per VM. Let's create a column called Cost of Overhead and apply this fixed cost. (Note: you can do this for any fixed costs as well, rather than use SQL.)
Select the Toolbox tab at the lower right corner and double-click the Query Calculation ICON as above.
In the Create Calculation dialog box, name the column Cost of Overhead, select Other Expression, and click OK.
In the Data Item Expression dialog box, enter a cost of 24 in the Expression Definition box and select OK.
You will see a new column added called Cost of Overhead with 24 for each VM. (Note: at this point there may be duplicates in the list, but not to worry; we are not finished formatting or grouping the report.)
Next, format the data in the Cost of Overhead column as USD currency as you did above. Then drag the column header and drop it to the right of the Storage Cost column, as shown below.
Subtotaling, naming, and saving the report
Now that we have a cost per VM, overhead, and the cost of storage usage by tenant, application, and VM, let's sum the total costs and finish formatting the report by tenant and application.
Hold the Control key down and select a numeric cell in each of the Cost per VM, Storage Costs, and Cost of Overhead columns. Right-click one of the numeric cells, select Calculate, and then the add function for the three columns.
This will create a new column called "Cost per VM + Storage Costs + Cost of Overhead" and calculate each row.
Now format the column for USD currency and retitle the column "Total Cost of Services".
Name the report "Total Storage, VM, and Overhead Cost by Tenant and Application Chargeback (Showback)" by double-clicking the title area.
Now save it to the Customer Report folder using the same name.
FORMATTING AND GROUPING THE REPORT BY APPLICATION AND TENANT
We are not done yet. Now we need to format the report by grouping, subtotaling, and totaling by tenant and application.
Hold the Control key down and select the Cost per VM, Provisioned Capacity, Storage Costs, Cost of Overhead, and Total Cost of Services columns.
Select the Total ICON from the Summary dropdown ICON.
If you page down to the bottom of the report, you will see total columns. We'll clean up the summary rows in a minute.
Let's group the report by Tenant and Application.
Hold the Control key down and select the Tenant and Application columns.
Select the Grouping ICON from the top toolbar.
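Conceptually, the grouping and summary steps above amount to a group-by with subtotals and a grand total. Here is a minimal Python sketch of that aggregation; the tenants, applications, and cost figures are invented for illustration (the actual report does this inside Cognos):

```python
from collections import defaultdict

# Invented rows standing in for the report's per-VM "Total Cost of Services".
rows = [
    {"tenant": "ACME",    "app": "ERP",  "total_cost": 74.0},
    {"tenant": "ACME",    "app": "ERP",  "total_cost": 39.0},
    {"tenant": "ACME",    "app": "Mail", "total_cost": 58.0},
    {"tenant": "Initech", "app": "CRM",  "total_cost": 91.0},
]

# Subtotal per (tenant, application) group, like the report's summary rows.
subtotals = defaultdict(float)
for r in rows:
    subtotals[(r["tenant"], r["app"])] += r["total_cost"]

# Grand total across all groups, like the report's TOTAL row.
grand_total = sum(subtotals.values())
```

Grouping first and then summing the groups gives the same grand total as summing every row, which is why deleting the redundant summary rows later is safe.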
CLEANING UP THE REPORT AND RUNNING IT
To clean up the report, right-click and delete the summary ROWS (not columns).
Then go to the bottom of the report, hold the Control key and select both summary rows, then right-click and delete them. (Leave the TOTAL rows.)
Save the report.
Now let's run the report to see how it looks.
Select the Run ICON from the toolbar and run the report as HTML. (Note the other formats you can run it in if you want…)
The report will display like this in its final format. I've paged down in the report below to show you subtotals; you can page to the bottom and see the totals by company and the total of all resources charged.
These reports are extremely flexible to do what you need. Notice the drill-down link in the Tenant column (pictured above in the red circle). If you click on the LINK, you will drill down from Tenant to Line of Business, then to Business Unit, etc. If you right-click on the link, you can drill up as well.
You can now schedule this report to run and be distributed in various formats, like any other OnCommand Insight Data Warehouse report.
Remember, now that you have created this report, every time you run it, it will provide the latest usage information. You can automate this report by scheduling it to run and email it to recipients, etc. Lots of flexibility…
72 OTHER OPTIONS FOR AD-HOC REPORTS USING QUERY STUDIO
You can also create simple ad-hoc reports by using Query Studio. A very simple example is shown here.
Log on to the data warehouse using Admin/admin123 (you must be logged on as Admin to use Query Studio).
From Public Folders, select the Chargeback Datamart.
Select the Launch menu in the upper right corner of the view and select Query Studio.
The Datamart is split into a "simple Datamart" and an "advanced Datamart". The simple DM contains the elements that most users use for reports; the advanced DM contains all the facts and dimensions for all the elements. At this point we'll create this report using the simple DM to show you how easy it is.
Expand the Simple DM and do the following:
Click and drag Business Unit to the palette.
Click and drag the Application element to the palette. You see the applications line up with their proper business units.
Click and drag Tier over to the palette to organize the storage usage by tier.
Click and drag "Provisioned raw by GB". (You can select megabytes or terabytes as well as gigabytes; I've selected GB because this is from a volume and application perspective.)
To calculate cost, we need to add the "Tier cost" to the report.
Click and drag the "Tier cost" element over and place it between the Provisioned Raw and Tier columns.
To filter out any storage without a tier cost associated, right-click the heading of the Tier Cost column and select Filter (see below for reference).
o Select "Show only the following" (default)
o Select "Missing values" to expand it
o Select "Leave out missing values"
o Select OK
See the results below.
Now let's calculate the total cost of usage by GB per application.
Hold the Control key and highlight the "Provisioned Capacity" and Tier Cost columns until they turn yellow.
Select the green Calculation ICON at the top of the edit icons above, or right-click the columns and select "Calculate".
In the calculation window, select multiplication, title the new column "Cost for Storage", and click Insert. It creates the new column and completes the calculation.
To format the column, right-click on the new column and select Format Data.
Select currency, the number of decimal places (usually 0), and the 1000s separator, and click OK. See how the column is formatted now.
Double-click the "Title" on the report and re-title the report "Chargeback by Application and BU".
Now you don't really need the Tier Cost column, so you can delete it by right-clicking on the column and selecting Delete.
This is a good raw report, but now let's make it more useful.
To group storage cost by Business Unit and Application:
Select the Business Unit column (it turns yellow) and select the Group By ICON on the top line.
You see the report reformat itself into cost by application by business unit.
Click the "Save As" icon and save the report to the public folders.
Further editing
You can go back and further edit the report like this:
Let's filter out all the N/A values in the BU and Application columns. You have to do this one column at a time.
Right-click the BU column and select Filter.
In the filter dialog window, select "Do not show the following (NOT)" from the "Condition" dropdown.
Select N/A and click OK.
Do the same for the Application column.
Then save the report again.
As you see, you now have a better-quality report.
To exit Query Studio, click the "Return" icon at the top right corner of the screen.
8 SCHEDULING REPORTS FOR DISTRIBUTION
OK, now that the report is saved, let's schedule it for running and distribution. You can schedule all the built-in reports in OnCommand Insight.
Go to the chargeback report we just created (you should be looking at where you saved it…).
Select the schedule ICON on the right-hand side, where you can set the properties.
As you see on the right, you can schedule the start and finish dates.
You can send this report just one time by clicking Disable.
Set the schedule options for weekly, daily, monthly, etc. Schedule this report to run and be sent to yourself at 3 p.m. every Tuesday until Feb 1, 2012. As you can see, you can schedule biweekly, several times a week, or several times a day, or you can set it up by month, year, and even by trigger. As you see, lots of options.
There are a lot of options for the report format. The default format is HTML, but we can override that default by clicking and choosing from PDF, Excel, XML, CSV, etc.
For delivery, we can email it, save it, or print the report to a specific printer. You can send the report via e-mail to users, distribution lists, etc. We can include a link to the report or attach it directly to the email as well. NOTE: recipients must be able to log in to the OnCommand DWH to access the link.
When you are done, click OK and the schedule is set.
9 ENDING COMMENTS AND FEEDBACK
I hope this lab was of value to you. Your feedback is important to the quality of this lab document. Please provide feedback to Dave Collins at davec@netapp.com.
NetApp provides no representations or warranties regarding the accuracy, reliability, or serviceability of any information or recommendations provided in this publication, or with respect to any results that may be obtained by the use of the information or observance of any recommendations provided herein. The information in this document is distributed AS IS, and the use of this information or the implementation of any recommendations or techniques herein is a customer's responsibility and depends on the customer's ability to evaluate and integrate them into the customer's operational environment. This document and the information contained herein may be used solely in connection with the NetApp products discussed in this document.
© 2012 NetApp, Inc. All rights reserved. No portions of this document may be reproduced without prior written consent of NetApp, Inc. Specifications are subject to change without notice. NetApp, the NetApp logo, Go further, faster, xxx, and xxx are trademarks or registered trademarks of NetApp, Inc. in the United States and/or other countries. All other brands or products are trademarks or registered trademarks of their respective holders and should be treated as such. TR-XXX-XX
Select the Volumes tab and then the Internal Volumes tab. I see there are some relatively high top response times but still very low IOPS, which tells me other factors are affecting response time and the slowness of the application on VM-70.
Select Backend Volumes; we see the storage is virtualized, and we can see the performance on the backend volumes here. Here I see some possibly higher IOPS but still no glaring issues in performance.
To make sure I don't have a SAN problem, I select the Switch Performance tab. It shows an imbalance between the 2Gb HBAs on ESX1 (where VM-60 and VM-70 are) and a potential optimization or outage issue, but no gridlock.
Select the Hosts tab. This tab shows me that host ESX1 is the same host that holds VM-60 and VM-70. VM-60 appears to be causing very high CPU and memory usage, which is causing contention with time sharing during disk access, thus creating high disk latency. But the disk IOPS are still very low.
Deduce that VM-60 is probably not sized right for the application that is driving it so hard. This is probably what's causing the disk latency issue, so the chances of a disk issue are slim.
48 VM PERFORMANCE
VM Performance helps you troubleshoot the same scenarios. Here you can understand what's going on in the whole environment.
Select Virtual Machine Performance.
Sort the Top Disk Latency column in descending order so the largest latency rises to the top. In this case, VM-61 is chewing up a lot of memory and a lot of CPU time but using low disk IOPS. The VM appears to be causing the latency issues.
Select VM-61. You can open a micro view and see the VMDK performance as well.
Add a chart microview.
You can also break it out by volume performance and datastore performance, giving you a more holistic picture of the environment and helping you troubleshoot to resolution.
The takeaway is that you can troubleshoot performance issues from many different angles and go in many different directions to quickly narrow down the problem.
49 APPLICATION AND HOST PERFORMANCE
You can add your applications and hosts to any of these performance views to help you understand how your performance is affecting your applications. That is important to the business customer. You can drill down and understand where the performance issue is, with visibility from the application all the way to the disks.
Scroll down to ESX1.
Use the horizontal slide bars in the main and micro views to see the performance info.
OnCommand Insight shows you performance from the host perspective all the way back to the storage, but remember it does not have agents on the host, so it cannot show you the details of the performance on the host itself.
Review questions:
What is the value of Analyze Performance?
In what areas can we view performance metrics under Analyze Performance?
5 PLANNING TOOLS
51 TASK AND ACTION PLANNING AND VALIDATION
OnCommand Insight has two planning tools to help you plan, validate, and monitor changes in your environment. One is a change management tool and the other is a migration tool for switches only.
The change management tool (or "What-If") helps you create tasks and actions within those tasks using a wizard. It helps you logically configure changes that you need to make, test and validate those changes before you make them, and monitor the progress of changes as you make them. This significantly reduces your risk when making changes, because you can pretest them before you make any actual changes in your environment.
NOTE: Remember, OnCommand Insight is a READ-ONLY tool, so it does not perform any active tasks. Use it for the planning, validating, and execution monitoring of your change management.
Select the Planning menu.
Select Plans to access the tool.
Select the task ID oadmin 01082007 – Replace HBA Clearcase1.
Notice the Actions list for the task. These are generated by you to help you logically and accurately list out the tasks.
To add more actions, simply right-click in the action area and select "Add Action".
In the new action window, scroll down and select the action you want to perform. You can add a description and other parameters, then select OK.
Then you can pre-validate the actions to ensure you know the results of each action BEFORE you actually perform the task. To do this, right-click the task and select Validate Task.
As you see below, OnCommand Insight validates each action against the current configuration in your environment to show what has been completed correctly (GREEN CHECKMARK), what is not completed (BLANK BOX), and what is not completed correctly (RED X).
When you build the action list, OnCommand Insight automatically compares your planned changes to your existing environment and anticipates any future violations that could occur if you made these changes without correcting the planned actions, OR violations that already exist in your environment.
Once you finish creating your list of action items, you can right-click and validate the actions as many times as you want until they are completed. OnCommand Insight validates every one of these actions. It will show you whether the actions are complete, done wrong, or not completed at all. It gives you a preview of potential issues before you make the changes, thus lowering your risk.
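To make the validate-task idea concrete, here is a hypothetical Python sketch of the comparison it conceptually performs: each planned action is checked against the currently observed configuration and marked complete (green checkmark), pending (blank box), or error (red X). The function, action names, and observed-state dictionary are all invented for illustration; OnCommand Insight's actual validation logic is internal to the product.

```python
# Hypothetical sketch of plan-vs-environment validation (not OCI internals).
def validate(planned: dict, observed: dict) -> dict:
    """Compare each planned action's expected result to observed state."""
    statuses = {}
    for action, expected in planned.items():
        actual = observed.get(action)
        if actual is None:
            statuses[action] = "pending"    # blank box: not done yet
        elif actual == expected:
            statuses[action] = "complete"   # green checkmark
        else:
            statuses[action] = "error"      # red X: done, but wrong
    return statuses
```

Re-running the comparison after each real-world change is the programmatic analogue of re-validating the task as many times as you want.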
52 SWITCH MIGRATION TOOL
The migration tool provides you with instantaneous visibility into all of the environment and business entities that will be affected by a migration to new or updated switches. Say you want to just update the firmware on a switch. What if it goes down in the middle of the upgrade? What does it affect in your environment? Knowing this ahead of time can reduce your risk by giving you the complete picture of who and what will be affected by the interruption.
The migration tool allows you to tell OnCommand Insight which switches you want to upgrade or replace. Because OnCommand Insight knows all the hosts, storage arrays, volumes, business units, and applications that are affected by this change, it can provide you with the current violations as well as the future violations that will occur when the switches are pulled out. This enables you to validate the total impact of the changes you want to make BEFORE you make them, so you can reduce your risk by fixing issues before they occur.
NOTE: Remember, OnCommand Insight is a READ-ONLY tool, so it does not perform any migration tasks. Use it for the planning and execution monitoring of your migration.
Under the Planning menu, select Migrations. This shows you the migration tasks already created and the existing impact on your business entities of the proposed changes.
To add a new task, right-click in the task area and select Add Task.
Complete the task details above and click Next to select the switch(es) to migrate.
Select the switches to be updated or replaced and click Finish.
Select the new task in the main screen and use the microviews to see the affected paths, impact, and quality assurance views.
Using this information, you can speed up the time it takes to migrate switches, because it cuts the due diligence time and lowers your risk: you know the impacts before you take any actions.
6 DATA WAREHOUSE
61 INTRODUCTION AND OVERVIEW
Let's introduce you to the data warehouse. We'll talk about the Datamarts and navigation, then we'll go into the reports, and we'll finish by showing you how to create ad-hoc reports using Query Studio.
The data warehouse is made up of several Datamarts. Datamarts are sets of data that relate to each other.
Open a browser and go to http://localhost:8080/reporting
Log on using admin/admin123.
If you receive this page, uncheck "show this page…" and select My Home.
Data Warehouse (DWH) Home Page: Public Folders
The data warehouse has several built-in Datamarts. Above you see the 3 primary Datamarts, called the Chargeback Datamart, Inventory Datamart, and Storage Efficiency Datamart. Additionally, we have two folders which contain other Datamarts for Capacity and Performance.
Select the Capacity 6.3 folder.
As you can see, there are other capacity-related Datamarts, including the Internal Volume, Volume, Storage and Storage Pool, and VM Capacity Datamarts. These provide you with easy-to-use data elements related to those specific categories, making it easier for you to use the existing reports and, more importantly, helping you create your own custom reports using the drag-and-drop approach we'll show later in this lab.
Select the Storage Capacity Datamart.
There are 4 folders located within EVERY Datamart. Most built-in reports are in the Reports folder. Any custom reports you create MUST be saved in the Customer Report or Customer Dashboard folders in order to preserve them during upgrades.
Select Dashboards (notice the BREADCRUMBS to help you navigate).
Which dashboards are located in the folder?
62 PLAN - CAPACITY FORECAST DASHBOARD
The data warehouse has over 200 built-in dashboards and reports. Let's take a look at a few.
The Capacity Forecast dashboard provides a history of how storage has been used, as well as trends and forecasts into the future. It shows these by data center and by tier.
Select the Capacity Dashboard. This may take a bit of time to paint, so be patient.
The Capacity Forecast dashboard provides you with trending and forecasting of your capacity across your entire environment. NOTE: your data in the picture may vary depending on the demo DB you are using and the date (because it's a trending chart).
While we are at it, let's also stage the Tiering dashboard in a new window by holding the Shift key and selecting the Tiering Dashboard, so we can discuss it as well in a few minutes.
When it first opens you see, in the upper left, the Capacity Consumption Forecast report by datacenter and tier. The initial view shows how much storage is left in each datacenter by tier before it reaches 80% (adjustable by the user) of capacity. The graph on the right depicts the usage trending and forecasting over time. The "Reset Selection" button resets the graphic to show storage trending across the entire enterprise.
Select the Tokyo/Gold-Fast block on the matrix. Notice the graph at the right changes to reflect the storage consumption trending and forecasting for that tier at that datacenter.
Reset the Storage Capacity Trend chart by clicking Reset Selection next to the matrix chart. The chart on the right will then show the trending and forecasting for the entire enterprise.
Scroll down the dashboard to view the list of reports on the right side. Each of the dashboards has a list of related reports in the lower right. You can select from any number of different reports to get the detailed information that you need.
The dashboard also contains dial graphics showing you storage consumption and capacity in your enterprise and in each datacenter.
Continuing down the left side of the dashboard, these charts show you business-level storage consumption by business entities. Here we can drill down to see usage by tenant, Line of Business, Business Unit, and Project.
Right-click in this graphic and you can drill down to view storage usage by Line of Business, drill again to Business Unit, and then by Project.
As you can see, you get really detailed information on consumption by your business entities, from Tenant, LOB, Business Unit, and Project down to Application, very quickly.
63 TIER DASHBOARD
Let's take a look at the Tiers dashboard that we opened a few minutes ago by selecting it from the tabs at the bottom of your Windows screen.
Note: Your data may vary depending on the database used for this demo.
This dashboard gives us a different perspective on how storage is growing and how it's being used. As you see, it looks like the Gold tier has remained relatively stable over the past few months, while Gold-Fast storage, which is more expensive, has grown considerably over the past couple of months. This tells you how your tiering initiatives are progressing. Bronze, which has hardly grown at all, could be an indication that we're spending too much money on storage. You might want to review your storage usage using OnCommand Insight to see how the storage is being consumed, and by whom.
Scroll down. Let's look a little closer. OnCommand Insight shows storage usage by business unit, application, and tier. This enables you to understand how storage is being used. You can also view it by data center, tier, and business entity.
As we did in the last report, you can right-click to drill down and look at consumption by tenant, Line of Business, Business Unit, Project, and Application. You can understand how your data is being consumed at multiple levels and from multiple aspects.
Select the "Return" ICON at the top right of the Tier dashboard to return to the folder.
There is a new Storage Tier report located in the Storage and Storage Pool Datamart. Let's take a quick look at it.
Use the breadcrumbs to navigate back to the Capacity 6.3 folder.
Then select the Storage and Storage Pool Capacity Datamart and the Reports folder.
Next, select the Storage Capacity By Tier report to view the report below. This report shows your capacity by tier and how it trends over time. It also provides detail and summary tables at the bottom showing each array, its tiers, how much capacity is used, and the percentage (lots of information in a single report).
64 ACCOUNTABILITY AND COST AWARENESS
The standard data warehouse chargeback reports are more about accountability than pure chargeback. We'll show you this now. We'll also show you how to create your own powerful "custom" chargeback/showback reports using Business Insight Advanced later in this lab.
Select Public Folders in the breadcrumbs at the top left of the data warehouse window.
Select the Chargeback Datamart.
In the Chargeback Datamart, select the Reports folder to access various reports that show capacity and accountability.
Select Capacity Accountability by Business Entity and Service Level Detail. Here you have the option to customize this report to your needs by selecting service levels, resource types, applications, and host and storage names. You also have the option of selecting the business entity by using the dropdown to select any or all of the business entities and projects.
Select all in each category to give you a good representation of the in-depth reporting. Then click Finish.
The report provides a very detailed view of capacity utilization: the business entity, application, the host it's running on, the storage array, the volume, and the actual provisioned and used storage. This report is grouped by business unit as well as application; this gives you a good representation of who's using what storage.
Note the scroll bar for scrolling on page 1; you can also use the Page Up/Page Down links at the bottom to go to page 2, etc.
Select the Return icon in the upper right to return to the folder of reports.
65 UNCHARGED STORAGE
You can also generate reports that help you understand what storage is NOT being accounted for.
Select "Capacity Accountability by Uncharged Capacity per Internal Volume". This provides you with a complete listing, by array and volume, of how much storage is not being charged or accounted for.
You get FULL accountability of which storage is being accounted for and which storage is NOT, across the entire enterprise, regardless of storage vendor.
66 IOPS VS CAPACITY REPORTING IN THE DATA WAREHOUSE
Let's look at performance versus capacity, and at orphaned storage by last access. This adds another dimension to how your storage is being used.
Open the Performance Datamart (hint: use the breadcrumbs to select Public Folders and then select the Performance Datamart).
Select the Internal Volume Daily Performance folder. This provides a really good pictorial view of how your storage is being used.
Select Reports and select Allocated/Used Internal Volume Count by IOPS Ranges. This provides a capacity-versus-IOPS report, which is very interesting.
Select the Last Year time period.
Select all storage models and tiers and click Finish.
Selecting all arrays and all tiers gives you a full view of how your storage is being used (or not being used…).
Looking at the results, remember this is storage accessed over the past year. The resulting report shows you all the storage that has (or has not) been accessed over that period.
As you see from the first bar, there are over 7,300 volumes that have not been accessed in the past year. If we look at it in terms of size, over 3.4 PB has had zero access in the past year. Note: this is actual customer data, but the names have been sanitized.
You can see how impactful this is. There is over 3.4 PB of storage that has had zero use for a year. This information enables you to start making business decisions about the storage and to better understand how it's being used, so you can reclaim and re-purpose some of it. (Talk about ROI!)
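The reclamation math behind this view is simple: flag volumes with zero IOPS over the reporting period and total their capacity. A hedged Python sketch (the volume data is invented; the report derives the equivalent from the performance datamart):

```python
# Sketch: identify reclaim candidates (zero IOPS over the period) and total
# their capacity. Volume names and figures are invented for illustration.
volumes = [
    {"name": "vol01", "capacity_gb": 500, "max_iops_1y": 0},
    {"name": "vol02", "capacity_gb": 250, "max_iops_1y": 1200},
    {"name": "vol03", "capacity_gb": 800, "max_iops_1y": 0},
]

candidates = [v for v in volumes if v["max_iops_1y"] == 0]
reclaimable_gb = sum(v["capacity_gb"] for v in candidates)
```

Using max IOPS over the whole period (rather than an average) avoids misclassifying a volume that was idle most of the year but briefly active.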
67 DIGGING INTO THE DETAILS
These charts are really nice, but you need the details to effectively work on identification and recovery. OK, let's go look at the underlying details.
Go back to the Volume Daily Performance 6.3 folder and drill down to Reports (hint: it's in the Performance Datamart).
Select the Array Performance report. This gives you a complete breakdown of the performance for all storage, from the arrays all the way down to the volumes.
Select one year and set the IOPS parameter you want to filter on (I usually start at the default).
This report starts with the Orphan Summary.
Page down to view the storage array summary.
As you see, this is pretty high-level. It shows the total amount of raw and allocated capacity in each storage device versus the total IOPS and the max IOPS actually used over the past year. This tells a very compelling story, but it's still high level.
Page down a few pages to reach the bottom of this section. You will see a glossary of terms explaining the column headings.
Now continue to page down to the Host tables. These show you the hostname, the raw and allocated capacity by host, and the IOPS accessed over the past year. This is more detail than the Storage tables above.
Page down past the host tables and you get the orphaned-volumes perspective. Here is a great deal of detail that you can use. These are all the volumes that have not been accessed in a full year. It shows you the array name, volume capacities, and hostname, as well as the applications and tiers that have not been accessed in the last year.
Page down to the "Volume by IOPS" tables (may be several pages down). These show you the storage array, volume capacity, host, application, tier, and the max and total amount of IOPS. So we can say it's a pretty well-rounded report that shows you actual usage (or lack thereof) so you can go reclaim the storage that is not used.
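If you export this data, the reclamation math behind the report is easy to reproduce. Below is a hedged Python sketch (the field names and sample records are hypothetical, not OnCommand Insight's actual schema) that filters out volumes with zero IOPS over the period and totals the reclaimable capacity:

```python
# Illustrative data: per-volume totals as they might be exported from the
# Array Performance report. Names and numbers are made up for this sketch.
volumes = [
    {"array": "array-01", "volume": "vol_db01",  "capacity_gb": 500, "total_iops": 0},
    {"array": "array-01", "volume": "vol_web01", "capacity_gb": 250, "total_iops": 1_200_000},
    {"array": "array-02", "volume": "vol_old",   "capacity_gb": 800, "total_iops": 0},
]

# Reclaim candidates: volumes with zero I/O over the whole reporting period.
candidates = [v for v in volumes if v["total_iops"] == 0]
reclaimable_gb = sum(v["capacity_gb"] for v in candidates)

print(len(candidates), reclaimable_gb)  # 2 1300
```

The same filter is what the "Volume by IOPS" tables let you do visually: sort by total IOPS and everything at zero is a reclamation candidate.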
6.8 VM CAPACITY REPORTING
There are several different reports in the VM capacity Datamart
Navigate to the VM Capacity 6.3 Datamart
As you see we have several reports built-in here already
Select VM Capacity 6.3 and then navigate into the Reports folder
Select VM Capacity Summary
Select all so we see the VM capacity across the entire enterprise (spanning multiple vCenters).
The results show all the VMs, their capacity, the datastore, the actual capacity, the provisioned storage, and the commit ratio of each VM across your entire environment. NOTE: I paged down to the bottom so you can see the total storage and commitment across your whole enterprise, plus a glossary of terms.
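For reference, the commit ratio expresses the degree of thin-provisioning overcommitment. Here is a minimal sketch, assuming the ratio is provisioned capacity divided by actually used capacity; check the report's glossary for the product's exact definition:

```python
# Hedged sketch: commit (overcommitment) ratio for a VM or datastore,
# assumed here to be provisioned capacity / used capacity.
def commit_ratio(provisioned_gb: float, used_gb: float) -> float:
    if used_gb <= 0:
        raise ValueError("used capacity must be positive")
    return provisioned_gb / used_gb

# A VM provisioned 200 GB but consuming 80 GB is overcommitted 2.5:1.
print(commit_ratio(200.0, 80.0))  # 2.5
```

A ratio above 1.0 means more capacity has been promised than is physically consumed, which is exactly what this report lets you total across the enterprise.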
Select the "return" button in the upper right corner of the report (it looks like a left-turn arrow).
Next, select the Inactive VMs report to show VMs that have not been accessed in a defined period of time (default 60 days).
Set this time threshold and click Finish.
This is an excellent report showing you which VMs are powered off, how long they have been powered off, and how much capacity each one is holding that nobody else can use. It gives you all the details, including the datacenter, VM OS, ESX host, cluster, and VMDK, and how long it's been powered off. Armed with this information, you can go recover these VMs or reclaim their storage.
7 CREATE AD-HOC REPORT
Let's show you how easy it is to create custom reports in the data warehouse.
7.1 HOW TO CREATE A CUSTOM SHOWBACK/CHARGEBACK REPORT USING BUSINESS INSIGHT ADVANCED
Below is a great example of the custom chargeback or showback report that you will create. It shows usage by business entity and application, including variable cost per VM based on configuration, fixed overhead, and storage usage.
STEPS TO CREATE THIS REPORT
Watch a video on how to create this report. Note: You need a user name and password for this community. To obtain them, click the Become a Member link.
The OnCommand Insight Reporting Portal is accessed through http://<reporting-server>:8080/reporting
Enter User name and Password credentials.
From the Welcome page, select My home.
From the Launch menu (at the top right corner of the OnCommand Insight Reporting portal), select Business Insight Advanced.
From the list of all packages that appears, click on the Capacity <version> folder and then click on VM Capacity <version>.
Create a new report by selecting New from the dropdown in the upper left corner, or Create New if you are on the Business Insight Advanced landing page.
From the pre-defined report layouts in the New pop-up, choose List and click OK.
In the lower right pane, select the Source tab and expand Advanced Data Mart from the VM Capacity package.
From the Advanced Data Mart, expand Business Entity Hierarchy and Business Entity, then drag Tenant onto the report work area.
Collapse Advanced Data Mart and expand Simple Data Mart.
From Simple Data Mart, drag Application onto the report work area to the right of the Tenant column. (TIP: Make sure you place it on the blinking gray BAR to the right of the previous column, or it will give you an error.)
Now we are going to drag multiple columns to the palette to save time building the report.
We will be reporting on the total number of processors (cores) and the memory that is configured for each VM. So let's grab the following elements from the VM Dimension under the Advanced Data Mart.
From Advanced Data Mart expand VM Dimension
Select the next columns IN THE FOLLOWING ORDER:
From Advanced Data Mart > VM Dimension, hold the control key and select the following columns (in order):
o VM Name
o Processors
o Memory
Click and drag VM Name onto the report work area to the right of the Application column. NOTE: All the columns should follow in the order you selected them, similar to the screenshot below (your data will differ, but the columns will be the same).
Now let's bring capacity information onto the report.
From Simple Data Mart, hold the control key and select the following columns (in order):
o Tier
o Tier Cost
o Provisioned Capacity (GB)
Click and drag the Tier column onto the report work area to the right of the Application column. NOTE: All the columns should follow in the order you selected them, similar to the screenshot below (your data will differ, but the columns will be the same).
To create a summary of cost per GB, hold the control key and select Tier Cost and Provisioned Capacity (GB).
Then right-click the Provisioned Capacity column, select Calculate, and choose the multiplication calculation.
Business Insight Advanced has created a new column for you, completed the calculations, and put it in the report.
Next, let's format and re-title the column.
Right-click on the new column header and select Show Properties.
In the lower right corner, scroll down to the bottom of the properties box and select the ellipsis on the Data item name box. Change the name to Storage Cost and click OK.
Note the column heading is now Storage Cost.
Now select one of the numeric values in that column and select the Data Format ellipsis from the properties box in the lower right corner.
From the Data Format dialog box, select Currency from the Format type dropdown.
As you see from the Properties dialog box, there are lots of options you can set to format the currency numbers in this column. The default is USD, so let's just click OK to accept it. You will see the column reformat to USD.
Here is our current report. Let's filter out storage that is NOT being charged.
Select any BLANK cell in the Tier Cost column and click on the filter icon in the top toolbar.
Select Exclude Null.
Here is our current report. Notice all the rows that had NO cost associated with those tiers are removed, leaving you with only the storage that has charges associated with it. (TIP: in another report you can actually reverse the logic and show only storage that is NOT being charged as well…)
You can also format the Tier Cost column with USD currency if you want.
OK, that was easy, but it's not complete. Let's add other cost factors into your chargeback report: the cost of VM service levels by configuration, and fixed overhead costs used by each application.
ADDING VARIABLE COSTS PER VM TO YOUR CHARGEBACK REPORT
Let's say the customer wants to charge per VM based on the number of CPUs and the memory it's configured with. To do that, we first need to create a VM Service Level based on the number of CPUs and the memory configured for each VM, then allocate a cost per service level.
To create a VM Service Level, we are going to drop in a small conditional expression to build the service levels per VM. This is an easy example of the flexibility of Business Insight Advanced in creating reports. (DON'T panic: you can skip the conditional expression and just put a fixed cost on each VM if you want. See the overhead example later on… but humor me here in this lab.)
Select the Tier column to mark where we want to insert the new columns.
Select the Toolbox tab at the lower right corner and double-click the Query Calculation icon.
In the Create Calculation dialog box, name the column VM Service Level, select Other Expression, and click OK.
In the Data Item Expression dialog box, copy and paste the following VM Service Level conditional expression into the Expression Definition box and select OK. (Note: if you are remoted into the OnCommand Insight server, you may have to create a text document on the OnCommand server desktop to paste this into before pasting it into the Expression box.)
Below is an example of the conditional expression that gives you the if-else logic for VM Service Level:
IF ([Processors] = 2 AND [Memory] < 2049)
THEN ('Bronze')
ELSE (IF ([Processors] = 2 AND [Memory] < 4097)
THEN ('Bronze_Platinum')
ELSE (IF ([Processors] = 4 AND [Memory] < 8193)
THEN ('Silver')
ELSE (IF ([Processors] = 4 AND [Memory] > 8193)
THEN ('Silver_Platinum')
ELSE (IF ([Processors] = 6 AND [Memory] > 8191)
THEN ('Gold')
ELSE (IF ([Processors] = 8 AND [Memory] > 16383)
THEN ('Gold_Platinum')
ELSE ('tbd'))))))
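The thresholds in the conditional expression above can be sanity-checked outside of Cognos. Below is a minimal Python restatement of the same logic, assuming [Memory] is expressed in MB (the lab does not state the unit explicitly); it is a sketch for verifying the tiering rules, not part of the report itself:

```python
# Hedged Python mirror of the VM Service Level conditional expression.
# Assumption: memory values are in MB, matching thresholds like 2049 and 16383.
def vm_service_level(processors: int, memory_mb: int) -> str:
    if processors == 2 and memory_mb < 2049:
        return "Bronze"
    elif processors == 2 and memory_mb < 4097:
        return "Bronze_Platinum"
    elif processors == 4 and memory_mb < 8193:
        return "Silver"
    elif processors == 4 and memory_mb > 8193:
        return "Silver_Platinum"
    elif processors == 6 and memory_mb > 8191:
        return "Gold"
    elif processors == 8 and memory_mb > 16383:
        return "Gold_Platinum"
    return "tbd"  # any configuration the rules don't cover

print(vm_service_level(2, 1024))   # Bronze
print(vm_service_level(4, 16384))  # Silver_Platinum
```

Note that, as in the Cognos expression, a 4-CPU VM with exactly 8193 MB (or odd CPU counts) falls through to "tbd"; adjust the boundaries if that matters for your pricing.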
Business Insight Advanced will validate the conditional expression (nice to know if you got it right), create the column called VM Service Level, and populate it based on the query. (If you get an error, your conditional expression probably has a syntax or other error.)
You will see a new column added called VM Service Level, with the various service levels for each VM based on the number of CPUs and the memory each has. (At this point there may be duplicates in the list, but not to worry; we are not finished formatting or grouping the report.)
Next, let's add a column that calculates the cost per VM based on the service levels we just established.
Select the Toolbox tab at the lower right corner and double-click the Query Calculation icon.
In the Create Calculation dialog box, name the column Cost Per VM, select Other Expression, and click OK.
In the Data Item Expression dialog box, paste the conditional expression for Cost of VM (below) into the Expression Definition box and select OK.
Example of the conditional expression for Cost per VM:
IF ([VM Service Level] = 'Bronze') THEN (10)
ELSE (IF ([VM Service Level] = 'Bronze_Platinum') THEN (15)
ELSE (IF ([VM Service Level] = 'Silver') THEN (20)
ELSE (IF ([VM Service Level] = 'Silver_Platinum') THEN (25)
ELSE (IF ([VM Service Level] = 'Gold') THEN (40)
ELSE (IF ([VM Service Level] = 'Gold_Platinum') THEN (55)
ELSE (30))))))
You will see a new column added called Cost Per VM, with variable costs for each VM based on the service level.
Next, format the data in the Cost Per VM column to USD currency as you did above.
ADDING FIXED OVERHEAD COSTS TO YOUR CHARGEBACK REPORT
Let's say the customer has determined that the total cost for overhead (including items like heat/AC, floor space, power, rent, operations personnel, helpdesk, etc.) is $24 per VM. Let's create a column called Cost of Overhead and apply this fixed cost. (Note: you can do this for any fixed cost rather than using SQL as well.)
Select the Toolbox tab at the lower right corner and double-click the Query Calculation icon as above.
In the Create Calculation dialog box, name the column Cost of Overhead, select Other Expression, and click OK.
In the Data Item Expression dialog box, enter a cost of 24 in the Expression Definition box and select OK.
You will see a new column added called Cost of Overhead, with 24 for each VM. (Note: at this point there may be duplicates in the list, but not to worry; we are not finished formatting or grouping the report.)
Next, format the data in the Cost of Overhead column to USD currency as you did above. Then drag the column header and drop it to the right of the Storage Cost column, as shown below.
Subtotaling, naming, and saving the report
Now that we have a cost per VM, overhead, and the cost of storage usage by tenant, application, and VM, let's sum the total costs and finish formatting the report by tenant and application.
Hold the control key down and select a numeric cell in each of the Cost Per VM, Storage Cost, and Cost of Overhead columns. Right-click one of the numeric cells, select Calculate, and choose the add function for the three columns.
This will create a new column called "Cost Per VM + Storage Cost + Cost of Overhead" and calculate it for each row.
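The arithmetic the report performs on each row, and the subtotaling we are about to add, can be sketched as follows. The tenants, applications, and rates are illustrative only, not values from the data warehouse:

```python
from collections import defaultdict

# Hypothetical report rows: one per VM, with the three cost columns built above.
rows = [
    {"tenant": "Acme", "app": "ERP",  "cost_per_vm": 20, "storage_cost": 150.0, "overhead": 24},
    {"tenant": "Acme", "app": "ERP",  "cost_per_vm": 40, "storage_cost": 300.0, "overhead": 24},
    {"tenant": "Acme", "app": "Mail", "cost_per_vm": 10, "storage_cost": 75.0,  "overhead": 24},
]

# Total Cost of Services per row, subtotaled by (tenant, application),
# mirroring the grouping the report will apply.
subtotals = defaultdict(float)
for r in rows:
    total = r["cost_per_vm"] + r["storage_cost"] + r["overhead"]
    subtotals[(r["tenant"], r["app"])] += total

print(dict(subtotals))  # {('Acme', 'ERP'): 558.0, ('Acme', 'Mail'): 109.0}
```

This is exactly the add function applied row by row; the grouping step later in the lab produces the same subtotals visually.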
Now format the column for USD currency and retitle the column to "Total Cost of Services".
Name the report "Total Storage, VM, and Overhead Cost by Tenant and Application Chargeback (Showback)" by double-clicking the title area.
Now save it to the Customer Report folder using the same name.
FORMATTING AND GROUPING THE REPORT BY APPLICATION AND TENANT
We are not done yet. Now we need to format the report by grouping, subtotaling, and totaling by tenant and application.
Hold the control key down and select the Cost Per VM, Provisioned Capacity, Storage Cost, Cost of Overhead, and Total Cost of Services columns.
Select the Total icon from the Summary dropdown.
If you page down to the bottom of the report, you will see total columns. We'll clean up the summary rows in a minute.
Let's group the report by Tenant and Application:
Hold the control key down and select the Tenant and Application columns.
Select the Grouping icon from the top toolbar.
CLEANING UP THE REPORT AND RUNNING IT
To clean up the report, right-click and delete the summary ROWS (not columns).
Then go to the bottom of the report, hold the control key and select both summary rows, then right-click and delete them. (Leave the TOTAL rows.)
Save the report.
Now let's run the report to see how it looks.
Select the Run icon from the toolbar and run the report as HTML. (Note the other formats you can run it in if you want…)
The report will look like this in its final format. I've paged down in the report below to show you subtotals; you can page to the bottom and see the totals by company and the total of all resources charged.
These reports are extremely flexible to do what you need. Notice the drill-down link in the Tenant column (pictured above in the red circle). If you click on the LINK, you will drill down from Tenant to Line of Business, then to Business Unit, etc. If you right-click on the link, you can drill up as well.
You can now schedule this report to run and distribute in various formats, like any other OnCommand Insight Data Warehouse report.
Remember, now that you have created this report, every time you run it, it will provide the latest usage information. You can automate this report by scheduling it to run and email it to recipients, etc. Lots of flexibility…
7.2 OTHER OPTIONS FOR AD-HOC REPORTS USING QUERY STUDIO
You can also create simple ad-hoc reports by using Query Studio. A very simple example is shown here.
Log onto the data warehouse using Admin/admin123 (you must be logged on as Admin to use Query Studio).
From Public Folders, select the Chargeback Datamart.
Select the Launch menu in the upper right corner of the view and select Query Studio.
The Datamart is split into a "simple Datamart" and an "advanced Datamart". The simple DM contains the elements that most users use for reports. The advanced DM contains all the facts and dimensions for all the elements. At this point, we'll create this report using the simple DM to show you how easy it is.
Expand the Simple DM and do the following:
Click and drag Business Unit to the palette.
Click and drag the Application element to the palette. You see the applications line up with their proper business units.
Click and drag Tier over to the palette to organize the storage usage by tier.
Click and drag "Provisioned raw by GB". (You can select megabytes or terabytes as well as gigabytes; I've selected GB because this is from a volume and application perspective.)
To calculate cost, we need to add the "Tier cost" to the report.
Click and drag the "Tier cost" element over and place it between the Provisioned Raw and the Tier column.
To filter out any storage without a tier cost associated, right-click the heading of the Tier Cost column and select Filter (see below for reference):
o Select "Show only the following" (default)
o Select "Missing values" to expand it
o Select "Leave out missing values"
o Select OK
See the results below.
Now let's calculate the total cost of usage by GB per application.
Hold the control key and highlight the "Provisioned Capacity" and Tier Cost columns until they show yellow.
Select the green Calculation icon from the edit icons at the top, or right-click on the columns and select "Calculate".
In the calculation window, select multiplication, title the new column "Cost for Storage", and click Insert. It creates the new column and completes the calculation.
To format the column, right-click on the new column and select Format Data.
Select currency, the number of decimal places (usually 0), and the 1000s separator, then click OK. See how the column is formatted now.
Double-click the title on the report and re-title it "Chargeback by Application and BU".
Now, you don't really need the Tier Cost column, so you can delete it by right-clicking the column and selecting Delete.
This is a good raw report, but now let's make it more useful.
To group storage cost by business unit and application:
Select the Business Unit column (it turns yellow) and select the Group By icon on the top line.
You see the report reformat itself into cost by application within business unit.
Click the "Save As" icon and save the report to the public folders.
Further editing
You can go back and further edit the report like this:
Let's filter out all the N/A values in the BU and Application columns. You have to do this one column at a time.
Right-click the BU column and select Filter.
In the filter dialog window, select "Do not show the following (NOT)" from the "Condition" dropdown.
Select N/A and click OK.
Do the same for the Application column.
Then save the report again.
As you see, you now have a better-quality report.
To exit Query Studio, click the "Return" icon at the top right corner of the screen.
8 SCHEDULING REPORTS FOR DISTRIBUTION
OK, now that the report is saved, let's schedule it for running and distribution. You can schedule all the built-in reports in OnCommand Insight the same way.
Go to the chargeback report we just created (you should be looking at where you saved it…).
Select the schedule icon on the right-hand side, where you can set the properties.
As you see on the right, you can schedule the start and finish dates.
You can send this report one time only by clicking Disable.
Set the schedule options for weekly, daily, monthly, etc. Schedule this report to run and send it to yourself at 3 p.m. every Tuesday until Feb 1, 2012. As you can see, you can schedule biweekly, several times a week, or several times a day, or you can also set it up by month, by year, and even by trigger. As you see, lots of options.
There are a lot of options for report format. The default format is HTML, but we can override that default by clicking and choosing from PDF, Excel, XML, CSV, etc.
For delivery, we can email it, save it, or print the report to a specific printer. You can send the report via e-mail to users, distribution lists, etc. We can include a link to the report or attach it directly to the email as well. NOTE: Recipients must be able to log into the OnCommand DWH to access the link.
When you are done, click OK, and the schedule is set.
9 ENDING COMMENTS AND FEEDBACK
I hope this lab was of value to you. Your feedback is important to the quality of this lab document. Please provide feedback to Dave Collins at davec@netapp.com.
NetApp provides no representations or warranties regarding the accuracy, reliability, or serviceability of any information or recommendations provided in this publication, or with respect to any results that may be obtained by the use of the information or observance of any recommendations provided herein. The information in this document is distributed AS IS, and the use of this information or the implementation of any recommendations or techniques herein is a customer's responsibility and depends on the customer's ability to evaluate and integrate them into the customer's operational environment. This document and the information contained herein may be used solely in connection with the NetApp products discussed in this document.
© 2012 NetApp, Inc. All rights reserved. No portions of this document may be reproduced without prior written consent of NetApp, Inc. Specifications are subject to change without notice. NetApp, the NetApp logo, Go further, faster, xxx, and xxx are trademarks or registered trademarks of NetApp, Inc. in the United States and/or other countries. All other brands or products are trademarks or registered trademarks of their respective holders and should be treated as such. TR-XXX-XX
Deduce that VM-60 is probably not sized right for the application that is driving it so hard. This is probably what's causing the disk latency issue, so the chances of a disk issue are slim.
4.8 VM PERFORMANCE
The VM Performance view helps you troubleshoot the same scenarios. Here you can understand what's going on in the whole environment.
Select Virtual Machine Performance
Sort the Top Disk Latency column in descending order so the largest latency rises to the top. In this case, VM-61 is chewing up a lot of memory and a lot of CPU time but using low disk IOPS. The VM appears to be causing the latency issues.
Select VM-61. You can open a microview and see the VMDK performance as well.
Add a chart microview.
You can also break it out by volume performance and datastore performance, giving you a more holistic picture of the environment and helping you troubleshoot to resolution.
The takeaway is you can troubleshoot performance issues from many different angles and go in many different directions to quickly narrow down the problem
4.9 APPLICATION AND HOST PERFORMANCE
You can add your applications and hosts to any of these performance views to help you understand how your performance is affecting your applications. That is important to the business customer. You can drill down and understand where the performance issue is, with visibility from the application all the way to the disks.
Scroll down to ESX1
Use the horizontal slide bars in the main and microviews to see performance info
OnCommand Insight shows you performance from the host perspective all the way back to the storage, but remember, it does not have agents on the hosts, so it cannot show you the details of the performance on the host itself.
Review questions:
What is the value of Analyze Performance?
In what areas can we view performance metrics under Analyze Performance?
5 PLANNING TOOLS
5.1 TASK AND ACTION PLANNING AND VALIDATION
OnCommand Insight has two planning tools to help you plan, validate, and monitor changes in your environment. One is a change management tool, and the other is a migration tool for switches only.
The change management tool (or what-if tool) helps you create tasks, and actions within those tasks, using a wizard. It helps you logically configure the changes that you need to make, test and validate those changes before you make them, and monitor the progress of the changes as you make them. This significantly reduces your risk when making changes, because you can pretest them before making any actual changes in your environment.
NOTE: Remember, OnCommand Insight is a READ-ONLY tool, so it does not perform any active tasks. Use it in the planning, validation, and execution monitoring of your change management.
Select the Planning menu.
Select Plans to access the tool.
Select the task ID oadmin 01082007 – Replace HBA Clearcase1.
Notice the Actions list for the task. These are generated by you to help you logically and accurately list out the tasks.
To add more actions, simply right-click in the action area and select "Add Action".
In the new action window, scroll down and select the action you want to perform. You can add a description and other parameters, then select OK.
Then you can pre-validate the actions to ensure you know the results of each action BEFORE you actually perform the task. To do this, right-click the task and select Validate Task.
As you see below, OnCommand Insight validates each action against the current configuration in your environment, showing what has been completed correctly (GREEN CHECKMARK), what is not completed (BLANK BOX), and what was not completed correctly (RED X).
When you build the action list, OnCommand Insight automatically compares your planned changes to your existing environment and anticipates any future violations that could occur if you made these changes without correcting the planned actions, OR violations that already exist in your environment.
Once you have completed creating your list of action items, you can right-click and validate the actions as many times as you want until they are completed. OnCommand Insight validates every one of these actions. It will show you whether the actions are complete, done wrong, or not completed at all. It gives you a preview of potential issues before you make the changes, thus lowering your risk.
5.2 SWITCH MIGRATION TOOL
The migration tool provides you with instantaneous visibility into all of the environment and business entities that will be affected by a migration to new or updated switches. Say you want to just update the firmware on a switch. What if… it goes down in the middle of the upgrade? What does it affect in your environment? Knowing this ahead of time can reduce your risk by giving you the complete picture of who and what will be affected by the interruption.
The migration tool allows you to tell OnCommand Insight which switches you want to upgrade or replace. Because OnCommand Insight knows all the hosts, storage arrays, volumes, business units, and applications that are affected by this change, it can provide you with the current violations as well as the future violations that will occur when the switches are pulled out. This enables you to validate the total impact of the changes you want to make BEFORE you make them, so you can reduce your risk by fixing issues before they occur.
NOTE: Remember, OnCommand Insight is a READ-ONLY tool, so it does not perform any migration tasks. Use it in the planning and execution monitoring of your migration.
Under the Planning menu, select Migrations. This shows you the migration tasks already created and the impact of the existing proposed changes on your business entities.
To add a new task, right-click in the task area and select Add Task.
Complete the task details above and click Next to select the switch(es) to migrate.
Select the switches to be updated or replaced and click Finish.
Select the new task in the main screen and use the microviews to see the affected paths, impact, and quality assurance views.
Using this information, you can speed up switch migrations, because it cuts the due-diligence time and lowers your risk, since you know the impacts before you take any actions.
6 DATA WAREHOUSE
6.1 INTRODUCTION AND OVERVIEW
Let's introduce you to the data warehouse. We'll talk about the Datamarts and navigation, then we'll go into the reports, and we'll finish by showing you how to create ad-hoc reports using Query Studio.
The data warehouse is made up of several Datamarts. Datamarts are sets of data that relate to each other.
Open a browser and go to http://localhost:8080/reporting
Log on using admin/admin123.
If you receive this page, uncheck the "show this page…" box and select My Home.
Data Warehouse (DWH) Home Page: Public Folders
The data warehouse has several built-in Datamarts. Above you see the three primary Datamarts, called the Chargeback Datamart, Inventory Datamart, and Storage Efficiency Datamart. Additionally, we have two folders which contain other Datamarts, for Capacity and Performance.
Select the Capacity 6.3 folder.
As you can see, there are other capacity-related Datamarts here, including the Internal Volume, Volume, Storage and Storage Pool, and VM Capacity Datamarts. They provide you with easy-to-use data elements related to those specific categories, making it easier for you to use the existing reports and, more importantly, helping you create your own custom reports using the drag-and-drop technology we'll show later in this lab.
Select the Storage Capacity Datamart.
There are four folders located within EVERY Datamart. Most built-in reports are in the Reports folder. Any custom reports you create MUST be saved in the Customer Report or Customer Dashboard folders so they are preserved during upgrades.
Select Dashboards (notice the BREADCRUMBS to help you navigate).
Which dashboards are located in the folder?
6.2 PLAN - CAPACITY FORECAST DASHBOARD
The data warehouse has over 200 built-in dashboards and reports. Let's take a look at a few.
The Capacity Forecast Dashboard provides a history of how storage has been used, as well as trends and forecasts into the future. It shows this by datacenter and by tier.
Select the Capacity Dashboard. This may take a bit of time to paint, so be patient.
The Capacity Forecast Dashboard provides you with trending and forecasting of your capacity across your entire environment. NOTE: your data may vary from the picture, depending on the demo DB you are using and the date (because it's a trending chart).
While we are at it, let's also stage the Tiering Dashboard in a new window, by holding the shift key and selecting the Tiering Dashboard, so we can discuss it as well in a few minutes.
When it first opens, you see in the upper left the Capacity Consumption Forecast report, by datacenter and tiers. The initial view shows how much storage is left in each datacenter, by tier, before it reaches 80% of capacity (the threshold is adjustable by the user). The graph on the right depicts the usage trending and forecasting over time. The "Reset Selection" button resets the graphic to show storage trending across the entire enterprise.
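Under the hood, this kind of forecast is just a trend line projected to a threshold. Here is a minimal sketch, assuming a simple least-squares linear trend over monthly samples (the dashboard's actual forecasting model is not documented in this lab):

```python
# Hedged sketch: fit a straight line to monthly used-capacity samples and
# project how many months remain until usage crosses a threshold (default 80%).
def months_until_threshold(used_tb, total_tb, threshold=0.80):
    n = len(used_tb)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(used_tb) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, used_tb)) / \
            sum((x - mean_x) ** 2 for x in xs)
    if slope <= 0:
        return None  # flat or shrinking usage: threshold never reached on this trend
    intercept = mean_y - slope * mean_x
    target = threshold * total_tb
    # Months past the most recent sample until the trend line hits the target.
    return (target - intercept) / slope - (n - 1)

# A 100 TB datacenter growing ~5 TB/month from 50 TB used:
print(months_until_threshold([50, 55, 60, 65], 100))  # 3.0
```

This is the essence of what the matrix blocks report: how much runway each datacenter/tier combination has left before it hits the configured capacity threshold.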
Select the Tokyo/Gold-Fast block on the matrix. Notice that the graph at the right changes to reflect the storage consumption trending and forecasting for that tier at that datacenter.
Reset the Storage Capacity Trend chart by clicking Reset Selection next to the matrix chart. The chart on the right will then show the trending and forecasting for the entire enterprise.
Scroll down the dashboard to view the list of reports on the right side. Each of the dashboards has a list of related reports on the lower right-hand side. You can select from any number of different reports to provide the detailed information that you need.
The dashboard also contains some dial graphics showing you storage consumption and capacity in your enterprise and in each datacenter.
Continuing down the left side of the dashboard, these charts show you business-level storage consumption by business entities. Here we can drill down to see usage by tenant, line of business, business unit, and project.
40 Insert Technical Report Title Here
Right click in this graphic and you can drill down to view storage usage by line of
business drill again to business unit and by project
As you can see you get really detailed information on consumption by your business entities from Tenant LOB Business Unit Project and Application in a very quick form
6.3 TIER DASHBOARD
Let's take a look at the Tier Dashboard that we opened up a few minutes ago by selecting it from the tabs at the bottom of your Windows screen.
Note: Your data may vary depending on the database used for this demo.
This dashboard gives us a different perspective on how storage is growing and how it is being used. As you see, it looks like the Gold tier has remained relatively stable over the past few months, while Gold-Fast storage, which is more expensive, has grown considerably over the past couple of months. This tells you how your tiering initiatives are progressing. Bronze, which has hardly grown at all, could be an indication that we're spending too much money on storage. You might want to review your storage usage using OnCommand Insight to see how the storage is being consumed and by whom.
Scroll down. Let's look a little closer. OnCommand Insight shows storage usage by business unit, application, and tier. This enables you to understand how storage is being used. You can also view it by data center, tier, and business entity.
As we did in the last report, you can right-click and drill down to look at consumption by tenant, line of business, business unit, project, and application. You can understand how your data is being consumed at multiple levels and from multiple aspects.
Select the "Return" icon at the top right of the Tier Dashboard to return to the folder.
There is a new Storage Tier report located in the Storage and Storage Pool Data Mart. Let's take a quick look at it.
Use the breadcrumbs to navigate back to the Capacity 6.3 folder.
Then select the Storage and Storage Pool Capacity Datamart and its Reports folder.
Next, select the Storage Capacity By Tier report to view the report below. This report shows your capacity by tier and how it trends over time. It also provides detailed and summary sections at the bottom showing each array, its tiers, how much capacity is used, and the percentage (lots of information on a single report).
6.4 ACCOUNTABILITY AND COST AWARENESS
The standard data warehouse chargeback reports are more about accountability than actual chargeback. We'll show you this now. We'll also show you how to create your own powerful "custom" chargeback/showback reports using Business Insight Advanced later in this lab.
Select Public Folders in the breadcrumbs at the top left of the Data Warehouse window.
Select the Chargeback Datamart.
In the Chargeback Datamart, select the Reports folder to access various reports that show capacity and accountability.
Select Capacity Accountability by Business Entity and Service Level Detail. Here you have the option to customize this report to your needs by selecting service levels, resource types, applications, and host and storage names. You also have the option of selecting the business entity by using the dropdown to select any or all of the business entities and projects.
Select all in each category to give you a good representation of the in-depth reporting. Then click Finish.
The report provides a very detailed view of capacity utilization: the business entity, the application, the host it's running on, the storage array, the volume, and the actual provisioned and used storage. The report is grouped by business unit as well as application, which gives you a good representation of who's using what storage.
Note the scroll bar for scrolling on page 1; you can also use the Page Up/Page Down links at the bottom to go to page 2, etc.
Select the Return icon in the upper right to return to the folder of reports.
6.5 UNCHARGED STORAGE
You can also generate reports that help you understand what storage is NOT being accounted for.
Select "Capacity Accountability by Uncharged Capacity per Internal Volume". This provides you with a complete listing, by array and volume, of how much storage is not being charged or accounted for.
You get FULL accountability of which storage is being accounted for and which storage is NOT, across the entire enterprise, regardless of storage vendor.
6.6 IOPS VS. CAPACITY REPORTING IN THE DATA WAREHOUSE
Let's look at performance versus capacity and orphaned storage by last access. This adds another dimension to how your storage is being used.
Open the Performance Datamart (hint: use the breadcrumbs to select Public Folders and then select the Performance Datamart).
Select the Internal Volume Daily Performance folder. This provides a really good pictorial view of how your storage is being used.
Select Reports and select Allocated used internal volume Count by IOPS Ranges. This provides a capacity-versus-IOPS report, which is very interesting.
Select the Last Year time period.
Select all storage models and tiers and click Finish.
Selecting all arrays and all tiers gives you a full view of how your storage is being used (or not being used…).
Looking at the results, remember that this is storage accessed over the past year. The resulting report shows you all the storage that has (or has not) been accessed over the past year.
As you see from the first bar, there are over 7,300 volumes that have not been accessed in the past year. If we look at it in terms of size, over 3.4 PB has had zero accesses in the past year. Note: this is actual customer data, but the names have been sanitized.
You can see how impactful this is. There is over 3.4 PB of storage that has had zero use for a year. This information enables you to start making business decisions on the storage and to better understand how it's being used, so you can reclaim and re-purpose some of that storage. (Talk about ROI!)
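The report above is essentially a bucketing of volumes by their annual IOPS. A minimal sketch of the same idea, with hypothetical volumes and assumed bucket edges (the real report's ranges may differ):

```python
# Hypothetical per-volume records: (volume, capacity_gb, total_iops_last_year)
volumes = [
    ("vol_a", 500, 0), ("vol_b", 250, 0), ("vol_c", 800, 12_000),
    ("vol_d", 1200, 350_000), ("vol_e", 300, 0), ("vol_f", 650, 90),
]

# Assumed bucket edges; the report's actual IOPS ranges may differ
ranges = [(0, 0, "0 (never accessed)"), (1, 100, "1-100"),
          (101, 100_000, "101-100K"), (100_001, float("inf"), ">100K")]

summary = {label: {"count": 0, "gb": 0} for _, _, label in ranges}
for _, gb, iops in volumes:
    for lo, hi, label in ranges:
        if lo <= iops <= hi:
            summary[label]["count"] += 1
            summary[label]["gb"] += gb
            break

for label, s in summary.items():
    print(f"{label:>18}: {s['count']} volumes, {s['gb']} GB")
```

The "never accessed" bucket is the one that drives reclamation decisions: every GB in it is provisioned capacity with zero I/O for the whole reporting period.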
6.7 DIGGING INTO THE DETAILS
These charts are really nice, but you need the details to effectively work on identification and recovery. OK, let's go look at the underlying details.
Go back to the Volume Daily Performance 6.3 folder and drill down to Reports (hint: it's in the Performance Datamart).
Select the Array Performance report. This gives you a complete breakdown of the performance for all storage, from the arrays all the way down to the volumes.
Select one year and set the IOPS parameter you want to filter on (I usually start at the default).
This report starts with the Orphan Summary.
Select Page Down to view the storage array summary.
As you see, this is pretty high-level. It shows the total amount of raw and allocated capacity in each storage device versus the total IOPS and the max IOPS actually used over the past year. This tells a very compelling story, but it's still high level.
Page down a few pages to reach the bottom of this section. You will see a glossary of terms explaining the column headings.
Now continue to page down to the Host tables. These show you the hostname, the raw and allocated capacity by host, and the IOPS accessed over the past year. This is more detail than the Storage tables above.
Page down past the host tables and you get the orphaned-volumes perspective. Here is a great deal of detail that you can use. These are all the volumes that have not been accessed in a full year. It shows you the array name, the volume capacities, and the hostname, as well as the applications and tiers, for everything that has not been accessed in the last year.
Page down to the "Volume by IOPS" tables (this may be several pages down). These show you the storage array, volume, capacity, host, application, tier, and the max and total amount of IOPS. So we can say it's a pretty well-rounded report that shows you actual usage (or lack thereof) so you can go reclaim the storage that is not used.
6.8 VM CAPACITY REPORTING
There are several different reports in the VM Capacity Datamart.
Navigate to the VM Capacity 6.3 Datamart.
As you see, we have several reports built in here already.
Select VM Capacity 6.3 and then navigate into the Reports folder.
Select VM Capacity Summary.
Select all so we see the VM capacity across the entire enterprise (spanning multiple vCenters).
The results show all the VMs, their capacity, the data store, the actual capacity, the VM names, the provisioned storage, and the commit ratio of each VM across your entire environment. NOTE: I paged down to the bottom so you can see the total storage and commitment across your whole enterprise, plus a glossary of terms.
Select the "return" button in the upper right corner of the report (it looks like a left-turn arrow).
Next, select the Inactive VMs report to show you VMs that have not been accessed in a defined period of time (default: 60 days).
Set this time threshold and click Finish.
This is an excellent report showing you which VMs are powered off, how long they have been powered off, and how much capacity each one is holding that nobody else can use. It gives you all the details, including the datacenter, VM OS, ESX host, cluster, and VMDK, and how long it's been powered off. Armed with this information, you can go recover these VMs and reclaim the storage.
7 CREATE AD-HOC REPORT
Let's show you how easy it is to create custom reports in the data warehouse.
7.1 HOW TO CREATE A CUSTOM SHOWBACK/CHARGEBACK REPORT USING BUSINESS INSIGHT ADVANCED
Below is a great example of the custom chargeback or showback report that you will create. It shows usage by business entity and application, including the variable cost of each VM based on its configuration, fixed overhead, and storage usage.
STEPS TO CREATE THIS REPORT
Watch the video on how to create this report. Note: You need a user name and password for this community. To obtain them, click the Become a Member link.
The OnCommand Insight Reporting Portal is accessed through http://<reporting-server>:8080/reporting.
Enter your user name and password credentials.
From the Welcome page, select My home.
From the Launch menu (at the top right corner of the OnCommand Insight Reporting portal), select Business Insight Advanced.
From the list of all packages that appears, click the Capacity <version> folder and then click VM Capacity <version>.
Create a new report by selecting New from the dropdown in the upper left corner, or Create New if you are on the Business Insight Advanced landing page.
From the pre-defined report layouts in the New pop-up, choose List and click OK.
In the lower right pane, select the Source tab and expand Advanced Data Mart from the VM Capacity package.
From the Advanced Data Mart, expand Business Entity Hierarchy and Business Entity, then drag Tenant and place it on the report work area.
Collapse Advanced Data Mart and expand Simple Data Mart.
From Simple Data Mart, drag Application and place it on the report work area to the right of the Tenant column. (TIP: Make sure you place it on the blinking gray BAR to the right of the previous column, or it will give you an error.)
Now we are going to drag multiple columns to the palette to save time building the report.
We will be reporting on the total number of processors (cores) and the memory that is configured for each VM. So let's grab the following elements from the VM Dimension under the Advanced Data Mart.
From Advanced Data Mart, expand VM Dimension.
Select the next columns IN THE FOLLOWING ORDER.
From Advanced Data Mart > VM Dimension, hold the Control key and select the following columns (in order):
o VM Name
o Processors
o Memory
Click and drag VM Name and place it on the report work area to the right of the Application column. NOTE: All the columns should follow in the order you selected them, similar to the screenshot below (your data will differ, but the columns will be the same).
Now let's bring capacity information onto the report.
From Simple Data Mart, hold the Control key and select the following columns (in order):
o Tier
o Tier Cost
o Provisioned Capacity (GB)
Click and drag the Tier column and place it on the report work area to the right of the Application column. NOTE: All the columns should follow in the order you selected them, similar to the screenshot below (your data will differ, but the columns will be the same).
To create a summary of cost per GB, hold the Control key and select the Tier Cost and Provisioned Capacity (GB) columns.
Then right-click the Provisioned Capacity column, select Calculate, and select the multiplication calculation.
Business Insight Advanced has created a new column for you, completed the calculations, and put it in the report.
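The calculation the tool performs here is simply Tier Cost × Provisioned Capacity (GB) per row. A minimal sketch with hypothetical rows, including the exclude-null behavior this section applies to uncharged tiers:

```python
# Hypothetical report rows:
# (tenant, application, vm, tier_cost_per_gb, provisioned_gb)
rows = [
    ("TenantA", "ERP",  "vm01", 0.50, 200),
    ("TenantA", "ERP",  "vm02", 0.25, 400),
    ("TenantB", "Mail", "vm03", None, 150),  # no tier cost: uncharged storage
]

# Storage Cost = Tier Cost x Provisioned Capacity (GB);
# rows without a tier cost are excluded (the "Exclude Null" filter)
charged = [(t, a, v, cost * gb)
           for t, a, v, cost, gb in rows if cost is not None]
for t, a, v, storage_cost in charged:
    print(f"{t}/{a}/{v}: ${storage_cost:,.2f}")
```

The tenant names, applications, and per-GB costs here are made up for illustration; in the report they come from the Simple Data Mart columns you just dragged in.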
Next, let's format and re-title the column.
Right-click on the new column header and select Show Properties.
In the lower right corner, scroll down to the bottom of the Properties box and select the ellipsis on the Data Item Name box. Change the name to Storage Cost and click OK.
Note that the column heading is now Storage Cost.
Now select one of the numeric values in that column and select the Data Format ellipsis from the Properties box in the lower right corner.
From the Data Format dialog box, select Currency from the Format Type dropdown.
As you see from the Properties dialog box, there are lots of options you can set to format the currency numbers in this column. The default is USD, so let's just click OK to accept the default. You will see the column reformat to USD.
Here is our current report. Let's filter out storage that is NOT being charged.
Select any BLANK cell in the Tier Cost column and click the Filter icon in the top toolbar.
Select Exclude Null.
Here is our current report. Notice that all the rows that had NO cost associated with their tiers are removed, leaving you with only the storage that has charges associated with it. (TIP: in another report you can actually reverse the logic and show only the storage that is NOT being charged as well…)
You can also format the Tier Cost column with USD currency if you want.
OK, that was easy, but it's not complete. Let's add other cost factors into your chargeback report: the cost of VM service levels based on configuration, and the fixed overhead costs used by each application.
ADDING VARIABLE COSTS PER VM TO YOUR CHARGEBACK REPORT
Let's say the customer wants to charge per VM based on the number of CPUs and the amount of memory it's configured with. To do that, we first need to create a VM service level comprised of the number of CPUs and the memory configured for each VM, then allocate a cost per service level.
To create a VM service level, we are going to drop in a small conditional expression to build the service levels per VM. This is an easy example of the flexibility of Business Insight Advanced in creating reports. (DON'T panic: you can skip the conditional expression and just put a fixed cost on each VM if you want. See the overhead example later on… but humor me here in this lab.)
Select the Tier column to mark where we want to insert the new columns.
Select the Toolbox tab in the lower right corner and double-click the Query Calculation icon.
In the Create Calculation dialog box, name the column VM Service Level, select Other Expression, and click OK.
In the Data Item Expression dialog box, copy and paste the following VM Service Level conditional expression into the Expression Definition box and select OK. (Note: if you are remoted into the OnCommand Insight server, you may have to create a text document on the OnCommand Insight server desktop to paste this into before pasting it into the Expression box.)
Below is an example of the conditional expression that gives you the if-else conditions for VM Service Level:
IF ([Processors] = 2 AND [Memory] < 2049)
THEN ('Bronze')
ELSE (IF ([Processors] = 2 AND [Memory] < 4097)
THEN ('Bronze_Platinum')
ELSE IF ([Processors] = 4 AND [Memory] < 8193)
THEN ('Silver')
ELSE IF ([Processors] = 4 AND [Memory] > 8193)
THEN ('Silver_Platinum')
ELSE IF ([Processors] = 6 AND [Memory] > 8191)
THEN ('Gold')
ELSE IF ([Processors] = 8 AND [Memory] > 16383)
THEN ('Gold_Platinum')
ELSE ('tbd'))
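For reference, here is the same classification logic expressed in Python, with the thresholds copied from the conditional expression above (memory values are in MB; this is just a readable restatement, not part of the lab steps):

```python
def vm_service_level(processors: int, memory_mb: int) -> str:
    """Mirror of the lab's VM Service Level conditional expression."""
    if processors == 2 and memory_mb < 2049:
        return "Bronze"
    if processors == 2 and memory_mb < 4097:
        return "Bronze_Platinum"
    if processors == 4 and memory_mb < 8193:
        return "Silver"
    if processors == 4 and memory_mb > 8193:
        return "Silver_Platinum"
    if processors == 6 and memory_mb > 8191:
        return "Gold"
    if processors == 8 and memory_mb > 16383:
        return "Gold_Platinum"
    return "tbd"          # catch-all, exactly like the ELSE branch

print(vm_service_level(4, 4096))   # Silver
```

Note that the expression classifies any configuration it does not match (for example, a 4-CPU VM with exactly 8193 MB) as "tbd", so you can spot configurations your service-level scheme has not covered.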
Business Insight Advanced will validate the conditional expression (nice to know if you got it right), create the column called VM Service Level, and populate it based on the query. (If you get an error, your conditional expression probably has a syntax or other error.)
You will see a new column added called VM Service Level, with the various service levels for each VM based on the number of CPUs and the memory each has. (At this point there may be duplicates in the list, but not to worry; we are not finished formatting or grouping the report.)
Next, let's add a column that calculates the cost per VM based on the service levels we just established.
Select the Toolbox tab in the lower right corner and double-click the Query Calculation icon.
In the Create Calculation dialog box, name the column Cost Per VM, select Other Expression, and click OK.
In the Data Item Expression dialog box, cut and paste the conditional expression for Cost per VM (below) into the Expression Definition box and select OK.
Example of the conditional expression for Cost per VM:
IF ([VM Service Level] = 'Bronze') THEN (10)
ELSE (IF ([VM Service Level] = 'Bronze_Platinum') THEN (15)
ELSE IF ([VM Service Level] = 'Silver') THEN (20)
ELSE IF ([VM Service Level] = 'Silver_Platinum') THEN (25)
ELSE IF ([VM Service Level] = 'Gold') THEN (40)
ELSE IF ([VM Service Level] = 'Gold_Platinum') THEN (55)
ELSE (30))
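This second expression is simply a lookup from service level to a cost, with 30 as the catch-all default. The same logic as a sketch:

```python
# Costs copied from the lab's Cost per VM expression;
# 30 is the default for anything unmatched ("tbd" service levels)
COST_PER_VM = {"Bronze": 10, "Bronze_Platinum": 15, "Silver": 20,
               "Silver_Platinum": 25, "Gold": 40, "Gold_Platinum": 55}

def cost_per_vm(service_level: str) -> int:
    return COST_PER_VM.get(service_level, 30)

print(cost_per_vm("Gold"))   # 40
print(cost_per_vm("tbd"))    # 30 (default)
```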
You will see a new column added called Cost Per VM, with variable costs for each VM based on its service level.
Next, format the data in the Cost Per VM column to USD currency as you did above.
ADDING FIXED OVERHEAD COSTS TO YOUR CHARGEBACK REPORT
Let's say the customer has determined that the total cost of overhead (including items like heat/AC, floor space, power, rent, operations personnel, helpdesk, etc.) is $24 per VM. Let's create a column called Cost of Overhead and apply this fixed cost. (Note: you can do this for any fixed costs rather than use SQL as well.)
Select the Toolbox tab in the lower right corner and double-click the Query Calculation icon as above.
In the Create Calculation dialog box, name the column Cost of Overhead, select Other Expression, and click OK.
In the Data Item Expression dialog box, enter a cost of 24 in the Expression Definition box and select OK.
You will see a new column added called Cost of Overhead, with 24 for each VM. (Note: at this point there may be duplicates in the list, but not to worry; we are not finished formatting or grouping the report.)
Next, format the data in the Cost of Overhead column to USD currency as you did above. Then drag the column header and drop it to the right of the Storage Cost column as shown below.
Subtotaling, naming, and saving the report
Now that we have a cost per VM, overhead, and the cost of storage usage by tenant, application, and VM, let's sum the total costs and finish formatting the report by tenant and application.
Hold the Control key down and select a numeric cell in each of the Cost Per VM, Storage Cost, and Cost of Overhead columns. Right-click one of the numeric cells, select Calculate, and choose the add function for the three columns.
This will create a new column called "Cost Per VM + Storage Cost + Cost of Overhead" and calculate each row.
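The new column is a straightforward row-wise sum of the three cost columns. With hypothetical values for one report row:

```python
# Hypothetical values for one report row
cost_per_vm = 20.0       # from the VM Service Level lookup (e.g. Silver)
storage_cost = 100.0     # Tier Cost x Provisioned Capacity (GB)
cost_of_overhead = 24.0  # fixed $24 per VM

total_cost_of_services = cost_per_vm + storage_cost + cost_of_overhead
print(f"Total Cost of Services: ${total_cost_of_services:,.2f}")  # $144.00
```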
Now format the column for USD currency and retitle it to "Total Cost of Services".
Name the report "Total Storage, VM, and Overhead Cost by Tenant and Application Chargeback (Showback)" by double-clicking the title area.
Now save it to the Customer Report folder using the same name.
FORMATTING AND GROUPING THE REPORT BY APPLICATION AND TENANT
We are not done yet. Now we need to format the report by grouping, subtotaling, and totaling by tenant and application.
Hold the Control key down and select the Cost Per VM, Provisioned Capacity, Storage Cost, Cost of Overhead, and Total Cost of Services columns.
Select the Total icon from the Summary dropdown.
If you page down to the bottom of the report, you will see the total columns. We'll clean up the summary rows in a minute.
Let's group the report by Tenant and Application.
Hold the Control key down and select the Tenant and Application columns.
Select the Grouping icon from the top toolbar.
CLEANING UP THE REPORT AND RUNNING IT
To clean up the report, right-click and delete the summary ROWS (not columns).
Then go to the bottom of the report, hold the Control key, select both summary rows, then right-click and delete them. (Leave the TOTAL rows.)
Save the report.
Now let's run the report to see how it looks.
Select the Run icon from the toolbar and run the report as HTML. (Note the other formats you can run it in if you want…)
The report will look like this in its final format. I've paged down in the report below to show you the subtotals; you can page to the bottom and see the totals by company and the total of all resources charged.
These reports are extremely flexible to do what you need. Notice the drill-down link in the Tenant column (pictured above in the red circle). If you click on the LINK, you will drill down from Tenant to Line of Business, then to Business Unit, etc. If you right-click on the link, you can drill up as well.
You can now schedule this report to run and be distributed in various formats, like any other OnCommand Insight Data Warehouse report.
Remember, now that you have created this report, every time you run it, it will provide the latest usage information. You can automate this report by scheduling it to run and emailing it to recipients, etc. Lots of flexibility…
7.2 OTHER OPTIONS FOR AD-HOC REPORTS USING QUERY STUDIO
You can also create simple ad-hoc reports by using Query Studio. A very simple example is shown here.
Log on to the data warehouse using Admin/admin123 (you must be logged on as Admin to use Query Studio).
From Public Folders, select the Chargeback Datamart.
Select the Launch menu in the upper right corner of the view and select Query Studio.
The Datamart is set up as a "simple Datamart" and an "advanced Datamart". The simple DM contains the elements that most users use for reports. The advanced DM contains all the facts and dimensions for all the elements. At this point we'll create this report using the simple DM to show you how easy it is.
Expand the Simple DM and do the following:
Click and drag Business Unit to the palette.
Click and drag the Application element to the palette. You see the applications line up with their proper business units.
Click and drag Tier over to the palette to organize the storage usage by tier.
Click and drag "Provisioned Raw (GB)". (You can select megabytes or terabytes as well as gigabytes; I've selected GB because this is from a volume perspective and an application perspective.)
To calculate cost, we need to add the "Tier Cost" to the report.
Click and drag the "Tier Cost" element over and place it between the Provisioned Raw and the Tier columns.
To filter out any storage without a tier cost associated, right-click the heading of the Tier Cost column and select Filter (see below for reference):
o Select "Show only the following" (default)
o Select "Missing values" to expand it
o Select "Leave out missing values"
o Select OK
See the results below.
Now let's calculate the total cost of usage by GB per application.
Hold the Control key and highlight the "Provisioned Capacity" and Tier Cost columns until they show yellow.
Select the green Calculation icon at the top of the edit icons, or right-click on the columns and select "Calculate".
In the calculation window, select multiplication, title the new column "Cost for Storage", and click Insert. It creates a new column and completes the calculation.
To format the column, right-click on the new column and select Format Data.
Select Currency, the number of decimal places (usually 0), and the 1000s separator, then click OK. See how the column is formatted now.
Double-click the title on the report and re-title the report "Chargeback by Application and BU".
Now you don't really need the Tier Cost column, so you can delete it by right-clicking on the column and selecting Delete.
This is a good raw report, but now let's make it more useful.
To group storage cost by business unit and application:
Select the Business Unit column (it turns yellow) and select the Group By icon on the top line.
You see the report reformats itself into cost by application by business unit.
Click the "Save As" icon and save the report to the public folders.
Further Editing
You can go back and further edit the report like this:
Let's filter out all the N/A values in the BU and Application columns. You have to do this one column at a time.
Right-click the BU column and select Filter.
In the filter dialog window, select "Do not show the following (NOT)" from the "Condition" dropdown.
Select N/A and click OK.
Do the same for the Application column.
Then save the report again.
As you see, you now have a better-quality report.
To exit Query Studio, click the "Return" icon at the top right corner of the screen.
8 SCHEDULING REPORTS FOR DISTRIBUTION
OK, now that the report is saved, let's schedule it for running and distribution. You can schedule all the built-in reports in OnCommand Insight.
Start at the chargeback report we just created (you should be looking at the folder where you saved it…).
Select the Schedule icon on the right-hand side, where you can set the properties.
As you see on the right, you can schedule a start and finish date.
You can also send this report just one time by clicking Disable.
Set the schedule options for weekly, daily, monthly, etc. Schedule this report to run and send it to yourself at 3 p.m. every Tuesday until Feb 1, 2012. As you can see, you can schedule biweekly, several times a week, several times a day, or you can also set it up by month, year, and even by trigger. As you see, lots of options.
There are a lot of options for report format. The default format is HTML, but we can override that default by clicking and choosing from PDF, Excel, XML, CSV, etc.
For delivery, we can email it, save it, or print the report to a specific printer. You can send the report via e-mail to users, distribution lists, etc. We can include a link to the report or attach it directly to the email as well. NOTE: Recipients must be able to log into the OnCommand DWH to access the link.
When you are done, click OK, and the schedule is set.
9 ENDING COMMENTS AND FEEDBACK
I hope this lab was of value to you. Your feedback is important to the quality of this lab document. Please provide feedback to Dave Collins at davec@netapp.com.
NetApp provides no representations or warranties regarding the accuracy, reliability, or serviceability of any information or recommendations provided in this publication, or with respect to any results that may be obtained by the use of the information or observance of any recommendations provided herein. The information in this document is distributed AS IS, and the use of this information or the implementation of any recommendations or techniques herein is a customer's responsibility and depends on the customer's ability to evaluate and integrate them into the customer's operational environment. This document and the information contained herein may be used solely in connection with the NetApp products discussed in this document.
© 2012 NetApp, Inc. All rights reserved. No portions of this document may be reproduced without prior written consent of NetApp, Inc. Specifications are subject to change without notice. NetApp, the NetApp logo, Go further, faster, xxx, and xxx are trademarks or registered trademarks of NetApp, Inc. in the United States and/or other countries. All other brands or products are trademarks or registered trademarks of their respective holders and should be treated as such. TR-XXX-XX
OnCommand Insight shows you performance from the host perspective all the way back to the storage, but remember, it does not have agents on the hosts, so it cannot show you the details of the performance on the host itself.
Review questions:
What is the value of Analyze Performance?
What are the areas where we can view performance metrics under Analyze Performance?
5 PLANNING TOOLS
5.1 TASK AND ACTION PLANNING AND VALIDATION
OnCommand Insight has two planning tools to help you plan, validate, and monitor changes in your environment. One is a change management tool, and the other is a migration tool for switches only.
The change management tool (or "what-if" tool) helps you create tasks and actions within those tasks using a wizard. It helps you logically configure the changes that you need to make, test and validate those changes before you make them, and monitor the progress of the changes as you make them. This significantly reduces your risk when making changes, because you can pretest them before you make any actual changes in your environment.
NOTE: Remember, OnCommand Insight is a READ-ONLY tool, so it does not perform any active tasks. Use it in the planning, validation, and execution monitoring of your change management.
Select the Planning menu.
Select Plans to access the tool.
Select the task ID oadmin 01082007 – Replace HBA Clearcase1.
Notice the Actions list for the task. These are generated by you to help you logically and accurately list out the tasks.
To add more actions, simply right-click in the action area and select "Add Action".
In the New Action window, scroll down and select the action you want to perform. You can add a description and other parameters, then select OK.
Then you can pre-validate the actions to ensure you know the results of each action BEFORE you actually perform the task. To do this, right-click the task and select Validate Task.
As you see below, OnCommand Insight validates each action against the current configuration in your environment to show what has been completed correctly (GREEN CHECKMARK), what is not completed (BLANK BOX), and what is not completed correctly (RED X).
When you build the action list, OnCommand Insight automatically compares your planned changes to your existing environment and anticipates any future violations that could occur if you made these changes without correcting the planned actions, OR violations that already exist in your environment.
Once you complete creating your list of action items, you can right-click and validate the actions as many times as you want until they are completed. OnCommand Insight validates every one of these actions. It will show you whether the actions are completed correctly, done incorrectly, or not completed at all. It gives you a preview of potential issues before you make the changes, thus lowering your risk.
5.2 SWITCH MIGRATION TOOL
The migration tool provides you with instantaneous visibility into all of the environment and business entities that will be affected by a migration to new or updated switches. Say you want to just update the firmware on a switch. What if it goes down in the middle of the upgrade? What does that affect in your environment? Knowing this ahead of time can reduce your risk by giving you the complete picture of who and what will be affected by the interruption.
The Migration tool allows you to tell OnCommand Insight which switches you want to upgrade or replace. Because OnCommand Insight knows all the hosts, storage arrays, volumes, business units, and applications that are affected by this change, it can provide you with the current violations as well as the future violations that will occur when the switches are pulled out. This enables you to validate the total impact of the changes you want to make BEFORE you make them, so you can reduce your risk by fixing issues before they occur.
NOTE: Remember, OnCommand Insight is a READ-ONLY tool, so it does not perform any migration tasks. Use it in the planning and execution monitoring of your migration.
Under the Planning menu, select Migrations. This shows you the migration tasks already created and the existing impact of proposed changes on your business entities.
To add a new task, right-click in the task area and select Add Task.
Complete the task details above and click Next to select the switch(es) to migrate.
Select the switches to be updated or replaced and click Finish.
Select the new task in the main screen and use the microviews to provide you with affected paths, impact, and quality assurance views.
Using this information, you can speed up your switch migration because it cuts the due diligence time, and you lower your risk because you know the impacts before you take any actions.
6 DATA WAREHOUSE
6.1 INTRODUCTION AND OVERVIEW
Let's introduce you to the data warehouse. We'll talk about the Datamarts and navigation, then we'll go into the reports, and we'll finish by showing you how to create ad-hoc reports using Query Studio.
The data warehouse is made up of several Datamarts. Datamarts are sets of data that relate to each other.
Open a browser and go to http://localhost:8080/reporting
Log on using admin/admin123
If you receive this page, uncheck "Show this page…" and select My Home.
Data warehouse (DWH) Home Page: Public Folders
The data warehouse has several built-in Datamarts. Above you see the three primary Datamarts, called the Chargeback Datamart, the Inventory Datamart, and the Storage Efficiency Datamart. Additionally, we have two folders which contain other Datamarts for Capacity and Performance.
Select the Capacity 6.3 folder.
As you can see, there are other Capacity-related Datamarts, including the Internal Volume, Volume, Storage and Storage Pool, and VM Capacity Datamarts. These provide you with easy-to-use data elements related to those specific categories, making it easier for you to use the existing reports and, more importantly, helping you create your own custom reports using the drag-and-drop technology we'll show later in this lab.
Select the Storage Capacity Datamart.
There are 4 folders located within EVERY Datamart. Most built-in reports are in the Reports folder. Any custom reports you create MUST be saved in the Customer Report or Customer Dashboard folders in order to preserve them during upgrades.
Select Dashboards (notice the BREADCRUMBS to help you navigate).
Which dashboards are located in the folder?
6.2 PLAN - CAPACITY FORECAST DASHBOARD
The data warehouse has over 200 built-in dashboards and reports. Let's take a look at a few.
The capacity forecast dashboard provides a history of how storage has been used, as well as trends and forecasts into the future. It shows this by data center and by tier.
Select the Capacity Dashboard. This may take a bit of time to paint, so be patient.
The capacity forecast dashboard provides you with trending and forecasting of your capacity across your entire environment. NOTE: your data may differ from the picture depending on the demo DB you are using and the date (because it's a trending chart).
While we are at it, let's also stage the tiering dashboard in a new window by holding the Shift key and selecting the Tiering Dashboard, so we can discuss it as well in a few minutes.
When it first opens, you see in the upper left the Capacity Consumption Forecast report by datacenter and tier. The initial view shows how much storage is left in each datacenter, by tier, before it reaches 80% of capacity (adjustable by the user). The graph on the right depicts the usage trending and forecasting over time. The "Reset Selection" button resets the graphic to show storage trending across the entire enterprise.
Select the Tokyo / Gold-Fast block on the matrix. Notice that the graph at the right changes to reflect the storage consumption trending and forecasting for that tier at that datacenter.
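OnCommand Insight computes this forecast internally, but the underlying idea is a trend line projected forward to the threshold. As a rough, purely hypothetical sketch (not the product's actual algorithm), a linear fit over historical used-capacity samples can estimate when a tier will cross the 80% mark:

```python
# Hypothetical sketch: estimate when used capacity crosses 80% of total,
# using a simple linear least-squares trend. OnCommand Insight's real
# forecasting logic is internal; this only illustrates the idea.

def days_until_threshold(samples, total_gb, threshold=0.80):
    """samples: list of (day_number, used_gb) history points."""
    n = len(samples)
    sum_x = sum(x for x, _ in samples)
    sum_y = sum(y for _, y in samples)
    sum_xx = sum(x * x for x, _ in samples)
    sum_xy = sum(x * y for x, y in samples)
    slope = (n * sum_xy - sum_x * sum_y) / (n * sum_xx - sum_x ** 2)
    intercept = (sum_y - slope * sum_x) / n
    if slope <= 0:
        return None  # no growth: threshold never reached on this trend
    target = threshold * total_gb
    return (target - intercept) / slope  # day number when trend hits target

# Example: a 200 GB tier growing ~2 GB/day from 50 GB used
history = [(0, 50), (10, 70), (20, 90)]
print(days_until_threshold(history, total_gb=200))  # 55.0
```

The dashboard answers the same question visually: the point where the projected trend line meets the (user-adjustable) threshold.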
Reset the Storage Capacity Trend chart by clicking Reset Selection next to the matrix chart. The chart on the right will show the trending and forecasting for the entire enterprise.
Scroll down the dashboard to view the list of reports on the right side. Each of the dashboards has a list of related reports on the lower right-hand side. You can select from any number of different reports to provide the detailed information that you need.
The dashboard also contains some dial graphics showing you storage consumption and capacity in your enterprise and in each datacenter.
Continuing down the left side of the dashboard, these charts show you business-level storage consumption by business entities. Here we can drill down to see usage by Tenant, Line of Business, Business Unit, and Project.
Right-click in this graphic and you can drill down to view storage usage by line of business, drill again to business unit, and then by project.
As you can see, you get really detailed information on consumption by your business entities, from Tenant and LOB down to Business Unit, Project, and Application, in a very quick form.
6.3 TIER DASHBOARD
Let's take a look at the Tier Dashboard that we opened a few minutes ago by selecting it from the taskbar at the bottom of your Windows screen.
Note: your data may vary depending on the database used for this demo.
This dashboard gives us a different perspective on how storage is growing and how it is being used. As you see, it looks like the Gold tier has remained relatively stable over the past few months, while Gold-Fast storage, which is more expensive, has grown considerably over the past couple of months. This tells you how your tiering initiatives are progressing. Bronze, which has hardly grown at all, could be an indication that we're spending too much money on storage. You might want to review your storage usage using OnCommand Insight to see how the storage is being consumed and by whom.
Scroll down, and let's look a little closer. OnCommand Insight shows storage usage by business units, applications, and tier. This enables you to understand how storage is being used. You can also view it by data center, tier, and business entity.
As we did in the last report, you can right-click and drill down to look at consumption by tenant, line of business, business unit, project, and application. You can understand how your data is being consumed at multiple levels and from multiple aspects.
Select the "Return" icon at the top right of the Tier Dashboard to return to the folder.
There is a new Storage Tier report located in the Storage and Storage Pool Datamart. Let's take a quick look at it.
Use the breadcrumbs to navigate back to the Capacity 6.3 folder.
Then select the Storage and Storage Pool Capacity Datamart and its Reports folder.
Next, select the Storage Capacity by Tier report to view the report below. This report shows your capacity by tier and how it trends over time. It also provides a detailed and a summary report at the bottom showing each array's tiers, how much capacity is used, and the percentage (lots of information on a single report).
6.4 ACCOUNTABILITY AND COST AWARENESS
The standard data warehouse chargeback reports are more about accountability than actual chargeback. We'll show you this now. We'll also show you how to create your own powerful "custom" chargeback/showback reports using Business Insight Advanced later in this lab.
Select Public Folders in the breadcrumbs at the top left of the data warehouse window.
Select the Chargeback Datamart.
In the Chargeback Datamart, select the Reports folder to access various reports that show capacity and accountability.
Select Capacity Accountability by Business Entity and Service Level Detail. Here you have the option to customize this report to your needs by selecting service levels, resource types, applications, and host and storage names. You also have the option of selecting the business entity by using the dropdown to select any or all of the business entities and projects.
Select all in each category to give you a good representation of the in-depth reporting. Then click Finish.
The report provides a very detailed view of capacity utilization by business entity, application, the host it's running on, the storage array, the volume, and the actual provisioned and used storage. The report is grouped by business unit as well as application; this provides you with a good representation of who's using what storage.
Note the scroll bar for scrolling on page 1; you can also use the Page Up/Page Down links at the bottom to go to page 2, etc.
Select the Return icon in the upper right to return to the folder of reports.
6.5 UNCHARGED STORAGE
You can also generate reports that help you understand what storage is NOT being accounted for.
Select "Capacity Accountability by Uncharged Capacity per Internal Volume". This provides you with a complete listing, by array and volume, of how much storage is not being charged or accounted for.
You get FULL accountability of which storage is being accounted for and which storage is NOT, across the entire enterprise, regardless of storage vendor.
6.6 IOPS VS. CAPACITY REPORTING IN THE DATA WAREHOUSE
Let's look at performance versus capacity, and at orphaned storage by last access. This adds another dimension to how your storage is being used.
Open the Performance Datamart (hint: use the breadcrumbs to select Public Folders and then select the Performance Datamart).
Select the Internal Volume Daily Performance folder. This provides a really good pictorial view of how your storage is being used.
Select Reports and select "Allocated used internal volume Count by IOPS Ranges". This provides a capacity-versus-IOPS report, which is very interesting.
Select the Last Year time period.
Select all storage models and tiers and click Finish.
Selecting all arrays and all tiers gives you a full view of how your storage is being used (or not being used…).
Looking at the results, remember this is storage accessed over the past year. The resulting report shows you all the storage that has (or has not) been accessed over the past year.
As you see from the first bar, there are over 7,300 volumes that have not been accessed in the past year. If we look at it in terms of size, over 3.4 PB has had zero access in the past year. Note: this is actual customer data, but the names have been sanitized.
You can see how impactful this is. There is over 3.4 PB of storage that has had zero use for a year. This information enables you to start making business decisions about the storage and to better understand how it's being used, so you can reclaim and re-purpose some of that storage. (Talk about ROI!)
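To put a rough dollar figure on reclamation, here is some back-of-the-envelope arithmetic; the $5-per-GB fully loaded cost is purely a made-up assumption, so substitute your own figure:

```python
# Back-of-the-envelope ROI: value of reclaiming one petabyte of unused
# storage, using an assumed (hypothetical) fully loaded cost of $5 per GB.
COST_PER_GB = 5.00           # assumption, not a NetApp figure
GB_PER_PB = 1024 * 1024      # binary units: 1 PB = 1,048,576 GB

value_per_pb = GB_PER_PB * COST_PER_GB
print(value_per_pb)  # 5242880.0 -> over $5M per PB reclaimed
```

Even at a much lower cost per GB, multiple petabytes of idle storage represent a substantial reclamation opportunity.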
6.7 DIGGING INTO THE DETAILS
These charts are really nice, but you need the detail to effectively work on identification and recovery. OK, let's go look at the underlying details.
Go back to the Volume Daily Performance 6.3 folder and drill down to Reports (hint: it's in the Performance Datamart).
Select the Array Performance report. This gives you a complete breakdown of the performance for all storage, from the arrays all the way down to the volumes.
Select one year and set the IOPS parameter you want to filter on (I usually start at the default).
This report starts with the Orphan Summary.
Select Page Down to view the storage array summary.
As you see, this is pretty high-level. It shows the total amount of raw and allocated capacity in each storage device versus the total IOPS and the max IOPS actually used over the past year. This tells a very compelling story, but it's still high level.
Page down a few pages to reach the bottom of this section. You see a glossary of terms explaining the column headings.
Now continue to page down to the Host tables. These show you the hostname, the raw and allocated capacity by host, and the IOPS accessed over the past year. This is more detail than the Storage tables above.
Page down past the host tables and you reach the orphaned-volumes perspective. Here is a great deal of detail that you can use. These are all the volumes that have not been accessed in a full year. It shows you the array name, volume capacities, and hostname, as well as the applications and tiers, for everything that has not been accessed in the last year.
Page down to the "Volume by IOPS" tables (they may be several pages down). These show you the storage array, volume, capacity, host, application, tier, and the max and total amount of IOPS. So we can say it's a pretty well-rounded report that shows you actual usage (or lack thereof), so you can go reclaim the storage that is not used.
6.8 VM CAPACITY REPORTING
There are several different reports in the VM Capacity Datamart.
Navigate to the VM Capacity 6.3 Datamart.
As you see, we have several reports built in here already.
Select VM Capacity 6.3, then navigate into the Reports folder.
Select VM Capacity Summary.
Select all so we see the VM capacity across the entire enterprise (spanning multiple vCenters).
The results show all the VMs: their capacity, the datastore and its actual capacity, the VM names, the provisioned storage, and the commit ratio of each VM across your entire environment. NOTE: I paged down to the bottom so you can see the total storage and commitment across your whole enterprise, plus a glossary of terms.
Select the "return" button in the upper right corner of the report (it looks like a left-turn arrow).
Next, select the Inactive VMs report to show you VMs that have not been accessed in a defined period of time (default 60 days).
Set this time threshold and click Finish.
This is an excellent report showing you which VMs are powered off, how long they have been powered off, and how much capacity each one of them is holding that nobody else can use. It gives you all the details, including the datacenter, VM, OS, ESX host, cluster, VMDK, and how long it's been powered off. Armed with this information, you can go recover these VMs and reclaim the storage.
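Conceptually, this report is a filter-and-sum over the VM inventory: keep the VMs powered off longer than the threshold and total the capacity they hold. A hypothetical sketch of the idea (the field names here are invented, not OnCommand Insight's schema):

```python
# Hypothetical sketch of what the Inactive VMs report computes: filter VMs
# powered off longer than a threshold and total the capacity they hold.
def inactive_vms(vms, threshold_days=60):
    idle = [vm for vm in vms if vm["powered_off_days"] > threshold_days]
    reclaimable_gb = sum(vm["capacity_gb"] for vm in idle)
    return idle, reclaimable_gb

vms = [
    {"name": "vm-a", "powered_off_days": 0,   "capacity_gb": 80},
    {"name": "vm-b", "powered_off_days": 90,  "capacity_gb": 200},
    {"name": "vm-c", "powered_off_days": 400, "capacity_gb": 150},
]
idle, gb = inactive_vms(vms)
print([vm["name"] for vm in idle], gb)  # ['vm-b', 'vm-c'] 350
```

The report adds the extra per-VM details (datacenter, OS, ESX host, cluster, VMDK) so you can act on each candidate.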
7 CREATE AD-HOC REPORT
Let's show you how easy it is to create custom reports in the data warehouse.
7.1 HOW TO CREATE A CUSTOM SHOWBACK/CHARGEBACK REPORT USING BUSINESS INSIGHT ADVANCED
Below is a great example of the custom chargeback or showback report that you will create. It shows usage by business entity and application, including the variable cost of each VM based on its configuration, fixed overhead, and storage usage.
STEPS TO CREATE THIS REPORT
Watch a video on how to create this report. Note: you need a user name and password for this community; to obtain them, click the Become a Member link.
The OnCommand Insight Reporting Portal is accessed through http://<reporting-server>:8080/reporting
Enter your user name and password credentials.
From the Welcome page, select My home.
From the Launch menu (at the top right corner of the OnCommand Insight Reporting portal), select Business Insight Advanced.
From the list of all packages that appears, click on the Capacity <version> folder, then click on VM Capacity <version>.
Create a new report by selecting New from the dropdown in the upper left corner, or Create New if you are on the Business Insight Advanced landing page.
From the pre-defined report layouts in the New pop-up, choose List and click OK.
In the lower right pane, select the Source tab and expand Advanced Data Mart from the VM Capacity package.
From the Advanced Data Mart, expand Business Entity Hierarchy and Business Entity, and drag Tenant onto the report work area.
Collapse Advanced Data Mart and expand Simple Data Mart.
From Simple Data Mart, drag Application and place it on the report work area to the right of the Tenant column. (TIP: Make sure you place it on the blinking gray BAR to the right of the previous column, or it will give you an error.)
Now we are going to drag multiple columns to the palette to save time building the report.
We will be reporting on the total number of processors (cores) and the memory that is configured for each VM. So let's grab the following elements from the VM Dimension under the Advanced Data Mart.
From Advanced Data Mart, expand VM Dimension.
Select the next columns IN THE FOLLOWING ORDER.
From Advanced Data Mart > VM Dimension, hold the Control key and select the following columns (in order):
o VM Name
o Processors
o Memory
Click and drag VM Name and place it on the report work area to the right of the Application column. NOTE: All the columns should follow in the order you selected them, similar to the screenshot below (your data will differ, but the columns will be the same).
Now let's bring capacity information onto the report.
From Simple Data Mart, hold the Control key and select the following columns (in order):
o Tier
o Tier Cost
o Provisioned Capacity (GB)
Click and drag the Tier column and place it on the report work area to the right of the Application column. NOTE: All the columns should follow in the order you selected them, similar to the screenshot below (your data will differ, but the columns will be the same).
To create a summary of cost per GB, hold the Control key and select Tier Cost and Provisioned Capacity (GB).
Then right-click the Provisioned Capacity column, select Calculate, and select the multiplication calculation.
Business Insight Advanced has created a new column for you, completed the calculations, and put it in the report.
Next, let's format and re-title the column.
Right-click on the new column header and select Show Properties.
In the lower right corner, scroll down to the bottom of the Properties box and select the ellipsis on the Data Item Name box. Change the name to Storage Cost and click OK.
Note the column heading is now Storage Cost.
Now select one of the numeric values in that column and select the Data Format ellipsis from the Properties box in the lower right corner.
From the Data Format dialog box, select Currency from the Format type dropdown.
As you see from the Properties dialog box, there are lots of options you can set to format the currency numbers in this column. The default is USD, so let's just click OK to accept the default. You will see the column reformat to USD.
Here is our current report. Let's filter out storage that is NOT being charged.
Select any BLANK cell in the Tier Cost column and click the Filter icon in the top toolbar.
Select Exclude Null.
Here is our current report. Notice that all the rows that had NO cost associated with their tiers are removed, leaving you with only the storage that has charges associated with it. (TIP: in another report, you can reverse the logic and show only the storage that is NOT being charged as well…)
You can also format the Tier Cost column as USD currency if you want.
OK, that was easy, but not complete. Let's add other cost factors into your chargeback report: the cost of VM service levels by configuration, and the fixed overhead costs used by each application.
ADDING VARIABLE COSTS PER VM TO YOUR CHARGEBACK REPORT
Let's say the customer wants to charge per VM based on the number of CPUs and the memory it's configured with. To do that, we need to first create a VM Service Level comprised of the number of CPUs and the memory configured for each VM, then allocate a cost per service level.
To create a VM Service Level, we are going to drop in a small conditional expression to build the service levels per VM. This is an easy example of the flexibility of Business Insight Advanced in creating reports. (DON'T panic: you can skip the conditional expression and just put a fixed cost on each VM if you want; see the Overhead example later on. But humor me here in this lab.)
Select the Tier column to mark where we want to insert the new columns.
Select the Toolbox tab in the lower right corner and double-click the Query Calculation icon.
In the Create Calculation dialog box, name the column VM Service Level, select Other Expression, and click OK.
In the Data Item Expression dialog box, copy and paste the following VM Service Level conditional expression into the Expression Definition box and select OK. (Note: if you are remoted into the OnCommand Insight server, you may have to create a text document on the OnCommand server desktop to cut and paste this into prior to pasting it into the Expression box.)
Below is an example of the conditional expression that gives you the if-else conditions for VM Service Level:
IF ([Processors] = 2 AND [Memory] < 2049)
THEN ('Bronze')
ELSE (IF ([Processors] = 2 AND [Memory] < 4097)
THEN ('Bronze_Platinum')
ELSE IF ([Processors] = 4 AND [Memory] < 8193)
THEN ('Silver')
ELSE IF ([Processors] = 4 AND [Memory] > 8193)
THEN ('Silver_Platinum')
ELSE IF ([Processors] = 6 AND [Memory] > 8191)
THEN ('Gold')
ELSE IF ([Processors] = 8 AND [Memory] > 16383)
THEN ('Gold_Platinum')
ELSE ('tbd'))
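If you want to sanity-check the tier boundaries before pasting the expression, the same rules can be sketched in plain Python (a hypothetical equivalent, with memory in MB, mirroring the thresholds above; note the expression leaves gaps, e.g. a 4-CPU VM with exactly 8193 MB falls through to 'tbd'):

```python
# Hypothetical Python equivalent of the VM Service Level expression above,
# useful only for sanity-checking the tier boundaries (memory in MB).
def vm_service_level(processors, memory):
    if processors == 2 and memory < 2049:
        return "Bronze"
    elif processors == 2 and memory < 4097:
        return "Bronze_Platinum"
    elif processors == 4 and memory < 8193:
        return "Silver"
    elif processors == 4 and memory > 8193:
        return "Silver_Platinum"
    elif processors == 6 and memory > 8191:
        return "Gold"
    elif processors == 8 and memory > 16383:
        return "Gold_Platinum"
    return "tbd"

print(vm_service_level(2, 1024))   # Bronze
print(vm_service_level(4, 16384))  # Silver_Platinum
```

Adjust the thresholds to whatever service-level definitions your customer actually uses before building the real expression.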
Business Insight Advanced will validate the conditional expression (nice to know if you got it right), create the column called VM Service Level, and populate it based on the query. (If you get an error, your conditional expression probably has a syntax or other error.)
You will see a new column added called VM Service Level, with the various service levels for each VM based on the number of CPUs and the memory each has. (At this point there may be duplicates in the list, but not to worry; we are not finished formatting or grouping the report.)
Next, let's add a column that calculates the cost per VM based on the service levels we just established.
Select the Toolbox tab in the lower right corner and double-click the Query Calculation icon.
In the Create Calculation dialog box, name the column Cost Per VM, select Other Expression, and click OK.
In the Data Item Expression dialog box, cut and paste the conditional expression for Cost of VM (below) into the Expression Definition box and select OK.
Example of the conditional expression for Cost per VM:
IF ([VM Service Level] = 'Bronze') THEN (10) ELSE (IF ([VM Service Level] = 'Bronze_Platinum') THEN (15) ELSE IF ([VM Service Level] = 'Silver') THEN (20) ELSE IF ([VM Service Level] = 'Silver_Platinum') THEN (25) ELSE IF ([VM Service Level] = 'Gold') THEN (40) ELSE IF ([VM Service Level] = 'Gold_Platinum') THEN (55) ELSE (30))
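Since this second expression is just a lookup on the service level, a table makes the mapping easier to audit. Here is a hypothetical Python equivalent of the expression above (30 is the fallback for the 'tbd'/unknown level):

```python
# Hypothetical lookup equivalent of the Cost Per VM expression above.
VM_COSTS = {
    "Bronze": 10, "Bronze_Platinum": 15,
    "Silver": 20, "Silver_Platinum": 25,
    "Gold": 40, "Gold_Platinum": 55,
}

def cost_per_vm(service_level):
    return VM_COSTS.get(service_level, 30)  # 30 = default for "tbd"/unknown

print(cost_per_vm("Gold"))  # 40
```

Keeping the prices in one table like this also makes it obvious what to change when the customer's rates change.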
You will see a new column added called Cost Per VM, with a variable cost for each VM based on its service level.
Next, format the data in the Cost Per VM column as USD currency, as you did above.
ADDING FIXED OVERHEAD COSTS TO YOUR CHARGEBACK REPORT
Let's say the customer has determined that the total cost of overhead (including items like heat/AC, floor space, power, rent, operations personnel, helpdesk, etc.) is $24 per VM. Let's create a column called Cost of Overhead and apply this fixed cost. (Note: you can do this for any fixed costs as well, rather than using SQL.)
Select the Toolbox tab in the lower right corner and double-click the Query Calculation icon, as above.
In the Create Calculation dialog box, name the column Cost of Overhead, select Other Expression, and click OK.
In the Data Item Expression dialog box, enter a cost of 24 in the Expression Definition box and select OK.
You will see a new column added called Cost of Overhead, with 24 for each VM. (Note: at this point there may be duplicates in the list, but not to worry; we are not finished formatting or grouping the report.)
Next, format the data in the Cost of Overhead column as USD currency, as you did above. Then drag the column header and drop it to the right of the Storage Cost column, as shown below.
Subtotaling, naming, and saving the report
Now that we have a cost per VM, overhead, and the cost of storage usage by Tenant, Application, and VM, let's sum the total costs and finish formatting the report by Tenant and Application.
Hold the Control key down and select a numeric cell in each of the Cost Per VM, Storage Costs, and Cost of Overhead columns. Right-click one of the numeric cells, select Calculate, and choose the add function for the three columns.
This will create a new column called "Cost per VM + Storage Costs + Cost of Overhead" and calculate each row.
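Per row, the new column is simply the three costs added together, with the storage cost itself being tier cost times provisioned GB. A hypothetical worked example using the $24 overhead from the previous step (the VM and tier prices are illustrative):

```python
# Hypothetical per-row total, mirroring the "Cost per VM + Storage Costs +
# Cost of Overhead" column: storage cost is tier cost ($/GB) x provisioned GB.
def total_cost_of_services(cost_per_vm, tier_cost_per_gb, provisioned_gb,
                           overhead=24):
    storage_cost = tier_cost_per_gb * provisioned_gb
    return cost_per_vm + storage_cost + overhead

# e.g. a Silver VM ($20) on $0.50/GB storage with 100 GB provisioned:
print(total_cost_of_services(20, 0.50, 100))  # 20 + 50 + 24 = 94.0
```

Business Insight Advanced performs exactly this addition for every row once you apply the Calculate > add function.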
Now format the column as USD currency and retitle it "Total Cost of Services".
Name the report "Total Storage, VM, and Overhead Cost by Tenant and Application Chargeback (Showback)" by double-clicking the title area.
Now save it to the Customer Report folder using the same name.
FORMATTING AND GROUPING THE REPORT BY APPLICATION AND TENANT
We are not done yet. Now we need to format the report by grouping, subtotaling, and totaling by tenant and application.
Hold the Control key down and select the "Cost Per VM, Provisioned Capacity, Storage Costs, Cost of Overhead, and Total Cost of Services" columns.
Select the Total icon from the Summary dropdown icon.
If you page down to the bottom of the report, you will see total columns. We'll clean up the summary rows in a minute.
Let's group the report by Tenant and Application.
Hold the Control key down and select the Tenant and Application columns.
Select the Grouping icon from the top toolbar.
CLEANING UP THE REPORT AND RUNNING IT
To clean up the report, right-click and delete the summary ROWS (not columns).
Then go to the bottom of the report, hold the Control key and select both summary rows, then right-click and delete them. (Leave the TOTAL rows.)
Save the report.
Now let's run the report to see how it looks.
Select the Run icon from the toolbar and run the report as HTML (note the other formats you can run it in if you want…).
The report will look like this in its final format. I've paged down in the report below to show you subtotals; you can page to the bottom and see the totals by company and the total of all resources charged.
These reports are extremely flexible. Notice the drill-down link in the Tenant column (pictured above in the red circle). If you click on the LINK, you will drill down from Tenant to Line of Business, then to Business Unit, etc. If you right-click on the link, you can drill up as well.
You can now schedule this report to run and be distributed in various formats, like any other OnCommand Insight Data Warehouse report.
Remember, now that you have created this report, every time you run it, it will provide the latest usage information. You can automate this report by scheduling it to run and emailing it to recipients, etc. Lots of flexibility…
7.2 OTHER OPTIONS FOR AD-HOC REPORTS USING QUERY STUDIO
You can also create simple ad-hoc reports using Query Studio. A very simple example is shown here.
Log onto the data warehouse using admin/admin123 (you must be logged on as Admin to use Query Studio).
From Public Folders, select the Chargeback Datamart.
Select the Launch menu in the upper right corner of the view and select Query Studio.
The Datamart is split into a "simple Datamart" and an "advanced Datamart". The Simple DM contains the elements that most users use for reports; the Advanced DM contains all the facts and dimensions for all the elements. At this point, we'll create this report using the Simple DM to show you how easy it is.
Expand the Simple DM and do the following:
Click and drag Business Unit to the palette.
Click and drag the Application element to the palette. You see the applications line up with their proper Business Units.
Click and drag Tier over to the palette to organize the storage usage by tier.
Click and drag "Provisioned Raw by GB". (You can select megabytes or terabytes as well as gigabytes; I've selected GB because this is from a volume and application perspective.)
To calculate cost, we need to add the "Tier Cost" to the report.
Click and drag the "Tier Cost" element over and place it between the Provisioned Raw and the Tier columns.
To filter out any storage without a tier cost associated, right-click the heading of the Tier Cost column and select Filter (see below for reference):
o Select "Show only the following" (default)
o Select "Missing values" to expand it
o Select "Leave out missing values"
o Select OK
See the results below.
Now let's calculate the total cost of usage by GB per application.
Hold the Control key and highlight the "Provisioned Capacity" and Tier Cost columns until they show yellow.
Select the green Calculation icon from the edit icons at the top, or right-click on the columns and select "Calculate".
In the calculation window, select multiplication, title the new column "Cost for Storage", and click Insert. It creates a new column and completes the calculation.
To format the column, right-click on the new column and select Format Data.
Select currency, the number of decimal places (usually 0), and the 1000s separator, then click OK. See how the column is formatted now.
Double-click the "Title" on the report and re-title the report "Chargeback by Application and BU".
Now, you don't really need the Tier Cost column, so you can delete it by right-clicking on the column and selecting Delete.
This is a good raw report, but now let's make it more useful.
To group storage cost by Business Unit and Application:
Select the Business Unit column (it turns yellow) and select the Group By icon on the top line.
You see the report reformat itself into cost by application by business unit.
Click the "Save As" icon and save the report to the public folders.
Further editing
You can go back and further edit the report, like this:
Let's filter out all the N/A values in the BU and Application columns. You have to do this one column at a time.
Right-click the BU column and select Filter.
In the filter dialog window, select "Do not show the following (NOT)" from the "Condition" dropdown.
Select N/A and click OK.
Do the same for the Application column.
Then save the report again.
As you see, you now have a better-quality report.
To exit Query Studio, click the "Return" icon at the top right corner of the screen.
8 SCHEDULING REPORTS FOR DISTRIBUTION
OK, now that the report is saved, let's schedule it for running and distribution. You can schedule all of the built-in reports in OnCommand Insight.
Go to the chargeback report we just created (you should be looking at where you saved it…).
Select the Schedule icon on the right-hand side, where you can set the properties.
As you see on the right, you can schedule the start and finish dates.
You can also send this report just one time by clicking Disable.
Set the schedule options for weekly, daily, monthly, and so on. Schedule this report to run and send it to yourself at 3 p.m. every Tuesday until Feb 1, 2012. As you can see, you can schedule biweekly, several times a week, or several times a day, or set it up by month, by year, or even by trigger. Lots of options.
There are also many options for report format. The default is HTML, but you can override it by clicking and choosing from PDF, Excel, XML, CSV, and so on.
For delivery, you can email the report, save it, or print it to a specific printer. You can send the report via e-mail to users, distribution lists, and so on, and either include a link to the report or attach it directly to the email. NOTE: Recipients must be able to log into the OnCommand DWH to access the link.
When you are done, click OK and the schedule is set.
9 ENDING COMMENTS AND FEEDBACK
I hope this lab was of value to you. Your feedback is important to the quality of this lab document. Please provide feedback to Dave Collins at davecnetappcom.
NetApp provides no representations or warranties regarding the accuracy, reliability, or serviceability of any information or recommendations provided in this publication, or with respect to any results that may be obtained by the use of the information or observance of any recommendations provided herein. The information in this document is distributed AS IS, and the use of this information or the implementation of any recommendations or techniques herein is a customer's responsibility and depends on the customer's ability to evaluate and integrate them into the customer's operational environment. This document and the information contained herein may be used solely in connection with the NetApp products discussed in this document.
© 2012 NetApp, Inc. All rights reserved. No portions of this document may be reproduced without prior written consent of NetApp, Inc. Specifications are subject to change without notice. NetApp, the NetApp logo, Go further, faster, xxx, and xxx are trademarks or registered trademarks of NetApp, Inc. in the United States and/or other countries. All other brands or products are trademarks or registered trademarks of their respective holders and should be treated as such.
TR-XXX-XX
Select the Planning menu.
Select Plans to access the tool.
Select the task ID oadmin 01082007 – Replace HBA Clearcase1.
Notice the Actions list for the task. These actions are generated by you, to help you logically and accurately list out the steps of the task.
To add more actions, simply right-click in the action area and select "Add Action."
In the new action window, scroll down and select the action you want to perform. You can add a description and other parameters, then select OK.
Then you can pre-validate the actions to ensure you know the results of each action BEFORE you actually perform the task. To do this, right-click the task and select Validate Task.
As you see below, OnCommand Insight validates each action against the current configuration of your environment to show what has been completed correctly (GREEN CHECKMARK), what has not been completed (BLANK BOX), and what has been completed incorrectly (RED X).
When you build the action list, OnCommand Insight automatically compares your planned changes to your existing environment and anticipates any future violations that could occur if you made these changes without correcting the planned actions, as well as violations that already exist in your environment.
Once you finish creating your list of action items, you can right-click and validate the actions as many times as you want until they are completed. OnCommand Insight validates every one of these actions and shows you whether each is complete, done incorrectly, or not completed at all. This gives you a preview of potential issues before you make the changes, lowering your risk.
52 SWITCH MIGRATION TOOL
The migration tool provides you with instantaneous visibility into all of the environment and business entities that will be affected by a migration to new or updated switches. Say you want to just update the firmware on a switch. What if it goes down in the middle of the upgrade? What does that affect in your environment? Knowing this ahead of time reduces your risk by giving you the complete picture of who and what will be affected by the interruption.
The migration tool lets you tell OnCommand Insight which switches you want to upgrade or replace. Because OnCommand Insight knows all the hosts, storage arrays, volumes, business units, and applications that are affected by the change, it can show you the current violations as well as the future violations that will occur when the switches are pulled out. This enables you to validate the total impact of the changes you want to make BEFORE you make them, so you can reduce your risk by fixing issues before they occur.
NOTE: Remember, OnCommand Insight is a READ-ONLY tool, so it does not perform any migration tasks. Use it in the planning and execution monitoring of your migration.
Under the Planning menu, select Migrations. This shows you the migration tasks already created and the existing impact of proposed changes on your business entities.
To add a new task, right-click in the task area and select Add Task.
Complete the task details above and click Next to select the switch(es) to migrate.
Select the switches to be updated or replaced and click Finish.
Select the new task in the main screen and use the microviews to see the affected paths, the impact, and the quality assurance views.
Using this information, you can speed up switch migrations, because it cuts the due-diligence time, and you lower your risk because you know the impacts before you take any action.
6 DATA WAREHOUSE
61 INTRODUCTION AND OVERVIEW
Let's introduce you to the data warehouse. We'll talk about the datamarts and navigation, then go into the reports, and finish by showing you how to create ad hoc reports using Query Studio.
The data warehouse is made up of several datamarts: sets of data that relate to each other.
Open a browser and go to http://localhost:8080/reporting.
Log on using admin/admin123.
If you receive this page, uncheck "Show this page..." and select My Home.
Data warehouse (DWH) home page: Public Folders
The data warehouse has several built-in datamarts. Above you see the three primary ones: the Chargeback Datamart, the Inventory Datamart, and the Storage Efficiency Datamart. Additionally, there are two folders that contain other datamarts for Capacity and Performance.
Select the Capacity 6.3 folder.
As you can see, there are other capacity-related datamarts here, including the Internal Volume, Volume, Storage and Storage Pool, and VM Capacity datamarts. Datamarts give you easy-to-use data elements related to those specific categories, making it easier to use the existing reports and, more importantly, to create your own custom reports using the drag-and-drop techniques we'll show later in this lab.
Select the Storage Capacity Datamart.
There are four folders located within EVERY datamart. Most built-in reports are in the Reports folder. Any custom reports you create MUST be saved in the Customer Reports or Customer Dashboards folders in order to preserve them during upgrades.
Select Dashboards (notice the breadcrumbs that help you navigate).
Which dashboards are located in the folder?
62 PLAN - CAPACITY FORECAST DASHBOARD
The data warehouse has over 200 built-in dashboards and reports. Let's take a look at a few.
The Capacity Forecast dashboard provides a history of how storage has been used, as well as trends and a forecast into the future. It is broken out by data center and by tier.
Select the Capacity dashboard. It may take a bit of time to paint, so be patient.
The Capacity Forecast dashboard gives you trending and forecasting of your capacity across your entire environment. NOTE: Your data may differ from the picture depending on the demo database you are using and the date (because it's a trending chart).
While we are at it, let's also stage the Tiering dashboard in a new window by holding the Shift key and selecting the Tiering dashboard, so we can discuss it as well in a few minutes.
When it first opens, you see in the upper left the Capacity Consumption Forecast report by datacenter and tier. The initial view shows how much storage is left in each datacenter, by tier, before it reaches 80% of capacity (the threshold is adjustable by the user). The graph on the right depicts the usage trending and forecasting over time. The "Reset Selection" button resets the graph to show storage trending across the entire enterprise.
Select the Tokyo/Gold-Fast block on the matrix. Notice that the graph at the right changes to reflect the storage consumption trending and forecasting for that tier at that datacenter.
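Conceptually, a forecast like this fits a trend to historical usage and projects it forward to the threshold. Here is a minimal Python sketch of the idea, assuming a simple linear fit over daily samples (OnCommand Insight's actual forecasting model is not documented here, so treat this as illustrative only):

```python
# Illustrative sketch (assumed linear-trend forecast, not the product's algorithm):
# fit a line to daily used-capacity samples and project how many days remain
# until usage reaches a threshold fraction (default 80%) of total capacity.

def days_until_threshold(used_gb, capacity_gb, threshold=0.80):
    """used_gb: list of daily used-capacity samples in GB, oldest first."""
    n = len(used_gb)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(used_gb) / n
    # Ordinary least-squares slope (GB of growth per day)
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, used_gb)) \
            / sum((x - mean_x) ** 2 for x in xs)
    if slope <= 0:
        return None  # flat or shrinking usage: threshold never reached
    target = threshold * capacity_gb
    return max(0, (target - used_gb[-1]) / slope)

# Example: ~2 GB/day growth toward the 80% mark of a 1000 GB pool
print(days_until_threshold([100, 102, 104, 106, 108], 1000))  # 346.0
```

With perfectly linear samples the fit is exact, so the 80% mark (800 GB) is reached 346 days after the last sample.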
Reset the Storage Capacity Trend chart by clicking Reset Selection next to the matrix chart. The chart on the right will again show the trending and forecasting for the entire enterprise.
Scroll down the dashboard to view the list of reports on the right side. Each dashboard has a list of related reports in the lower right-hand corner, and you can select from any number of different reports to get the detailed information you need.
The dashboard also contains dial graphics showing storage consumption and capacity in your enterprise and in each datacenter.
Continuing down the left side of the dashboard, these charts show business-level storage consumption by business entity. Here we can drill down to see usage by tenant, line of business, business unit, and project.
Right-click in this graphic and you can drill down to view storage usage by line of business; drill again to get to business unit and then project.
As you can see, you get very detailed information on consumption by your business entities, from tenant to LOB, business unit, project, and application, very quickly.
63 TIER DASHBOARD
Let's take a look at the Tiers dashboard that we opened a few minutes ago by selecting it from the taskbar at the bottom of your Windows screen.
Note: Your data may vary depending on the database used for this demo.
This dashboard gives us a different perspective on how storage is growing and how it is being used. As you can see, the Gold tier has remained relatively stable over the past few months, while Gold-Fast storage, which is more expensive, has grown considerably over the past couple of months. This tells you how your tiering initiatives are progressing. Bronze, which has hardly grown at all, could be an indication that we're spending too much money on storage. You might want to review your storage usage with OnCommand Insight to see how the storage is being consumed, and by whom.
Scroll down and let's look a little closer. OnCommand Insight shows storage usage by business unit, application, and tier, which helps you understand how storage is being used. You can also view it by data center, tier, and business entity.
As in the last report, you can right-click and drill down to look at consumption by tenant, line of business, business unit, project, and application, so you can understand how your data is being consumed at multiple levels and from multiple angles.
Select the "Return" icon at the top right of the Tier dashboard to return to the folder.
There is a new Storage Tier report located in the Storage and Storage Pool Datamart. Let's take a quick look at it.
Use the breadcrumbs to navigate back to the Capacity 6.3 folder.
Then select the Storage and Storage Pool Capacity Datamart and its Reports folder.
Next, select the Storage Capacity By Tier report to view the report below. This report shows your capacity by tier and how it trends over time. It also provides detail and summary sections at the bottom showing each array's tiers, how much capacity is used, and the percentages (lots of information on a single report).
64 ACCOUNTABILITY AND COST AWARENESS
The standard data warehouse chargeback reports are more about accountability than full chargeback. We'll show you this now. We'll also show you how to create your own powerful custom chargeback/showback reports using Business Insight Advanced later in this lab.
Select Public Folders in the breadcrumbs at the top left of the data warehouse window.
Select the Chargeback Datamart.
In the Chargeback Datamart, select the Reports folder to access various reports that show capacity and accountability.
Select Capacity Accountability by Business Entity and Service Level Detail. Here you have the option to customize this report to your needs by selecting service levels, resource types, applications, and host and storage names. You can also select the business entity by using the dropdown to pick any or all of the business entities and projects.
Select All in each category to get a good representation of the in-depth reporting, then click Finish.
The report provides a very detailed view of capacity utilization by business entity and application, including the host each application is running on, the storage array, the volume, and the actual provisioned and used storage. The report is grouped by business unit as well as by application, which gives you a good picture of who is using what storage.
Note the scroll bar for scrolling on page 1; you can also use the Page Up/Page Down links at the bottom to go to page 2 and beyond.
Select the Return icon in the upper right to return to the folder of reports.
65 UNCHARGED STORAGE
You can also generate reports that help you understand what storage is NOT being accounted for.
Select "Capacity Accountability by Uncharged Capacity per Internal Volume." This provides a complete listing, by array and volume, of how much storage is not being charged or accounted for.
You get FULL accountability of which storage is being accounted for and which is NOT, across the entire enterprise, regardless of storage vendor.
66 IOPS VS CAPACITY REPORTING IN THE DATA WAREHOUSE
Let's look at performance versus capacity, and at orphaned storage by last access. This adds another dimension to how your storage is being used.
Open the Performance Datamart (hint: use the breadcrumbs to select Public Folders, then select the Performance Datamart).
Select the Internal Volume Daily Performance folder. This provides a really good pictorial view of how your storage is being used.
Select Reports, then select Allocated used internal volume Count by IOPS Ranges. This provides a capacity-versus-IOPS report, which is very interesting.
Select the Last Year time period.
Select all storage models and tiers, and click Finish. Selecting all arrays and all tiers gives you a full view of how your storage is being used (or not being used).
Looking at the results, remember that this is storage accessed over the past year: the resulting report shows you all the storage that has (or has not) been accessed in that time.
As you can see from the first bar, there are over 7,300 volumes that have not been accessed in the past year. In terms of size, over 3.4 PB has had zero access in the past year. (Note: this is actual customer data, but the names have been sanitized.)
You can see how impactful this is: over 3.4 PB of storage has had zero use for a year. This information enables you to start making business decisions about that storage and to better understand how it's being used, so you can reclaim and repurpose some of it. (Talk about ROI!)
67 DIGGING INTO THE DETAILS
These charts are really nice, but you need the details to work effectively on identification and recovery. OK, let's go look at the underlying details.
Go back to the Volume Daily Performance 6.3 folder and drill down to Reports (hint: it's in the Performance Datamart).
Select the Array Performance report. This gives you a complete breakdown of the performance of all storage, from the arrays all the way down to the volumes.
Select one year, and set the IOPS parameter you want to filter on (I usually start at the default).
This report starts with the Orphan Summary.
Page down to view the storage array summary.
As you can see, this view is pretty high level. It shows the total amount of raw and allocated capacity in each storage device versus the total IOPS and the max IOPS actually used over the past year. It tells a very compelling story, but it's still high level.
Page down a few pages to reach the bottom of this section. You'll see a glossary of terms explaining the column headings.
Now continue to page down to the host tables. These show you the hostname, the raw and allocated capacity by host, and the IOPS accessed over the past year: more detail than the storage tables above.
Page down past the host tables and you reach the orphaned-volumes perspective. Here is a great deal of detail that you can use. These are all the volumes that have not been accessed in a full year; the report shows the array name, volume capacities, and hostname, as well as the applications and tiers involved.
Page down to the "Volume by IOPS" tables (they may be several pages down). These show you the storage array, volume, capacity, host, application, tier, and the max and total IOPS. All in all, it's a well-rounded report that shows you actual usage (or the lack thereof), so you can go reclaim the storage that is not being used.
68 VM CAPACITY REPORTING
There are several different reports in the VM Capacity Datamart.
Navigate to the VM Capacity 6.3 Datamart. As you can see, there are several reports built in here already.
Select VM Capacity 6.3, then navigate into the Reports folder.
Select VM Capacity Summary.
Select All so we see the VM capacity across the entire enterprise (spanning multiple vCenters).
The results show all the VMs: their capacity, the datastore, the actual capacity, the VM names, the provisioned storage, and the commit ratio of each VM across your entire environment. NOTE: I paged down to the bottom so you can see the total storage and commitment across your whole enterprise, plus a glossary of terms.
Select the "Return" button in the upper right corner of the report (it looks like a left-turn arrow).
Next, select the Inactive VMs report to see the VMs that have not been accessed in a defined period of time (default: 60 days).
Set the time threshold and click Finish.
This is an excellent report showing you which VMs are powered off, how long they have been powered off, and how much capacity each one is holding that nobody else can use. It gives you all the details, including the datacenter, VM OS, ESX host, cluster, and VMDK, and how long each VM has been powered off. Armed with this information, you can go recover these VMs and reclaim the storage.
7 CREATE AD-HOC REPORT
Let's show you how easy it is to create custom reports in the data warehouse.
71 HOW TO CREATE A CUSTOM SHOWBACK/CHARGEBACK REPORT USING BUSINESS INSIGHT ADVANCED
Below is a great example of the custom chargeback or showback report that you will create. It shows usage by business entity and application, including the variable cost of each VM based on its configuration, fixed overhead, and storage usage.
STEPS TO CREATE THIS REPORT
Watch a video on how to create this report. (Note: You need a user name and password for the community site; to obtain them, click the Become a Member link.)
The OnCommand Insight Reporting Portal is accessed through http://<reporting-server>:8080/reporting.
Enter your user name and password credentials.
From the Welcome page, select My home.
From the Launch menu (at the top right corner of the OnCommand Insight Reporting portal), select Business Insight Advanced.
From the list of packages that appears, click the Capacity <version> folder, then click VM Capacity <version>.
Create a new report by selecting New from the dropdown in the upper left corner, or Create New if you are on the Business Insight Advanced landing page.
From the pre-defined report layouts in the New pop-up, choose List and click OK.
In the lower right pane, select the Source tab and expand Advanced Data Mart in the VM Capacity package.
From the Advanced Data Mart, expand Business Entity Hierarchy and Business Entity, then drag Tenant onto the report work area.
Collapse Advanced Data Mart and expand Simple Data Mart.
From Simple Data Mart, drag Application onto the report work area to the right of the Tenant column. (TIP: Make sure you place it on the blinking gray bar to the right of the previous column, or you will get an error.)
Now we are going to drag multiple columns to the palette at once, to save time building the report. We will be reporting on the number of processors (cores) and the memory configured for each VM, so let's grab the following elements from the VM Dimension under the Advanced Data Mart.
From Advanced Data Mart, expand VM Dimension.
Select the next columns IN THE FOLLOWING ORDER.
From Advanced Data Mart > VM Dimension, hold the Ctrl key and select the following columns (in order):
o VM Name
o Processors
o Memory
Click and drag VM Name onto the report work area to the right of the Application column. NOTE: All the columns should follow in the order you selected them, similar to the screenshot below (your data will differ, but the columns will be the same).
Now let's bring capacity information onto the report.
From Simple Data Mart, hold the Ctrl key and select the following columns (in order):
o Tier
o Tier Cost
o Provisioned Capacity (GB)
Click and drag the Tier column onto the report work area to the right of the Application column. NOTE: All the columns should follow in the order you selected them, similar to the screenshot below (your data will differ, but the columns will be the same).
To create a summary of cost per GB, hold the Ctrl key and select the Tier Cost and Provisioned Capacity (GB) columns.
Then right-click the Provisioned Capacity column, select Calculate, and choose the multiplication calculation.
Business Insight Advanced creates a new column for you, completes the calculations, and puts them in the report.
Next, let's format and re-title the column.
Right-click on the new column header and select Show Properties.
In the lower right corner, scroll down to the bottom of the Properties box and select the ellipsis on the Data Item Name box. Change the name to Storage Cost and click OK.
Note that the column heading is now Storage Cost.
Now select one of the numeric values in the column and select the Data Format ellipsis from the Properties box in the lower right corner.
From the Data Format dialog box, select Currency from the Format Type dropdown.
As you can see from the Properties dialog box, there are lots of options for formatting the currency numbers in this column. The default is USD, so just click OK to accept it. You will see the column reformat to USD.
Here is our current report. Let's filter out the storage that is NOT being charged.
Select any BLANK cell in the Tier Cost column and click the filter icon in the top toolbar.
Select Exclude Null.
Notice that all the rows that had NO cost associated with their tiers are now gone, leaving only the storage that has charges associated with it. (TIP: In another report you could reverse the logic and show only the storage that is NOT being charged.)
You can also format the Tier Cost column as USD currency if you want.
OK, that was easy, but not complete. Let's add other cost factors into your chargeback report: the cost of VM service levels by configuration, and the fixed overhead costs used by each application.
ADDING VARIABLE COSTS PER VM TO YOUR CHARGEBACK REPORT
Let's say the customer wants to charge per VM based on the number of CPUs and the amount of memory it's configured with. To do that, we first need to create a VM service level based on the number of CPUs and the memory configured for each VM, and then allocate a cost per service level.
To create a VM service level, we are going to drop in a small conditional expression that builds the service levels per VM. This is an easy example of the flexibility of Business Insight Advanced in creating reports. (DON'T panic: you can skip the conditional expression and just put a fixed cost on each VM if you want, as in the overhead example later on, but humor me here in this lab.)
Select the Tier column to mark where we want to insert the new columns.
Select the Toolbox tab in the lower right corner and double-click the Query Calculation icon.
In the Create Calculation dialog box, name the column VM Service Level, select Other Expression, and click OK.
In the Data Item Expression dialog box, copy and paste the following VM Service Level conditional expression into the Expression Definition box and select OK. (Note: If you are remoted into the OnCommand Insight server, you may have to create a text document on the OnCommand server desktop to paste this into before pasting it into the Expression box.)
Below is an example of the conditional expression that gives you the if/else logic for the VM Service Level:
IF ([Processors] = 2 AND [Memory] < 2049)
THEN ('Bronze')
ELSE (IF ([Processors] = 2 AND [Memory] < 4097)
THEN ('Bronze_Platinum')
ELSE IF ([Processors] = 4 AND [Memory] < 8193)
THEN ('Silver')
ELSE IF ([Processors] = 4 AND [Memory] > 8193)
THEN ('Silver_Platinum')
ELSE IF ([Processors] = 6 AND [Memory] > 8191)
THEN ('Gold')
ELSE IF ([Processors] = 8 AND [Memory] > 16383)
THEN ('Gold_Platinum')
ELSE ('tbd'))
Business Insight Advanced validates the conditional expression (nice to know whether you got it right), creates the column called VM Service Level, and populates it based on the expression. (If you get an error, your conditional expression probably has a syntax or other mistake.)
You will see a new column added called VM Service Level, with the various service levels for each VM based on the number of CPUs and the amount of memory each has. (At this point there may be duplicates in the list, but not to worry; we are not finished formatting or grouping the report.)
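If it helps to sanity-check the expression, here is the same classification logic as a Python sketch (illustrative only; the report itself uses the conditional expression above, and memory thresholds are in MB):

```python
def vm_service_level(processors, memory_mb):
    """Mirror of the VM Service Level conditional expression (memory in MB)."""
    if processors == 2 and memory_mb < 2049:
        return "Bronze"
    if processors == 2 and memory_mb < 4097:
        return "Bronze_Platinum"
    if processors == 4 and memory_mb < 8193:
        return "Silver"
    if processors == 4 and memory_mb > 8193:
        return "Silver_Platinum"
    if processors == 6 and memory_mb > 8191:
        return "Gold"
    if processors == 8 and memory_mb > 16383:
        return "Gold_Platinum"
    # Fall-through, mirroring the expression's ELSE ('tbd') branch;
    # note a 4-CPU VM with exactly 8193 MB lands here, just as in the expression.
    return "tbd"

print(vm_service_level(2, 1024))   # Bronze
print(vm_service_level(4, 16384))  # Silver_Platinum
```

Walking a few (processors, memory) pairs through this function is a quick way to verify the tier boundaries before pasting the expression into the report.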
Next, let's add a column that calculates the cost per VM based on the service levels we just established.
Select the Toolbox tab in the lower right corner and double-click the Query Calculation icon.
In the Create Calculation dialog box, name the column Cost Per VM, select Other Expression, and click OK.
In the Data Item Expression dialog box, paste the conditional expression for Cost per VM (below) into the Expression Definition box and select OK.
Example of the conditional expression for Cost per VM:
IF ([VM Service Level] = 'Bronze') THEN (10)
ELSE (IF ([VM Service Level] = 'Bronze_Platinum') THEN (15)
ELSE IF ([VM Service Level] = 'Silver') THEN (20)
ELSE IF ([VM Service Level] = 'Silver_Platinum') THEN (25)
ELSE IF ([VM Service Level] = 'Gold') THEN (40)
ELSE IF ([VM Service Level] = 'Gold_Platinum') THEN (55)
ELSE (30))
You will see a new column added called Cost Per VM, with variable costs for each VM based on its service level.
Next, format the data in the Cost Per VM column as USD currency, as you did above.
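The cost lookup itself reduces to a simple mapping. As a Python sketch, using the same hypothetical dollar figures as the expression above:

```python
# Cost per VM by service level, mirroring the conditional expression above.
# The fallback value 30 corresponds to the expression's ELSE (30) branch.
VM_COSTS = {
    "Bronze": 10, "Bronze_Platinum": 15,
    "Silver": 20, "Silver_Platinum": 25,
    "Gold": 40, "Gold_Platinum": 55,
}

def cost_per_vm(service_level):
    return VM_COSTS.get(service_level, 30)

print(cost_per_vm("Silver"))  # 20
print(cost_per_vm("tbd"))     # 30
```

This also makes it easy to see where to edit the numbers when a customer's actual per-tier pricing differs from the lab values.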
ADDING FIXED OVERHEAD COSTS TO YOUR CHARGEBACK REPORT
Let's say the customer has determined that the total cost of overhead (including items like heat/AC, floor space, power, rent, operations personnel, helpdesk, and so on) comes to $24 per VM. Let's create a column called Cost of Overhead and apply this fixed cost. (Note: you can do this for any fixed cost rather than using SQL.)
Select the Toolbox tab in the lower right corner and double-click the Query Calculation icon, as above.
In the Create Calculation dialog box, name the column Cost of Overhead, select Other Expression, and click OK.
In the Data Item Expression dialog box, enter a cost of 24 in the Expression Definition box and select OK.
You will see a new column added called Cost of Overhead, with 24 for each VM. (Note: at this point there may be duplicates in the list, but not to worry; we are not finished formatting or grouping the report.)
Next, format the data in the Cost of Overhead column as USD currency, as you did above. Then drag the column header and drop it to the right of the Storage Cost column, as shown below.
Subtotaling, naming, and saving the report
Now that we have a cost per VM, an overhead cost, and the cost of storage usage by tenant, application, and VM, let's sum the total costs and finish formatting the report by tenant and application.
Hold the Ctrl key down and select a numeric cell in each of the Cost Per VM, Storage Costs, and Cost of Overhead columns. Right-click one of the numeric cells, select Calculate, and choose the add function for the three columns.
This creates a new column called "Cost per VM + Storage Costs + Cost of Overhead" and calculates it for each row.
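Row by row, the new column is just the sum of the three costs. A quick Python sketch with hypothetical values:

```python
OVERHEAD_PER_VM = 24  # the fixed overhead cost added earlier (hypothetical $24/VM)

def total_cost_of_services(cost_per_vm, storage_cost, overhead=OVERHEAD_PER_VM):
    """Cost per VM + Storage Costs + Cost of Overhead, per report row."""
    return cost_per_vm + storage_cost + overhead

# e.g. a Silver VM ($20) carrying $600 of storage charges
print(f"${total_cost_of_services(20, 600):,}")  # $644
```

The report then subtotals this value by application and tenant, which is exactly what the grouping steps below produce.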
Now format the column as USD currency and retitle it "Total Cost of Services."
Name the report "Total Storage, VM, and Overhead Cost by Tenant and Application Chargeback (Showback)" by double-clicking the title area.
Now save it to the Customer Reports folder using the same name.
FORMATTING AND GROUPING THE REPORT BY APPLICATION AND TENANT
We are not done yet. Now we need to format the report by grouping, subtotaling, and totaling by tenant and application.
Hold the Ctrl key down and select the "Cost per VM," "Provisioned Capacity," "Storage Costs," "Cost of Overhead," and "Total Cost of Services" columns.
Select the Total icon from the Summary dropdown.
If you page down to the bottom of the report, you will see the total columns. We'll clean up the summary rows in a minute.
Let's group the report by tenant and application:
Hold the Ctrl key down and select the Tenant and Application columns.
Select the Grouping icon from the top toolbar.
CLEANING UP THE REPORT AND RUNNING IT
To clean up the report, right-click and delete the summary ROWS (not columns).
Then go to the bottom of the report, hold the Ctrl key, select both summary rows, right-click, and delete them. (Leave the TOTAL rows.)
Save the report.
Now let's run the report to see how it looks.
Select the Run icon from the toolbar and run the report as HTML. (Note the other formats you can run it in if you want.)
The report will appear in its final format, as shown below. I've paged down in the report to show you the subtotals; you can page to the bottom to see the totals by company and the total of all resources charged.
These reports are extremely flexible. Notice the drill-down link in the Tenant column (pictured above in the red circle). If you click the link, you will drill down from Tenant to Line of Business, then to Business Unit, and so on. If you right-click the link, you can drill up as well.
You can now schedule this report to run and be distributed in various formats, like any other OnCommand Insight Data Warehouse report.
Remember, now that you have created this report, every time you run it, it will provide the latest usage information. You can automate this report by scheduling it to run and emailing it to recipients. Lots of flexibility…
7.2 OTHER OPTIONS FOR AD-HOC REPORTS USING QUERY STUDIO
You can also create simple ad-hoc reports by using Query Studio. A very simple example is shown here.
Log on to the data warehouse using admin/admin123 (you must be logged on as admin to use Query Studio).
From Public Folders, select the Chargeback Datamart.
Select the Launch menu in the upper right corner of the view and select Query Studio.
The Datamart is set up as a "simple Datamart" and an "advanced Datamart". The simple DM contains the elements that most users use for reports. The advanced DM contains all the facts and dimensions for all the elements. At this point we'll create this report using the simple DM to show you how easy it is.
Expand the Simple DM and do the following:
Click and drag Business Unit to the palette.
Click and drag the Application element to the palette. You see the applications line up with their proper Business Units.
Click and drag Tier over to the palette to organize the storage usage by tier.
Click and drag "Provisioned Raw by GB" over. (You can select megabytes or terabytes as well as gigabytes; I've selected GB because this is from a volume perspective and an application perspective.)
To calculate cost, we need to add the "Tier Cost" element to the report.
Click and drag the "Tier Cost" element over and place it between the Provisioned Raw and Tier columns.
To filter out any storage without an associated tier cost, right-click the heading of the Tier Cost column and select Filter (see below for reference):
o Select "Show only the following" (default)
o Select "Missing values" to expand it
o Select "Leave out missing values"
o Select OK
See the results below.
Now let's calculate the total cost of usage by GB per application.
Hold the Control key and highlight the "Provisioned Capacity" and Tier Cost columns until they show yellow.
Select the green Calculation icon at the top of the edit icons above, or right-click the columns and select "Calculate".
In the calculation window, select multiplication, title the new column "Cost for Storage", and click Insert. It creates a new column and completes the calculation.
To format the column, right-click the new column and select Format Data.
Select currency, the number of decimal places (usually 0), and the 1000s separator, and click OK. See how the column is formatted now.
Double-click the "Title" on the report and retitle the report "Chargeback by Application and BU".
Now you don't really need the Tier Cost column, so you can delete it by right-clicking the column and selecting Delete.
This is a good raw report, but now let's make it more useful.
To group storage cost by Business Unit and Application:
Select the Business Unit column (it turns yellow) and select the Group By icon on the top line.
You see the report reformat itself into cost by application by business unit.
Click the "Save As" icon and save the report to the public folders.
Further editing
You can go back and further edit the report like this:
Let's filter out all the N/A values in the BU and Application columns. You have to do this one column at a time.
Right-click the BU column and select Filter.
In the filter dialog window, select "Do not show the following (NOT)" from the "Condition" dropdown.
Select N/A and click OK.
Do the same for the Application column.
Then save the report again.
As you see, you now have a better-quality report.
To exit Query Studio, click the "Return" icon at the top right corner of the screen.
8 SCHEDULING REPORTS FOR DISTRIBUTION
OK, now that the report is saved, let's schedule it for running and distribution. You can schedule all the built-in reports in OnCommand Insight the same way.
Navigate to the chargeback report we just created (you should be looking at the folder where you saved it…).
Select the Schedule icon on the right-hand side, where you can set the properties.
As you see on the right, you can schedule start and finish dates.
You can also send this report just one time by clicking Disable.
Set the schedule options for weekly, daily, monthly, and so on. Schedule this report to run and be sent to yourself at 3 p.m. every Tuesday until Feb 1, 2012. As you can see, you can schedule biweekly, several times a week, or several times a day, or you can set it up by month, by year, and even by trigger. As you see, lots of options.
There are a lot of options for report format. The default format is HTML, but we can override that default by clicking and choosing from PDF, Excel, XML, CSV, and so on.
For delivery, we can email the report, save it, or print it to a specific printer. You can send the report via email to users, distribution lists, and so on. We can include a link to the report or attach it directly to the email as well. NOTE: Recipients must be able to log in to the OnCommand DWH to access the link.
When you are done, click OK and the schedule is set.
9 ENDING COMMENTS AND FEEDBACK
I hope this lab was of value to you. Your feedback is important to the quality of this lab document. Please provide feedback to Dave Collins at davec@netapp.com.
NetApp provides no representations or warranties regarding the accuracy, reliability, or serviceability of any information or recommendations provided in this publication, or with respect to any results that may be obtained by the use of the information or observance of any recommendations provided herein. The information in this document is distributed AS IS, and the use of this information or the implementation of any recommendations or techniques herein is a customer's responsibility and depends on the customer's ability to evaluate and integrate them into the customer's operational environment. This document and the information contained herein may be used solely in connection with the NetApp products discussed in this document.
© 2012 NetApp, Inc. All rights reserved. No portions of this document may be reproduced without prior written consent of NetApp, Inc. Specifications are subject to change without notice. NetApp, the NetApp logo, Go further, faster, xxx, and xxx are trademarks or registered trademarks of NetApp, Inc. in the United States and/or other countries. All other brands or products are trademarks or registered trademarks of their respective holders and should be treated as such. TR-XXX-XX
Then you can pre-validate the actions to ensure you know the results of each action BEFORE you actually perform the task. To do this, right-click the task and select Validate Task.
As you see below, OnCommand Insight validates each action against the current configuration in your environment to show what has been completed correctly (GREEN CHECKMARK), what is not completed (BLANK BOX), and what was not completed correctly (RED X).
When you build the action list, OnCommand Insight automatically compares your planned changes to your existing environment and anticipates any future violations that could occur if you made these changes without correcting the planned actions, OR violations that already exist in your environment.
Once you complete your list of action items, you can right-click and validate the actions as many times as you want until they are completed. OnCommand Insight validates every one of these actions. It will show you whether the actions are complete, done incorrectly, or not completed at all. It gives you a preview of potential issues before you make the changes, thus lowering your risk.
5.2 SWITCH MIGRATION TOOL
The migration tool provides you with instantaneous visibility into all of the environment and business entities that will be affected by a migration to new or updated switches. Say you want to just update the firmware on a switch. What if it goes down in the middle of the upgrade? What does it affect in your environment? Knowing this ahead of time can reduce your risk by giving you the complete picture of who and what will be affected by the interruption.
The migration tool allows you to tell OnCommand Insight which switches you want to upgrade or replace. Because OnCommand Insight knows all the hosts, storage arrays, volumes, business units, and applications that are affected by this change, it can provide you with the current violations as well as the future violations that will occur when the switches are pulled out. This enables you to validate the total impact of the changes you want to make BEFORE you make them, so you can reduce your risk by fixing issues before they occur.
NOTE: Remember, OnCommand Insight is a READ-ONLY tool, so it does not perform any migration tasks. Use it in the planning and execution monitoring of your migration.
Under the Planning menu, select Migrations. This shows you the migration tasks already created and the impact of existing proposed changes on your business entities.
To add a new task, right-click in the task area and select Add Task.
Complete the task details above and click Next to select the switch(es) to migrate.
Select the switches to be updated or replaced and click Finish.
Select the new task in the main screen and use the microviews to see the affected paths, impact, and quality assurance views.
Using this information, you can speed up switch migrations because it cuts the due-diligence time, and it lowers your risk because you know the impacts before you take any action.
6 DATA WAREHOUSE
6.1 INTRODUCTION AND OVERVIEW
Let's introduce you to the data warehouse. We'll talk about the Datamarts and navigation, then we'll go into the reports, and we'll finish by showing you how to create ad-hoc reports using Query Studio.
The data warehouse is made up of several Datamarts. Datamarts are sets of data that relate to each other.
Open a browser and go to http://localhost:8080/reporting
Log on using admin/admin123
If you receive this page, uncheck "show this page…" and select My Home.
Data warehouse (DWH) home page: Public Folders
The data warehouse (DWH) has several built-in Datamarts. Above you see the three primary Datamarts, called the Chargeback Datamart, the Inventory Datamart, and the Storage Efficiency Datamart. Additionally, we have two folders that contain other Datamarts for Capacity and Performance.
Select the Capacity 6.3 folder.
As you can see, there are other capacity-related Datamarts, including the Internal Volume, Volume, Storage and Storage Pool, and VM Capacity Datamarts. Datamarts provide you with easy-to-use data elements related to those specific categories, making it easier for you to use the existing reports and, more importantly, helping you create your own custom reports using the drag-and-drop technology we'll show later in this lab.
Select the Storage Capacity Datamart.
There are four folders located within EVERY Datamart. Most built-in reports are in the Reports folder. Any custom reports you create MUST be saved in the Customer Report or Customer Dashboard folders in order to preserve them during upgrades.
Select Dashboards (notice the BREADCRUMBS to help you navigate).
Which dashboards are located in the folder?
6.2 PLAN - CAPACITY FORECAST DASHBOARD
The data warehouse has over 200 built-in dashboards and reports. Let's take a look at a few.
The capacity forecast dashboard provides a history of how storage has been used, as well as trends and forecasts into the future. It shows this by data center and by tier.
Select the Capacity Dashboard. This may take a bit of time to paint, so be patient.
The capacity forecast dashboard provides you with trending and forecasting of your capacity across your entire environment. NOTE: your data in the picture may vary depending on the demo DB you are using and the date (because it's a trending chart).
While we are at it, let's also stage the tiering dashboard in a new window by holding the Shift key and selecting the Tiering Dashboard, so we can discuss it as well in a few minutes.
When it first opens, you see in the upper left the Capacity Consumption Forecast report by datacenter and tier. The initial view shows how much storage is left in each datacenter, by tier, before it reaches 80% (user-adjustable) of capacity. The graph on the right depicts the usage trending and forecasting over time. The "Reset Selection" button resets the graphic to show storage trending across the entire enterprise.
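The dashboard computes the forecast for you; as a rough illustration of the underlying idea only (this is NOT OnCommand Insight's actual algorithm, and all numbers are invented), a simple linear trend can project when a tier reaches the 80% threshold:

```python
# Hypothetical sketch: fit a simple linear trend to monthly used-capacity
# samples and project when usage crosses 80% of total capacity.
total_tb = 100.0
threshold_tb = 0.8 * total_tb          # the user-adjustable 80% line
used_tb = [60.0, 62.0, 64.0, 66.0]     # last four monthly samples, oldest first

# Average month-over-month growth (a simple slope estimate).
slope = (used_tb[-1] - used_tb[0]) / (len(used_tb) - 1)  # TB per month
months_left = (threshold_tb - used_tb[-1]) / slope

print(f"~{months_left:.0f} months until the 80% threshold")  # ~7 months
```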
Select the Tokyo Gold-Fast block on the matrix. Notice the graph at the right changes to reflect the storage consumption trending and forecasting for that tier at that datacenter.
Reset the Storage Capacity Trend chart by clicking Reset Selection next to the matrix chart. The chart on the right will show the trending and forecasting for the entire enterprise.
Scroll down the dashboard to view the list of reports on the right side. Each of the dashboards has a list of related reports on the lower right-hand side. You can select from any number of different reports to provide the detailed information that you need.
The dashboard also contains some dial graphics showing you storage consumption and capacity in your enterprise and in each datacenter.
Continuing down the left side of the dashboard, these charts show you business-level storage consumption by business entities. Here we can drill down to see usage by Tenant, Line of Business, Business Unit, and Project.
Right-click in this graphic and you can drill down to view storage usage by line of business, drill again to business unit, and then by project.
As you can see, you get really detailed information on consumption by your business entities, from Tenant to LOB, Business Unit, Project, and Application, in a very quick form.
6.3 TIER DASHBOARD
Let's take a look at the tier dashboard that we opened up a few minutes ago by selecting it from the tabs at the bottom of your Windows screen.
Note: Your data may vary depending on the database used for this demo.
This dashboard gives us a different perspective on how storage is growing and how it is being used. As you see, it looks like the Gold tier has remained relatively stable over the past few months, while Gold-Fast storage, which is more expensive, has grown considerably over the past couple of months. This tells you how your tiering initiatives are progressing. Bronze, which has hardly grown at all, could be an indication that we're spending too much money on storage. You might want to review your storage usage using OnCommand Insight to see how the storage is being consumed and by whom.
Scroll down. Let's look a little closer. OnCommand Insight shows storage usage by business unit, application, and tier. This enables you to understand how storage is being used. You can also view it by data center, tier, and business entity.
As we did in the last report, you can right-click and drill down to look at consumption by tenant, line of business, business unit, project, and application. You can understand how your data is being consumed at multiple levels and from multiple aspects.
Select the "Return" icon at the top right of the Tier Dashboard to return to the folder.
There is a new storage tier report located in the Storage and Storage Pool Datamart. Let's take a quick look at it.
Use the breadcrumbs to navigate back to the Capacity 6.3 folder.
Then select the Storage and Storage Pool Capacity Datamart and its Reports folder.
Next, select the Storage Capacity By Tier report to view the report below. This report shows your capacity by tier and how it trends over time. It also provides a great detail and summary report at the bottom, showing each array's tiers, how much capacity is used, and the percentage (lots of information on a single report).
6.4 ACCOUNTABILITY AND COST AWARENESS
The standard data warehouse chargeback reports are more about accountability than actual chargeback. We'll show you this now. We'll also show you how to create your own powerful "custom" chargeback/showback reports using Business Insight Advanced later in this lab.
Select Public Folders in the breadcrumbs at the top left of the data warehouse window.
Select the Chargeback Datamart.
In the Chargeback Datamart, select the Reports folder to access various reports that show capacity and accountability.
Select Capacity Accountability by Business Entity and Service Level Detail. Here you have the option to customize this report to your needs by selecting service levels, resource types, applications, and host and storage names. You also have the option of selecting the business entity by using the dropdown to select any or all of the business entities and projects.
Select All in each category to give you a good representation of the in-depth reporting.
Then click Finish.
The report provides a very detailed view of capacity utilization by business entity and application, with the host it's running on, the storage array, the volume, and the actual provisioned and used storage. This report is grouped by business unit as well as application; this provides you with a good representation of who's using what storage.
Note the scroll bar for scrolling on page 1; you can also use the Page Up/Page Down links at the bottom to go to page 2, and so on…
Select the Return icon in the upper right to return to the folder of reports.
6.5 UNCHARGED STORAGE
You can also generate reports that help you understand what storage is NOT being accounted for.
Select "Capacity Accountability by Uncharged Capacity per Internal Volume". This provides you with a complete listing, by array and volume, of how much storage is not being charged or accounted for.
You get FULL accountability of which storage is being accounted for and which storage is NOT, across the entire enterprise, regardless of storage vendor.
6.6 IOPS VS CAPACITY REPORTING IN THE DATA WAREHOUSE
Let's look at performance versus capacity and orphaned storage by last access. This adds another dimension to how your storage is being used.
Open the Performance Datamart (hint: use the breadcrumbs to select Public Folders and then select the Performance Datamart).
Select the Internal Volume Daily Performance folder. This provides a really good pictorial view of how your storage is being used.
Select Reports and then select Allocated used internal volume Count by IOPS Ranges. This provides a capacity-versus-IOPS report, which is very interesting.
Select the Last Year time period.
Select all storage models and tiers and click Finish.
Selecting all arrays and all tiers gives you a full view of how your storage is being used (or not being used…).
Looking at the results, remember this covers storage access over the past year. The resulting report shows you all the storage that has (or has not) been accessed over the past year.
As you see from the first bar, there are over 7,300 volumes that have not been accessed in the past year. If we look at it in terms of size, over 3.4 PB has had zero access in the past year. Note: this is actual customer data, but the names have been sanitized.
You can see how impactful this is. There is over 3.4 PB of storage that has had zero use for a year. This information enables you to start making business decisions about the storage and to better understand how it's being used, so you can reclaim and repurpose some of that storage. (Talk about ROI!)
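As a back-of-envelope illustration of that ROI point (the per-GB cost below is an invented assumption for illustration only, not a figure from the report):

```python
# Hypothetical ROI sketch: value of reclaiming the unused capacity found above.
# The $/GB figure is an assumption for illustration only.
unused_pb = 3.4
cost_per_gb = 0.50                    # assumed fully loaded cost, USD per GB
unused_gb = unused_pb * 1024 * 1024   # PB -> GB (binary units)
potential_savings = unused_gb * cost_per_gb
print(f"${potential_savings:,.0f} of storage spend potentially reclaimable")
```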
6.7 DIGGING INTO THE DETAILS
These charts are really nice, but you need the details to work effectively on identification and recovery. OK, let's go look at the underlying details.
Go back to the Volume Daily Performance 6.3 folder and drill down to Reports (hint: it's in the Performance Datamart).
Select the Array Performance report. This gives you a complete breakdown of the performance for all storage, from the arrays all the way down to the volumes.
Select one year and set the IOPS parameter you want to filter on (I usually start at the default).
This report starts with the Orphan Summary.
Page down to view the storage array summary.
As you see, this is pretty high-level. It shows the total amount of raw and allocated capacity in each storage device versus the total IOPS and the max IOPS actually used over the past year. This tells a very compelling story, but it's still high level.
Page down a few pages to reach the bottom of this section. You see a glossary of terms explaining the column headings.
Now continue to page down to the host tables. These show you the hostname, the raw and allocated capacity by host, and the IOPS accessed over the past year. This is more detail than the storage tables above.
Page down past the host tables and you get the orphaned-volumes perspective. Here is a great deal of detail that you can use. These are all the volumes that have not been accessed in a full year. It shows you the array name, volume capacities, and hostname, as well as the applications and tiers, for everything that has not been accessed in the last year.
Page down to the "Volume by IOPS" tables (they may be several pages down). These show you the storage array, volume capacity, host, application, tier, and max and total IOPS. So we can say it's a pretty well-rounded report that shows you actual usage (or lack thereof) so you can go reclaim the storage that is not used.
6.8 VM CAPACITY REPORTING
There are several different reports in the VM Capacity Datamart.
Navigate to the VM Capacity 6.3 Datamart.
As you see, we have several reports built in here already.
Select VM Capacity 6.3 and then navigate into the Reports folder.
Select VM Capacity Summary.
Select All so we see the VM capacity across the entire enterprise (spanning multiple vCenters).
The results show all the VMs: their capacity, the datastore, the actual capacity, the VM names, the provisioned storage, and the commit ratio of each VM across your entire environment. NOTE: I paged down to the bottom so you can see the total storage and commitment across your whole enterprise, plus a glossary of terms.
Select the "return" button in the upper right corner of the report (it looks like a left-turn arrow).
Next, select the Inactive VMs report to show you VMs that have not been accessed in a defined period of time (default 60 days).
Set this time threshold and click Finish.
This is an excellent report showing you which VMs are powered off, how long they have been powered off, and how much capacity each one of them is holding that nobody else can use. It gives you all the details, including the datacenter, VM OS, ESX host, cluster, VMDK, and how long it's been powered off. Armed with this information, you can go recover these VMs or reclaim their storage.
7 CREATE AD-HOC REPORT
Let's show you how easy it is to create custom reports in the data warehouse.
7.1 HOW TO CREATE A CUSTOM SHOWBACK/CHARGEBACK REPORT USING BUSINESS INSIGHT ADVANCED
Below is a great example of the custom chargeback or showback report that you will create. It shows usage by business entity and application, including the variable cost of each VM based on configuration, fixed overhead, and storage usage.
STEPS TO CREATE THIS REPORT
Watch a video on how to create this report. Note: You need a user name and password for this community. To obtain them, click the Become a Member link.
The OnCommand Insight Reporting Portal is accessed through http://<reporting-server>:8080/reporting
Enter your user name and password credentials.
From the Welcome page, select My home.
From the Launch menu (at the top right corner of the OnCommand Insight Reporting portal), select Business Insight Advanced.
From the list of all packages that appears, click the Capacity <version> folder and then click VM Capacity <version>.
Create a new report by selecting New from the dropdown in the upper left corner, or Create New if you are on the Business Insight Advanced landing page.
From the predefined report layouts in the New pop-up, choose List and click OK.
In the lower right pane, select the Source tab and expand Advanced Data Mart from the VM Capacity package.
From the Advanced Data Mart, expand Business Entity Hierarchy and Business Entity, then drag Tenant and place it on the report work area.
Collapse Advanced Data Mart and expand Simple Data Mart.
From Simple Data Mart, drag Application and place it on the report work area to the right of the Tenant column. (TIP: Make sure you place it on the blinking gray BAR to the right of the previous column, or it will give you an error.)
Now we are going to drag multiple columns to the palette to save time building the report.
We will be reporting on the total processors (cores) and the memory that is configured for each VM. So let's grab the following elements from the VM Dimension under the Advanced Data Mart.
From Advanced Data Mart, expand VM Dimension.
Select the next columns IN THE FOLLOWING ORDER:
From Advanced Data Mart > VM Dimension, hold the Control key and select the following columns (in order):
o VM Name
o Processors
o Memory
Click and drag VM Name and place it on the report work area to the right of the Application column. NOTE: All the columns should follow in the order you selected them, similar to the screenshot below (your data will differ, but the columns will be the same).
Now let's bring capacity information onto the report.
From Simple Data Mart, hold the Control key and select the following columns (in order):
o Tier
o Tier Cost
o Provisioned Capacity (GB)
Click and drag the Tier column and place it on the report work area to the right of the Application column. NOTE: All the columns should follow in the order you selected them, similar to the screenshot below (your data will differ, but the columns will be the same).
To create a summary of cost per GB, hold the Control key and select the Tier Cost and Provisioned Capacity (GB) columns.
Then right-click the Provisioned Capacity column, select Calculate, and select the multiplication calculation.
Business Insight Advanced has created a new column for you, completed the calculations, and put it in the report.
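The multiplication the tool performs is simple per-row arithmetic. As a minimal outside-the-tool sketch (sample values invented for illustration):

```python
# Illustrative only: the Storage Cost column is Tier Cost ($/GB) multiplied by
# Provisioned Capacity (GB), row by row. Sample values are invented.
volumes = [
    {"tier": "Gold", "tier_cost": 0.50, "provisioned_gb": 200.0},
    {"tier": "Bronze", "tier_cost": 0.10, "provisioned_gb": 500.0},
]
for v in volumes:
    v["storage_cost"] = v["tier_cost"] * v["provisioned_gb"]

print([v["storage_cost"] for v in volumes])  # [100.0, 50.0]
```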
Next, let's format and retitle the column.
Right-click the new column header and select Show Properties.
In the lower right corner, scroll down to the bottom of the properties box and select the ellipsis in the Data Item Name box. Change the name to Storage Cost and click OK.
Note the column heading is now Storage Cost.
Now select one of the numeric values in that column and select the Data Format ellipsis from the properties box in the lower right corner.
From the Data Format dialog box, select Currency from the Format type dropdown.
As you see from the Properties dialog box, there are lots of options you can set to format the currency numbers in this column. The default is USD, so let's just click OK to set the default. You will see the column reformat to USD.
Here is our current report. Let's filter out storage that is NOT being charged.
Select any BLANK cell in the Tier Cost column and click the Filter icon in the top toolbar.
Select Exclude Null.
Here is our current report. Notice that all the rows that had NO cost associated with those tiers are deleted, leaving you with only the storage that has charges associated with it. (TIP: in another report, you can actually reverse the logic and show only storage that is NOT being charged as well…)
You can format the Tier Cost column as USD currency as well if you want.
OK, that was easy, but not complete. Let's add other cost factors into your chargeback report: the cost of VM service levels by configuration and the fixed overhead costs used by each application.
ADDING VARIABLE COSTS PER VM TO YOUR CHARGEBACK REPORT
Let's say the customer wants to charge per VM based on the number of CPUs and the memory it's configured with. To do that, we first need to create a VM Service Level based on the number of CPUs and the memory configured for each VM, and then allocate a cost per service level.
To create a VM Service Level, we are going to drop in a small conditional expression to build the service levels per VM. This is an easy example of the flexibility of Business Insight Advanced in creating reports. (DON'T panic: you can skip the conditional expression and just put a fixed cost on each VM if you want. See the overhead example later on… but humor me here in this lab.)
Select the Tier column to mark where we want to insert the new columns.
Select the Toolbox tab at the lower right corner and double-click the Query Calculation icon.
In the Create Calculation dialog box, name the column VM Service Level, select Other Expression, and click OK.
In the Data Item Expression dialog box, copy and paste the following VM Service Level conditional expression into the Expression Definition box and select OK. (Note: if you are remoted into the OnCommand Insight server, you may have to create a text document on the OnCommand server desktop to cut and paste this into prior to pasting it into the Expression box.)
Below is an example of the conditional expression that gives you the if-else condition for the VM Service Level:
IF ([Processors] = 2 AND [Memory] < 2049)
THEN ('Bronze')
ELSE (IF ([Processors] = 2 AND [Memory] < 4097)
THEN ('Bronze_Platinum')
ELSE IF ([Processors] = 4 AND [Memory] < 8193)
THEN ('Silver')
ELSE IF ([Processors] = 4 AND [Memory] > 8193)
THEN ('Silver_Platinum')
ELSE IF ([Processors] = 6 AND [Memory] > 8191)
THEN ('Gold')
ELSE IF ([Processors] = 8 AND [Memory] > 16383)
THEN ('Gold_Platinum')
ELSE ('tbd'))
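If you want to sanity-check the thresholds outside the tool, the same if/else logic can be mirrored in Python. This is purely illustrative: the tier names and limits come from the expression above, which appears to treat [Memory] as megabytes (an assumption on my part):

```python
# Illustrative mirror of the VM Service Level expression above, for
# sanity-checking the CPU/memory thresholds (memory assumed to be in MB).
def vm_service_level(processors: int, memory_mb: int) -> str:
    if processors == 2 and memory_mb < 2049:
        return "Bronze"
    elif processors == 2 and memory_mb < 4097:
        return "Bronze_Platinum"
    elif processors == 4 and memory_mb < 8193:
        return "Silver"
    elif processors == 4 and memory_mb > 8193:
        return "Silver_Platinum"
    elif processors == 6 and memory_mb > 8191:
        return "Gold"
    elif processors == 8 and memory_mb > 16383:
        return "Gold_Platinum"
    return "tbd"                      # the ELSE branch of the expression

print(vm_service_level(2, 1024))      # Bronze
print(vm_service_level(4, 16384))     # Silver_Platinum
```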
Business Insight Advanced will validate the conditional expression (nice to know if
you got it right) and create the column called VM Service Level and populate it
based on the query (If you get an error your Conditional expression probably has
a syntax or other error)
You will see a new column added called VM Service Level with the various Service
Levels for each VM based on the of CPUrsquos and memory each has (At this point
there may be duplicates in the list but not to worry We are not finished formatting or
grouping the report)
Next letrsquos add a column that calculates the cost per VM based on Service levels we just established
Select Toolbox tab at lower right corner and double click the Query Calculation
ICON
In the Create Calculation dialog box, name the column Cost Per VM, select Other Expression, and click OK
In the Data Item Expression dialog box, copy and paste the conditional expression for Cost per VM (below) into the Expression Definition box and select OK
Example of the conditional expression for Cost per VM:
IF ([VM Service Level] = 'Bronze') THEN (10)
ELSE (IF ([VM Service Level] = 'Bronze_Platinum') THEN (15)
ELSE IF ([VM Service Level] = 'Silver') THEN (20)
ELSE IF ([VM Service Level] = 'Silver_Platinum') THEN (25)
ELSE IF ([VM Service Level] = 'Gold') THEN (40)
ELSE IF ([VM Service Level] = 'Gold_Platinum') THEN (55)
ELSE (30))
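This expression is really just a lookup table. As a Python sketch (the costs come from the expression above; 30 is the catch-all for any unmatched level):

```python
# Cost per service level, mirroring the report's conditional expression
VM_COST = {
    "Bronze": 10, "Bronze_Platinum": 15,
    "Silver": 20, "Silver_Platinum": 25,
    "Gold": 40, "Gold_Platinum": 55,
}

def cost_per_vm(service_level: str) -> int:
    # 30 matches the final ELSE in the expression above
    return VM_COST.get(service_level, 30)

print(cost_per_vm("Silver"))  # 20
print(cost_per_vm("tbd"))     # 30
```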
You will see a new column added called Cost Per VM with variable costs for each VM based on the service level.
Next, format the data in the Cost per VM column to USD currency as you did above.
ADDING FIXED OVERHEAD COSTS TO YOUR CHARGEBACK REPORT
Let's say the customer has determined that the total cost for overhead (including items like heat/AC, floor space, power, rent, operations personnel, helpdesk, etc.) is $24 per VM. Let's create a column called Cost of Overhead and apply this fixed cost. (Note: you can do this for any fixed costs, rather than use SQL, as well.)
Select the Toolbox tab at the lower right corner and double-click the Query Calculation icon above
In the Create Calculation dialog box, name the column Cost of Overhead, select Other Expression, and click OK
In the Data Item Expression dialog box, enter a cost of 24 in the Expression Definition box and select OK
You will see a new column added called Cost of Overhead with 24 for each VM. (Note: at this point there may be duplicates in the list, but not to worry; we are not finished formatting or grouping the report.)
Next, format the data in the Cost of Overhead column to USD currency as you did above. Then drag the column header and drop it to the right of the Storage Cost column as shown below.
Subtotaling, naming, and saving the report
Now that we have a cost per VM, overhead, and the cost of storage usage by tenant, application, and VM, let's sum the total costs and finish formatting the report by tenant and application.
Hold the Control key down and select a numeric cell in each of the Cost per VM, Storage Costs, and Cost of Overhead columns. Right-click one of the numeric cells, select Calculate, and choose the add function for the three columns.
This will create a new column called "Cost per VM + Storage Costs + Cost of Overhead" and calculate each row.
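The per-row arithmetic that Calculate/add performs is simply a sum of the three cost columns. A sketch, using the lab's $24 overhead as a default (the function name and sample values are illustrative):

```python
def total_cost_of_services(cost_per_vm: float,
                           storage_cost: float,
                           cost_of_overhead: float = 24.0) -> float:
    """Sum the three cost columns, as the Calculate/add step does per row."""
    return cost_per_vm + storage_cost + cost_of_overhead

# e.g. a Silver VM ($20) with $150.50 of storage cost
print(total_cost_of_services(20, 150.50))  # 194.5
```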
Now format the column for USD currency and retitle the column to "Total Cost of Services".
Name the report "Total Storage, VM, and Overhead Cost by Tenant and Application Chargeback (Showback)" by double-clicking the title area.
Now save it to the Customer Report folder using the same name.
FORMATTING AND GROUPING THE REPORT BY APPLICATION AND TENANT
We are not done yet. Now we need to format the report by grouping, subtotaling, and totaling by tenant and application.
Hold the Control key down and select the Cost per VM, Provisioned Capacity, Storage Costs, Cost of Overhead, and Total Cost of Services columns
Select the Total icon from the Summary dropdown icon
If you page down to the bottom of the report, you will see total columns. We'll clean up the summary rows in a minute.
Let's group the report by tenant and application.
Hold the Control key down and select the Tenant and Application columns
Select the Grouping icon from the top toolbar
CLEANING UP THE REPORT AND RUNNING IT
To clean up the report, right-click and delete the summary rows (not columns)
Then go to the bottom of the report, hold the Control key, select both summary rows, right-click, and delete them. (Leave the TOTAL rows.)
Save the report
Now let's run the report to see how it looks.
Select the Run icon from the toolbar and run the report as HTML. (Note the other formats you can run it in if you want...)
The report will display like this in its final format. I've paged down in the report below to show you subtotals, and you can page to the bottom to see the totals by company and the total of all resources charged.
These reports are extremely flexible and can do what you need. Notice the drill-down link in the Tenant column (pictured above in the red circle). If you click the link, you will drill down from Tenant to Line of Business, then to Business Unit, and so on. If you right-click the link, you can drill up as well.
You can now schedule this report to run and be distributed in various formats, like any other OnCommand Insight Data Warehouse report.
Remember, now that you have created this report, every time you run it, it will provide the latest usage information. You can automate this report by scheduling it to run and email it to recipients. Lots of flexibility...
7.2 OTHER OPTIONS FOR AD-HOC REPORTS USING QUERY STUDIO
You can also create simple ad-hoc reports by using Query Studio. A very simple example is shown here.
Log onto the Data Warehouse using Admin/admin123 (you must be logged on as Admin to use Query Studio)
From Public Folders, select the Chargeback datamart
Select the Launch menu in the upper right corner of the view and select Query Studio
The datamart is split into a "simple datamart" and an "advanced datamart." The simple DM contains the elements that most users use for reports. The advanced DM contains all the facts and dimensions for all the elements. At this point we'll create this report using the simple DM to show you how easy it is.
Expand the simple DM and do the following:
Click and drag Business Unit to the palette
Click and drag the Application element to the palette. You see the applications line up with their proper business units
Click and drag Tier over to the palette to organize the storage usage by tier
Click and drag "Provisioned Raw by GB" over. (You can select megabytes or terabytes as well as gigabytes; I've selected GB because this is from a volume and application perspective.)
To calculate cost, we need to add the "Tier Cost" to the report.
Click and drag the "Tier Cost" element over and place it between the Provisioned Raw and Tier columns
To filter out any storage without a tier cost associated, right-click the heading of the Tier Cost column and select Filter (see below for reference):
o Select "Show only the following" (default)
o Select "Missing values" to expand it
o Select "Leave out missing values"
o Select OK
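In plain code, the "Leave out missing values" filter amounts to dropping any row whose tier cost is absent. A sketch with made-up rows, where None stands in for a missing value:

```python
# Hypothetical report rows; the Mail application has no tier cost assigned
rows = [
    {"app": "ERP",  "provisioned_gb": 500, "tier_cost": 0.25},
    {"app": "Mail", "provisioned_gb": 200, "tier_cost": None},
]

# "Leave out missing values": keep only rows that have a tier cost
with_cost = [r for r in rows if r["tier_cost"] is not None]
print([r["app"] for r in with_cost])  # ['ERP']
```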
See the results below.
Now let's calculate the total cost of usage by GB per application.
Hold the Control key and highlight the "Provisioned Capacity" and Tier Cost columns until they show yellow
Select the green Calculation icon at the top of the edit icons above, or right-click the columns and select "Calculate"
In the calculation window, select multiplication, title the new column "Cost for Storage," and click Insert. It creates a new column and completes the calculation.
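The calculation itself is a straightforward per-row product. As a sketch (the column names follow the steps above; the $0.25/GB tier cost and 500 GB figure are made up):

```python
def cost_for_storage(provisioned_raw_gb: float, tier_cost_per_gb: float) -> float:
    """Multiply provisioned capacity by the tier's per-GB cost,
    as the Query Studio Calculate (multiplication) step does."""
    return provisioned_raw_gb * tier_cost_per_gb

print(cost_for_storage(500, 0.25))  # 125.0
```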
To format the column, right-click the new column and select Format Data. Select currency, the number of decimal places (usually 0), and the 1000s separator, and click OK. See how the column is formatted now.
Double-click the "Title" on the report and re-title the report "Chargeback by Application and BU"
Now you don't really need the Tier Cost column, so you can delete it by right-clicking the column and selecting Delete
This is a good raw report, but now let's make it more useful.
To group storage cost by business unit and application:
Select the Business Unit column (it turns yellow) and select the Group By icon on the top line
You see the report reformat itself into cost by application by business unit.
Click the "Save As" icon and save the report to the Public Folders
Further Editing
You can go back and further edit the report like this:
Let's filter out all the N/A and N/A values in the BU and Application columns. You have to do this one column at a time.
Right-click the BU column and select Filter
In the filter dialog window, select "Do not show the following (NOT)" from the "Condition" dropdown
Select N/A and click OK
Do the same for the Application column
Then save the report again
As you see, you now have a better-quality report.
To exit Query Studio, click the "Return" icon at the top right corner of the screen.
8 SCHEDULING REPORTS FOR DISTRIBUTION
OK, now that the report is saved, let's schedule it for running and distribution. You can schedule all the built-in reports in OnCommand the same way.
Go to the chargeback report we just created (you should be looking at where you saved it...)
Select the Schedule icon on the right-hand side, where you can set the properties
As you see on the right, you can schedule the start and finish date.
You can also send this report just one time by clicking Disable.
Set the schedule options for weekly, daily, monthly, etc. Schedule this report to run and send it to yourself at 3 p.m. every Tuesday until Feb 1, 2012. As you can see, you can schedule biweekly, several times a week, or several times a day, or you can also set it up by month, by year, and even by trigger. As you see, lots of options.
There are a lot of options for report format. The default format is HTML, but we can override that default by clicking and choosing from PDF, Excel, XML, CSV, etc.
For delivery, we can email it, save it, or print the report to a specific printer. You can send the report via email to users, distribution lists, etc. We can include a link to the report or attach it directly to the email as well. (NOTE: recipients must be able to log into the OnCommand DWH to access the link.)
When you are done, click OK, and the schedule is set.
9 ENDING COMMENTS AND FEEDBACK
I hope this lab was of value to you. Your feedback is important to the quality of this lab document. Please provide feedback to Dave Collins at dave.c@netapp.com.
NetApp provides no representations or warranties regarding the accuracy, reliability, or serviceability of any information or recommendations provided in this publication, or with respect to any results that may be obtained by the use of the information or observance of any recommendations provided herein. The information in this document is distributed AS IS, and the use of this information or the implementation of any recommendations or techniques herein is a customer's responsibility and depends on the customer's ability to evaluate and integrate them into the customer's operational environment. This document and the information contained herein may be used solely in connection with the NetApp products discussed in this document.
© 2012 NetApp, Inc. All rights reserved. No portions of this document may be reproduced without prior written consent of NetApp, Inc. Specifications are subject to change without notice. NetApp, the NetApp logo, Go further, faster, xxx, and xxx are trademarks or registered trademarks of NetApp, Inc. in the United States and/or other countries. All other brands or products are trademarks or registered trademarks of their respective holders and should be treated as such.
TR-XXX-XX