Session ID: BI-206
Performance tuning SAP NetWeaver BW 7.x
Dr. Bjarne Berg
2
Agenda
• Background
• The Right Design
  - InfoCubes, DSOs and MultiProviders
• Performance at All Levels
  - Query Design and BOBJ Tips
  - Building and Using Aggregates
  - Using MDX and OLAP Cache correctly
  - Some hardware settings
  - BW Accelerator
• What is New in BW 7.3
• EarlyWatch Reports
3
User's reaction to poor performance
Is this your user community? How can you avoid this BEFORE it happens?
1,873 managers and business leaders were asked which factor was most important for their BI application.
Even in a recession, the keys to BI success were functionality, ease of use, integration, and performance.
Price, standards, product reputation, and architecture were of lesser importance.
4
The Key to BI Success
Performance is more important than price for the senior management.
Source: Business Research Center Survey
5
Agenda
• Background• The Right Design
InfoCubes, DSOs and MultiProviders• Performance at All Levels
Query Design and BOBJ Tips Building and Using Aggregates Using MDX and OLAP Cache correctly Some Hardware settings BW- Accelerator
• What is New in BW 7.3• EarlyWatch Reports
6
Who wins? - Functionality Vs. Performance
7
Number of BW storage objects
[Chart: number of BW storage objects in the example system — InfoCubes, MultiCubes, and ODS objects (values 17 and 21 shown)]
An Example of a Real System
On the surface, this company appears to have a typical BW implementation with a set of InfoCubes, MultiProviders, and DSOs.
The system supported five different departments and had been "live" for about one year.
8
InfoCube | Characteristics | Navigational attributes | Time characteristics | Hierarchies | Dimensions | Key figures | Record length | Complexity
Available options | 20 | 4 | 4 | 2 | 9 | 4 | Moderate | Moderate
Jobcost history cube | 57 | 0 | 1 | 0 | 12 | 1 | High | High
Options sold | 23 | 19 | 4 | 1 | 11 | 14 | Low | Low
MH AP line item cube | 52 | 13 | 4 | 3 | 10 | 15 | High | High
MH AR line item cube | 29 | 0 | 4 | 1 | 8 | 20 | Low | Low
MH general ledger cube | 30 | 2 | 5 | 6 | 8 | 9 | Low | Low
Cost cube | 14 | 0 | 3 | 2 | 8 | 2 | Low | Low
MH property master (LOT) cube | 24 | 2 | 4 | 1 | 11 | 7 | Low | Low
Purchasing item data | 13 | 10 | 5 | 3 | 11 | 15 | Low | Low
Inventory cube | 33 | 4 | 2 | 3 | 9 | 23 | Low | Low
PS controlling MH | 44 | 37 | 5 | 10 | 12 | 35 | Moderate | High
PS dates MH | 25 | 19 | 5 | 4 | 8 | 17 | Low | Moderate
PS controlling and dates cube | 34 | 48 | 3 | 5 | 9 | 30 | Low | High
Earnest money estimation cube | 40 | 24 | 5 | 7 | 13 | 16 | Moderate | High
MHSD overview | 49 | 28 | 5 | 7 | 13 | 27 | High | High
SD commissions cube | 51 | 22 | 4 | 4 | 10 | 21 | High | High
BW InfoCubes Observations
Legend:
• Within typical design parameters
• Approaching recommended regular configuration parameters
• Not frequently used design parameters
A Quick View of the technical design
Just because an area is colored “red” does not mean it is wrong. However, an InfoProvider with many red areas is worth taking a close look at.
Key Question: How well do you know your own system?
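A quick way to operationalize this kind of design review is a small script that checks each InfoProvider's parameters against typical ranges. The sketch below is illustrative only — the thresholds come from the rules of thumb discussed in this session (e.g., up to 13 dimensions, 1–20 key figures), not from hard SAP limits:

```python
# Illustrative "typical" design ranges, per the discussion in these slides.
# These are rules of thumb for review, NOT official SAP limits.
TYPICAL_RANGES = {
    "characteristics": (1, 40),
    "nav_attributes": (1, 30),
    "hierarchies": (1, 8),
    "dimensions": (1, 13),
    "key_figures": (1, 20),
}

def review_cube(name, stats):
    """Return a list of parameters worth a closer look for one InfoCube."""
    findings = []
    for param, (low, high) in TYPICAL_RANGES.items():
        value = stats.get(param)
        if value is None:
            continue
        if value < low:
            findings.append(f"{name}: {param}={value} below typical range")
        elif value > high:
            findings.append(f"{name}: {param}={value} above typical range")
    return findings

# Example values taken from the "PS controlling MH" row of the table
findings = review_cube("PS controlling MH",
                       {"characteristics": 44, "nav_attributes": 37,
                        "hierarchies": 10, "dimensions": 12, "key_figures": 35})
for f in findings:
    print(f)
```

Remember the caveat from the slide: a "red" value is a prompt to look closer, not proof of a bad design.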
9
In general, a common BW configuration contains a set of characteristics that are used for analysis purposes. The number of these characteristics varies from implementation to implementation.
Typical configurations range from 1 to 40 characteristics. The design depends largely on the requirements of the business, but there are some technical tradeoffs in load times when adding a very high number of characteristics. This is particularly true when these contain large text fields that are loaded at a high frequency and in a high number of records.
Characteristics
10
Navigational attributes
Navigational attributes lend flexibility to the way users can access data. Common configurations consist of 1 to 30 attributes. While technically not incorrect, an InfoCube that does not contain any navigational attributes should be reviewed, as should any InfoCube that contains more than thirty. The latter may be an indicator that too much information is being placed in a single InfoCube.
Hierarchies
Hierarchies are ways for users to "drill down" into the data and are commonly used for analysis purposes. Typical configurations tend to have one to eight. Review any InfoCube with no hierarchy to validate the design against end-user navigation, and question any design that contains a very high number of them.
Navigational Attributes and Hierarchies
Developer quote: “The consultants told me I could not cram all this stuff into one InfoCube. I told them – you just watch me!!”
11
Dimensions
BW allows up to 13 dimensions to be created by a developer on a single InfoCube. However, using all of these in a first implementation severely limits any future extensions without a major redesign of the system. Review any InfoCube that is approaching this limit.
Key figures
While BW imposes no limit on the number of key figures (measures), typical implementations contain 1 to 20 of them. A higher number may be required, but there are significant load-performance tradeoffs when a high number of records is loaded (key figures are loaded with each transaction).
Dimensions and Key Figures
Lessons Learned: Don’t “paint” yourself in a corner on day one!!
12
Record length
In general, as the record length of an InfoSource increases, more data may be populated to the InfoCube.
Since an InfoCube might have more than one InfoSource, the length of each may be an indicator of how the InfoCube will grow as the company rolls out BW to other divisions.
Review the design of InfoSources with large record lengths to determine the true need for including all the fields in the InfoCube, versus using alternative fields (i.e., short texts or codes) or removing them from the system.
Record Length
Lessons Learned: Don’t throw in the “kitchen sink” because it might come in handy one day…..
13
Data Loads
InfoCube | Number of sources | Complexity
Available options | 1 | Low
Jobcost details multicube* | 2 | Low
Jobcost history cube | 1 | High
Options sold | 1 | Moderate
Pending close multicube (new)* | 2 | Low
MH AP line item cube | N/A | High
MH AR line item cube | 1 | Moderate
MH general ledger cube | 1 | Low
General ledger line item ODS | 2 | High
Cost cube | 1 | Low
MH property master (LOT) cube | 1 | Low
Purchasing item data | 1 | Low
Inventory cube | 2 | Moderate
Inventory multicube* | 2 | Low
PS controlling MH | 5 | High
PS dates MH | 1 | Moderate
PS controlling and dates cube | 4 | High
Earnest money estimation cube | 1 | Moderate
MHSD overview | 2 | High
SD commissions cube | 1 | Moderate
BW Sources to InfoCubes/ODS and MultiCubes
When fixing data load problems, narrow the problem down quickly and focus on these areas.
Most data loads can be de-coupled from each other in the process chains. This may reduce the time needed for activation.
Lessons Learned: Spend your time and effort in a focused manner!!
14
Indexes — BW index diagnostic
Statistics — BW diagnostic of statistics that are recommended to be updated
Aggregates — User-designed aggregates (performance & existence)
Database Performance - RSA1--> Manage InfoCubes --> Performance
Database statistics are used by the database optimizer to determine query access paths. Outdated statistics lead to performance degradation.
Outdated indexes can lead to very poor search performance in all queries where conditioning is used (i.e. mandatory prompts).
15
Line item dimensions are basically fields that are transaction oriented.
Once flagged as a 'line item dimension', the field is actually stored in the fact table and needs no table joins.
The result is a significant improvement in query speeds (10%-15%).
Use of Line Item Dimensions and Monitoring tools
Explore the use of line item dimensions for fields that are frequently conditioned in queries. This model change can yield faster queries.
Programs that can help you monitor the system design:
1. SAP_ANALYZE_ALL_INFOCUBES
2. ANALYZE_RSZ_TABLES
3. SAP_INFOCUBE_DESIGNS
16
B-tree vs. Bitmap Indexes
When you flag a dimension as "high cardinality," SAP BI uses a B-tree index instead of a bitmap index.
This can be substantially slower if high cardinality does not actually exist in the data (star joins cannot be used with B-trees).
[Table: InfoCubes (CBBL_CB02, CBPD_CB06, CBPR_CB11, CBPR_CB18, CBSV_CB01, CBSV_CB02) with 0 line item dimensions each and dimensions (DIM 1, DIM 3, DIM 6, DIM 8) flagged "H" for high cardinality]
Validate the high-cardinality of the data and reset the flag if needed – this will give a better index type and performance
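The validation can be reduced to a ratio check. A commonly cited rule of thumb (verify against your database and SAP guidance) is to flag a dimension as high cardinality only when its distinct entries are large relative to the fact table; the 20% threshold below is that rule of thumb, not a hard rule:

```python
def should_flag_high_cardinality(dim_rows, fact_rows, threshold=0.20):
    """Rule-of-thumb check: flag a dimension as high cardinality only when
    the number of distinct dimension entries is large relative to the
    fact table. The 20% threshold is a common heuristic, not an SAP rule."""
    if fact_rows == 0:
        return False
    return dim_rows / fact_rows > threshold

# A dimension with 50 entries against 1,000,000 fact rows is NOT high
# cardinality -- a bitmap index (the default) is the better choice there.
print(should_flag_high_cardinality(50, 1_000_000))       # False
print(should_flag_high_cardinality(400_000, 1_000_000))  # True
```

If the flag is set but the ratio is tiny (as for the cubes in the table above, which have 0 line item dimensions), resetting it restores the bitmap index and star-join optimization.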
17
Problem: To reduce the data volume in each InfoCube, data is partitioned by time period.
A query now has to search all InfoProviders to find the data. This is very slow.
Solution: We can add “hints” to guide the query execution. In the RRKMULTIPROVHINT table, you can specify one or several characteristics for each MultiProvider which are then used to partition the MultiProvider into BasicCubes.
If a query has restrictions on this characteristic, the OLAP processor checks in advance which part cubes can return data for the query. The data manager can then completely ignore the remaining cubes.
An entry in RRKMULTIPROVHINT only makes sense if a few attributes of this characteristic (that is, only a few data slices) are affected in the majority of, or the most important, queries (SAP Note 911939; see also Notes 954889 and 1156681).
MultiProviders and Hints
[Figure: InfoCubes partitioned by year, 2002–2008]
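The effect of a hint is simple partition pruning. The sketch below models it with invented part-cube names and an invented characteristic ("calyear") — the real mechanism lives in the OLAP processor, not in application code:

```python
# Sketch of what a RRKMULTIPROVHINT entry achieves: if the hint names a
# partitioning characteristic and the query restricts it, only the matching
# part-cubes are read. All names and values are illustrative.
PART_CUBES = {  # part-cube -> value of the hinted characteristic
    "SALES_2006": 2006,
    "SALES_2007": 2007,
    "SALES_2008": 2008,
}

def cubes_to_read(hint_char, query_filters):
    """Return the part-cubes the data manager actually has to access."""
    wanted = query_filters.get(hint_char)
    if wanted is None:               # no restriction -> read everything
        return sorted(PART_CUBES)
    return sorted(c for c, v in PART_CUBES.items() if v in wanted)

print(cubes_to_read("calyear", {"calyear": {2008}}))  # only the 2008 cube
print(cubes_to_read("calyear", {}))                   # all part-cubes
```

This is why the hint only pays off when most queries restrict that characteristic: without the restriction, every part-cube is read anyway.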
18
MultiProviders and Parallel Processing
• To avoid a memory overflow, parallel processing is cancelled as soon as the collected result contains 30,000 rows or more and there is at least one incomplete sub-process.
The MultiProvider query is then restarted automatically and processed sequentially.
What appears to be parallel processing is actually sequential processing plus the startup phase of parallel processing.
Generally, it’s recommended that you keep the number of InfoProviders of a MultiProvider to no more than 10
However, even at 4-5 large InfoProviders you may experience performance degradation
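The fallback behavior described above can be modeled in a few lines. This is a deliberately simplified toy (fixed row counts, one pass instead of truly concurrent sub-processes), just to show why the parallel start-up effort is wasted once the threshold trips:

```python
# Toy model: sub-queries run "in parallel", but if the collected result
# reaches 30,000 rows while at least one sub-process is unfinished, the
# whole query restarts and is processed sequentially.
ROW_LIMIT = 30_000

def run_multiprovider_query(subquery_rows):
    collected, mode = 0, "parallel"
    for i, rows in enumerate(subquery_rows):
        collected += rows
        unfinished = i < len(subquery_rows) - 1
        if collected >= ROW_LIMIT and unfinished:
            # restart: sequential processing, after a wasted parallel start-up
            return "sequential", sum(subquery_rows)
    return mode, collected

# 12k + 25k = 37k rows >= 30k with one sub-process still open -> restart
mode, total = run_multiprovider_query([12_000, 25_000, 8_000])
print(mode, total)
```

With many base InfoProviders and large result sets, this restart happens often — which is the argument for hints, for raising the limit (SAP Note 629541), or for deactivating parallel processing outright for such queries.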
19
More on MultiProviders and Parallel Processing
You can also change the number of dialog processes (increase the use of parallel processing) in RSADMIN by changing the settings for QUERY_MAX_WP_DIAG.
Consider deactivating parallel processing for those queries that are MultiProvider queries and have large result sets (and “hints” cannot be used)
Since SAP BW 3.0B SP14, you can change the default value of 30,000 rows — refer to SAP Notes 629541, 622841, 607164, and 630500.
A larger number of base InfoProviders is likely to result in a scenario where there are many more base InfoProviders than available dialog processes, which results in limited parallel processing and many pipelined sub-queries
20
SAP BW 7.0 Performance - Data Activation
With BW 7.01 we can disable the delta consistency check for write-optimized DataStore objects. This check protects delta requests that have already been propagated in delta mode from deletion.
This can be switched on/off – e.g. for write-optimized DataStore objects as initial staging layer. When doing so, significant load performance benefits can be achieved (10-30%).
Higher benefits are obtained from very large InfoProviders with thousands of requests.
21
Semantic partitioned object in SAP BW 7.3
• In BW 7.3 SPO is introduced to help partition InfoCubes for query performance, and DSOs for load performance.
SPOs can be added to MultiProviders for simpler query administration and to mask complexity
Source: SAP AG, 2010
BW 7.3 provides wizards to help you partition objects by year, business units or products.
BW also automatically generates all needed DTPs, transformation rules, and filters to load the correct InfoProvider.
• SAP suggests that this makes maintenance easier, since any remodeling only needs to change the reference structure.
22
Agenda
• Background
• The Right Design
  - InfoCubes, DSOs and MultiProviders
• Performance at All Levels
  - Query Design and BOBJ Tips
  - Building and Using Aggregates
  - Using MDX and OLAP Cache correctly
  - Some hardware settings
  - BW Accelerator
• What is New in BW 7.3
• EarlyWatch Reports
23
Query Read Modes
There are three query read modes that determine the amount of data to be fetched from the database and sent to the application server:
1. Read all data — all data is read from the database and stored in user memory space
2. Read data during navigation — data is read from the database only on demand during navigation
3. Read data during navigation and when expanding the hierarchy — data is read when requested by users in navigation
Key Feature: Reading data during navigation minimizes the impact on application server resources, because only the data that the user requires is retrieved.
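As a rough sketch, the three modes differ only in *when* rows are fetched. The mode letters A/X/H follow the RSRT convention; the "database" and data volumes below are invented, and the sketch does not model the hierarchy-expansion nuance that distinguishes X from H:

```python
# Sketch of the read modes as fetch strategies against a toy "database".
DB = {node: list(range(100)) for node in ["north", "south", "east", "west"]}

def rows_fetched(read_mode, expanded_nodes):
    """A = read all data; X / H = read only what navigation asks for."""
    if read_mode == "A":            # everything up front, needed or not
        return sum(len(v) for v in DB.values())
    if read_mode in ("X", "H"):     # fetch on navigation / hierarchy expand
        return sum(len(DB[n]) for n in expanded_nodes)
    raise ValueError(read_mode)

print(rows_fetched("A", ["north"]))  # all 400 rows, regardless of need
print(rows_fetched("H", ["north"]))  # 100 rows - only the expanded node
```

The 4x difference here is tiny; against a hierarchy with thousands of unexpanded nodes, the gap is what makes mode H the recommendation.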
24
Recommendation: Query Read Mode for Large Hierarchies
• Reserve the Read all data mode for special queries — i.e., when a majority of the users need a given query to slice and dice against all dimensions, or for data mining. This places heavy demand on database and memory resources and may impact other BW processes.
A query read mode can be defined for an individual query or as a default for new queries (transaction RSRT).
• For queries involving large hierarchies, select Read data during navigation and when expanding the hierarchy to avoid reading data for hierarchy nodes that are not expanded.
SAP's recommendations for OLAP Universes & Ad-Hoc analysis (formerly: 'Webi'):
1. Use of hierarchy variables is recommended
2. Hierarchy support in SAP Web Intelligence for SAP BW is limited
3. The Use Query Drill option significantly improves drilldown performance
4. Look at the 'Query Stripping' option for power users
25
Reduce the use of conditions and exceptions in reporting
Conditions and exceptions are usually processed by the application server, which generates additional data transfer between the database and application servers.
If conditions and exceptions have to be used, the amount of data to be processed should be minimized with filters.
When multiple drilldowns are required, separate the drilldown steps by using free characteristics rather than rows and columns. This approach separates the drill-down steps; in addition to accelerating query processing, it provides the user more manageable portions of data.
BENEFIT: This results in a smaller initial result set, and therefore faster query processing and data transport as compared to a query where all characteristics are in rows.
26
Performance settings for Query Execution
This decides how many records are read during navigation.
Examine the request status when reading the InfoProvider.
In 7.x BI, the OLAP engine can read deltas into the cache without invalidating the existing query cache.
Displays the level of statistics collected.
Turns parallel processing on/off.
Determines when the query program will be regenerated based on database statistics.
27
Filters in Queries
Using filters reduces the number of database reads and the size of the result set, thereby significantly improving query runtimes.
Filters are especially valuable on large dimensions with many characteristic values, such as customers and document numbers.
If large reports have to be produced, leverage the BEx Broadcaster to generate batch reports and pre-deliver them each morning to users' email, PDF, or printer.
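The filter benefit is easy to see with a toy data set — the filter shrinks both what is read and what is shipped to the application server. Column names and row counts are invented:

```python
# 10,000 fake fact rows with a customer and document number each.
rows = [{"customer": f"C{i:04d}", "doc": i, "amount": i % 97}
        for i in range(10_000)]

def run_query(data, filters):
    """filters: dict of column -> allowed value set, applied before any
    aggregation -- the moral equivalent of a filter in the query definition."""
    out = data
    for col, allowed in filters.items():
        out = [r for r in out if r[col] in allowed]
    return out

unfiltered = run_query(rows, {})
filtered = run_query(rows, {"customer": {"C0001", "C0002"}})
print(len(unfiltered), "->", len(filtered))  # result set shrinks drastically
```

In BW the reduction happens in the database rather than in application code, which is exactly why it is worth pushing restrictions into the query definition instead of filtering in the front end.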
28
The RSRT Transaction to examine slow queries
P1 of 4
The RSRT transaction is one of the most useful transactions for examining query performance and conducting diagnostics on slow queries.
29
Do you need an aggregate - some hints
This suggests that an Aggregate would have been beneficial
P2 of 4
30
Get Database Info
In this example, the basis team should be involved to research why the Oracle settings are not per SAP's recommendations.
The RSRT and RSRV transactions are key for debugging and analyzing slow queries.
P3 of 4
31
Get Design Feedback in RSRT
We can see that the system raises a yellow flag for the 6 base cubes in the MultiProvider and a yellow flag for the 14 free characteristics.
HINT: Track front-end data transfers & OLAP performance by using RSTT in SAP BI 7.0 (RSRTRACE in BW 3.5)
P4 of 4
32
Debug Queries using the transaction- RSRT
Using RSRT you can execute the query and stop at each breakpoint, thereby debugging the query and seeing where execution is slow.
Try running slow queries in debug mode with parallel processing deactivated to see if they run faster.
33
When restricted key figures (RKFs) are included in a query, conditioning is done for each of them during query execution. This is very time-consuming, and a high number of RKFs can seriously hurt query performance.
My Recommendation: Reduce RKFs in the query to as few as possible. Also, define calculated key figures and RKFs at the InfoProvider level instead of locally within the query. Why?
The Performance Killers - Restricted Key Figures
Benefit: Formulas defined at the InfoProvider level are calculated at runtime and held in the cache.
Drawback: Local formulas and selections are recalculated with each navigation step.
34
1. "A large number of Key Figures in the BEx query will incur a significant performance penalty when running queries, regardless of whether the Key Figures are included in the universe or used in the SAP BusinessObjects Ad-hoc (WebI) query."
2. Only include KFs used for reporting in the BEx query.
3. "This performance impact is due to time spent loading metadata for units, executed for all measures in the query."
SAP's recommendation for Key Figures in OLAP universes
After SAP BusinessObjects Enterprise XI 3.1 FP 1.1, the impact of a large number of key figures was somewhat reduced by retrieving unit/currency metadata only when it is selected in the WebI query.
35
Calculated key figures (CKFs) are computed at run time, and many CKFs can slow down query performance.
How to fix this: Many CKF calculations can be performed during data loads and physically stored in the InfoProvider. This reduces the number of computations, and the query can use simple table reads instead. Do not use total rows when not required (they require additional processing on the OLAP side).
The Performance Killers - Calculated Key Figure
SAP's recommendation for OLAP universes: "RKF and CKF should be built as part of the underlying BEx query to use the SAP BW back-end processing for better performance. Queries with a larger set of such KFs should use the 'Use Selection of Structure Members' option in the Query Monitor (RSRT) to leverage the OLAP engine."
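The load-time alternative to a runtime CKF can be sketched as below. Field names are invented, and in BW this logic would live in a transformation rule rather than application code:

```python
# Sketch: compute a "margin" key figure once at load time and persist it,
# instead of recomputing it as a CKF on every query navigation step.
def transform_on_load(records):
    """Transformation step: derive and store the key figure in the cube."""
    for r in records:
        r["margin"] = r["revenue"] - r["cost"]
    return records

loaded = transform_on_load([
    {"revenue": 120.0, "cost": 80.0},
    {"revenue": 300.0, "cost": 210.0},
])
# At query time this is now a plain column read -- no per-cell formula.
print([r["margin"] for r in loaded])
```

The trade-off: the computation cost moves into the (scheduled, off-peak) load, and the stored column costs disk space — usually a good bargain for heavily used formulas.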
36
Sorting is done by the BI Analytical Engine. As in all computer systems, sorting data in reports with large result sets can be time-consuming.
The BI Analytical Engine and Sorting
Reducing the text in the query will also speed up processing somewhat.
Try reducing the number of sorts in the 'default view'. This may improve report execution and provide users with data faster. Users can then choose to sort the data themselves.
37
Agenda
• Background
• The Right Design
  - InfoCubes, DSOs and MultiProviders
• Performance at All Levels
  - Query Design and BOBJ Tips
  - Building and Using Aggregates
  - Using MDX and OLAP Cache correctly
  - Some hardware settings
  - BW Accelerator
• What is New in BW 7.3
• EarlyWatch Reports
38
Correct Aggregates Are Easy to Build
We can create proposals from the query, last navigation by users, or by BW statistics
Create aggregate proposals based on BW statistics. For example:
• Select the run time of queries to be analyzed
• Select the time period to be analyzed
• Only those queries executed in this time period will be reviewed to create the proposal
Create aggregate proposals based on queries that are performing poorly.
Activate the aggregate
1. Click on Jobs to see how the program is progressing
The process of turning 'on' the aggregates is simple
Fill aggregate with summary data
41
Agenda
• Background
• The Right Design
  - InfoCubes, DSOs and MultiProviders
• Performance at All Levels
  - Query Design and BOBJ Tips
  - Building and Using Aggregates
  - Using MDX and OLAP Cache correctly
  - Some hardware settings
  - BW Accelerator
• What is New in BW 7.3
• EarlyWatch Reports
42
Different Uses of the MDX and the OLAP Cache
The OLAP Cache is used by BW as the core in-memory data set. Queries retrieve results from memory on the application server when the data set is available there, instead of re-reading the database.
The cache displaces its oldest entries first (first in, first out).
This means that the query result set that was accessed by one user at 8:00am may no longer be available in memory when another user accesses it at 1:00pm.
Therefore, queries may appear to run slower at some times than at others.
The MDX cache is used by MDX based interfaces, including the OLAP Universe.
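The displacement behavior can be illustrated with a tiny cache model. The two-slot capacity and query names are invented to keep the example short; the real cache displaces by size limits, not entry counts:

```python
from collections import OrderedDict

# Toy displacement cache: oldest entry is dropped first once the cache is
# full -- which is why a result cached at 8:00 can be gone by 13:00.
class OlapCacheSketch:
    def __init__(self, capacity=2):
        self.capacity, self.store = capacity, OrderedDict()

    def put(self, query_id, result):
        if query_id not in self.store and len(self.store) >= self.capacity:
            self.store.popitem(last=False)   # displace the oldest entry
        self.store[query_id] = result

    def get(self, query_id):
        return self.store.get(query_id)      # None -> re-read from database

cache = OlapCacheSketch()
cache.put("Q_SALES_0800", [1, 2, 3])
cache.put("Q_COSTS_0900", [4])
cache.put("Q_STOCK_1300", [5])               # displaces Q_SALES_0800
print(cache.get("Q_SALES_0800"))             # gone - database read again
```

This is also the argument for the next slide: broadcasting popular queries into the cache ahead of the users keeps their entries fresh.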
43
Use the BEx Broadcaster to Pre-Fill the Cache
Distribution Types
You can increase query speed by broadcasting the query result of commonly used queries to the cache.
Users do not need to execute the query against the database; instead, the result is already in system memory (much faster).
44
Agenda
• Background
• The Right Design
  - InfoCubes, DSOs and MultiProviders
• Performance at All Levels
  - Query Design and BOBJ Tips
  - Building and Using Aggregates
  - Using MDX and OLAP Cache correctly
  - Some hardware settings
  - BW Accelerator
• What is New in BW 7.3
• EarlyWatch Reports
45
The Memory Cache Size
The OLAP Cache is by default 100 MB for local and 200 MB for global use
This may be too low...
WARNING: The Cache is not used when a query contains a virtual key figure or virtual characteristics, or when the query is accessing a transactional DSO, or a virtual InfoProvider
Look at available hardware and work with your basis team to see if you can increase this.
If you decide to increase the cache, use transaction code RSCUSTV14.
46
Monitor Application Servers and Adjust Cache Size
To monitor the usage of the cache on each of the application servers, use transaction code RSRCACHE and also periodically review the analysis of load distribution using ST03N – Expert Mode
PS! The size of OLAP Cache is physically limited by the amount of memory set in system parameter rsdb/esm/buffersize_kb.
The settings are available in RSPFPAR and RZ11.
47
The Four Options for OLAP Cache Persistence Settings
Mode | Persistence | When / Notes | Technical name
Default | Flat file | Change the logical file BW_OLAP_CACHE when installing the system (not valid name) | FILE
Optional | Cluster table | Medium and small result sets | RSR_CACHE_DBS_IX, RSR_CACHE_DB_IX
Optional | Binary Large Objects (BLOB) | Best for large result sets | RSR_CACHE_DBS_BL, RSR_CACHE_DB_BL
Optional (available since BW 7.0 SP14) | Blob/Cluster Enhanced | No central cache directory or lock concept (enqueue); not active by default | Set RSR_CACHE_ACTIVATE_NEW in RSADMIN, VALUE=x
Source: SAP AG 2010.
48
Application Server Memory Usage
Roll memory was never maxed out in the period
Paging memory was never maxed out in the period
Extended memory was never maxed out in the period
Only 3 GB of 9 GB of heap memory was ever used.
In this real example, be the judge. Do we:
a) Need another application server?
b) Need to upgrade the application server with more hardware?
c) Performance-tune the application?
49
Agenda
• Background
• The Right Design
  - InfoCubes, DSOs and MultiProviders
• Performance at All Levels
  - Query Design and BOBJ Tips
  - Building and Using Aggregates
  - Using MDX and OLAP Cache correctly
  - Some hardware settings
  - BW Accelerator
• What is New in BW 7.3
• EarlyWatch Reports
50
Why In-memory processing?
Disk speed is growing slower than all other hardware components
Architectural drivers (1990 → 2010):
• Disk-based data storage → In-memory data stores
• Simple consumption of apps (fat client UI, EDI) → Multi-channel UI, high event volume, cross-industry value chains
• General-purpose, application-agnostic database → Application-aware and intelligent data management

Technology drivers (1990 → 2010):
Driver | 1990 | 2010 | Improvement
CPU | 0.05 MIPS/$ | 3.31 MIPS/$ | ~66x
Memory | 0.02 MB/$ | 0.15 MB/$ | ~7.5x
Disk data transfer | 5 MBPS | 600 MBPS | 120x
Network speed | 100 Mbps | 100 Gbps | 1,000x
Addressable memory | 2^16 | 2^64 | 2^48x
Source: 1990 numbers SAP AG ; 2010 numbers, Dr. Berg
Physical hard drive speeds only grew by 120 times since 1990. All other hardware components grew faster.
51
In Memory Processing - General Highlights - BWA
SAP BW
InfoCubes
DSOs
1. Indexes are created and compressed, then stored on a file system
2. Indexes copied into RAM on blades
BI Analytical Engine
3. Queries are routed to BWA by the Analytical engine
52
BI Analytical Engine’s Query Executing Priorities
Query execution without SAP NetWeaver BW Accelerator:
Information Broadcasting / Precalculation → Query Cache → Aggregates → InfoProvider

Query execution with SAP NetWeaver BW Accelerator:
Information Broadcasting / Precalculation → Query Cache → SAP BW Accelerator
Aggregates can be replaced with SAP BW Accelerator, while the memory cache is still useful.
The major improvement is to make query execution more predictable and overall faster.
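The priority order in the two execution paths can be sketched as a layered lookup: each layer is tried in turn, and the first hit wins. Layer contents below are invented for the example:

```python
# Sketch of the Analytical Engine's lookup order:
# precalculated result -> OLAP cache -> BWA index (replacing aggregates)
# -> base InfoProvider.
def execute_query(qid, precalc, cache, bwa_index, infoprovider):
    for name, layer in [("precalc", precalc), ("cache", cache),
                        ("bwa", bwa_index), ("infoprovider", infoprovider)]:
        if qid in layer:
            return name, layer[qid]
    raise KeyError(qid)

layers = dict(precalc={},
              cache={"Q1": [10]},
              bwa_index={"Q1": [10], "Q2": [20]},
              infoprovider={"Q1": [10], "Q2": [20], "Q3": [30]})

print(execute_query("Q1", **layers))  # served from cache, never reaches BWA
print(execute_query("Q2", **layers))  # served from BWA, not the base cube
```

The predictability claim falls out of this structure: with BWA, a cache miss lands in a uniformly fast in-memory index instead of sometimes hitting an aggregate and sometimes a slow base-cube read.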
[Chart: number of queries vs. run time in seconds, before and after BW Accelerator]
BW Accelerator Performance Increases - real example
54
The Admin work is done through a single interface
The admin interface is available under the transaction code RSDDBWAMON.
Health checks for SAP BW Accelerator are available under the transaction code RSRV
Plan for 2-5 days of SAP BW Accelerator training. You need a maximum of 1-2 administrators (1 for backup)
SAP BW Accelerator Administration is minimal and simple
55
The SAP BW Accelerator interface allows you to compare the data in SAP BW vs. the indexes. This means that you can easily check if they are outdated.
Other tools include the ability to run queries to see if the numbers in the two databases match.
Health-Checks and Reconciliation
56
The Analysis and Repair options include many proposals and time estimation tools that you should leverage.
The interface can propose delta-indexes for periodic updates (not complete builds).
You can estimate the run-time of indexing the fact table of an InfoCube before you place it into a process chain or a manual job.
You can also estimate the memory you need before you add new records into memory.
Proposals and Estimations
57
The simple way to fix most issues is to delete all indexes and rebuild them during a weekend
Think of this as the ultimate “reset” button
You can also rebuild master data indexes
The SAP BW Accelerator “Reset Button”
58
Agenda
• Background
• The Right Design
  - InfoCubes, DSOs and MultiProviders
• Performance at All Levels
  - Query Design and BOBJ Tips
  - Building and Using Aggregates
  - Using MDX and OLAP Cache correctly
  - Some hardware settings
  - BW Accelerator
• What is New in BW 7.3
• EarlyWatch Reports
59
SAP BW 7.3 Performance - Data Movement & Activation
BW version 7.3 has significant performance benefits:
1. Semantic Partitioned Objects (SPO) as we already covered.
2. Improved data activation due to a new package fetch of the active table instead of single lookups. The new 7.3 runtime option "new, unique data records only" prevents all lookups during activation.
3. A new monitor in the BW Administration Cockpit so that database usage can be tracked.
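The package-fetch improvement in point 2 boils down to access counts, which a toy model makes visible — one lookup per record versus one per package. The table contents and package size are invented:

```python
# Toy model: activating a DSO package against the active table.
active_table = {f"K{i}": i for i in range(1_000)}
lookups = {"single": 0, "bulk": 0}

def activate_single(package):
    """Old behaviour: one active-table lookup per record."""
    for key in package:
        lookups["single"] += 1
        _ = active_table.get(key)

def activate_package_fetch(package):
    """7.3 behaviour (sketched): one bulk fetch per data package."""
    lookups["bulk"] += 1
    _ = {k: active_table.get(k) for k in package}

pkg = [f"K{i}" for i in range(500)]
activate_single(pkg)
activate_package_fetch(pkg)
print(lookups)  # single-lookup count vs. one bulk fetch
```

The "new, unique data records only" option goes one step further: if every incoming record is known to be new, even the single bulk fetch can be skipped.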
60
Agenda
• Background
• The Right Design
  - InfoCubes, DSOs and MultiProviders
• Performance at All Levels
  - Query Design and BOBJ Tips
  - Building and Using Aggregates
  - Using MDX and OLAP Cache correctly
  - Some hardware settings
  - BW Accelerator
• What is New in BW 7.3
• EarlyWatch Reports
61
EarlyWatch Reports in Solution Manager 4.0
EarlyWatch reports provide a simple way to confirm how your system is running and to catch problems
A “goldmine” for system recommendations
EarlyWatch reports have been available since Solution Manager version 3.2 SP8.
The more statistics cubes you have activated in BW, the better usage information you will get.
Depending on your version of SAP BW, you can activate 11-13 InfoCubes. Also, make sure you capture statistics at the query level (set it to 'all').
System issues can be hard to pin down without access to EarlyWatch reports. Monitoring reports allows you to tune the system before users complain.
62
Information about a pending 'disaster'
This system is about to 'crash'
The system is growing by 400+ GB per month, the app server is 100% utilized, and the DB server is at 92%.
This customer needed to improve the hardware to get query performance to an acceptable level.
63
Inconsistent patches may be caught
In this example, we see that the EarlyWatch report found many known issues at the Oracle level that should be implemented before the performance tuning effort started.
Before the patches were applied, some queries took 24 to 26 minutes to execute; after the fixes, the queries ran in less than two minutes.
SAP Note | Description
841728 | Oracle 10.2.0: Composite note for problems and workarounds
871096 | Oracle Database 10g: Patch sets/Patches for 10.2.0
871735 | Current Patchset for Oracle 10.2.0
850306 | Oracle Critical Patch Update Program
1021454 | Oracle Segment Shrinking may cause LOB corruption
952388 | Kernel <= 6.40: UNIX error due to 9i Client software
64
Performance tuning presentations, tutorials & articles www.ComeritInc.Com
SAP SDN Community web page for Business Intelligence Performance Tuning https://www.sdn.sap.com/irj/sdn/bi-performance-tuning
ASUG407 - SAP BW Query Performance Tuning with Aggregates by Ron Silberstein (requires SDN or Marketplace log-on). 54 min movie.https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/media/uuid/d9fd84ad-0701-0010-d9a5-ba726caa585d
Large scale testing of SAP BI Accelerator on a NetWeaver Platformhttps://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/b00e7bb5-3add-2a10-3890-e8582df5c70f
More at: