Date posted: 20-Jan-2015
Uploaded by: antonios-chatzipavlis
1
Performance Tuning in SQL Server
Antonios Chatzipavlis
Software Architect, Development Evangelist, IT Consultant
MCT, MCITP, MCPD, MCSD, MCDBA, MCSA, MCTS, MCAD, MCP, OCA
2
Objectives
• Why is Performance Tuning Necessary?
• How to Optimize SQL Server for performance
• How to Optimize Database for performance
• How to Optimize Query for performance
• Define and implement monitoring standards for database servers and instances
• How to troubleshoot SQL Server
3
Why is performance tuning necessary?
Performance Tuning in SQL Server
4
Why is Performance Tuning Necessary?
• Allowing your system to scale
• Adding more customers
• Adding more features
• Improve overall system performance
• Save money by not wasting resources
• The database is typically one of the most expensive resources in a datacenter
5
General Scaling Options
• Purchase a larger server, and replace the existing system.
• Works well with smaller systems.
• Cost prohibitive for larger systems.
• Can be a temporary solution.
Scaling SQL Server with Bigger Hardware
6
General Scaling Options
• Purchase more hardware and split or partition the database.
• Partitioning can be either vertical or horizontal
• Horizontal: Split the databases by rows, based on a specific demographic such as time zone or zip code.
• Vertical: Split components (tables or columns) out of one database into another
Scaling SQL Server with More Hardware
7
General Scaling Options
• Adjusting and rewriting queries.
• Adding indexes.
• Removing indexes.
• Re-architecting the database schema.
• Moving things that shouldn’t be in the database.
• Eliminating redundant work on the database.
• Caching of data.
• Other performance tuning techniques.
Scaling SQL Server without adding hardware
8
1. Database Partitioning
9
How to Optimize SQL Server for performance
Performance Tuning in SQL Server
10
Performance Factors
• CPU
• Memory
• IO
• Network
• TempDB
11
CPU and SQL Server
• CPU Intensive Operations
• Compression
• Bulk Load operations
• Compiling or Recompiling Queries
• Hyper-Threading
• Delivers at most about 1.3 times the performance of non-hyper-threaded execution
• The currently accepted best practice recommendation is that you should run SQL Server with Hyper-Threading disabled
• L3 Cache
12
CPU and SQL Server
Performance Counters

• Processor: % Processor Time. Monitors the amount of time the CPU spends executing threads that are not idle. A consistent state of 80 to 90 percent may indicate the need to upgrade your CPU or add more processors.
• System: % Total Processor Time. Determines the average processor usage across all processors.
• Processor: % Privileged Time. Corresponds to the percentage of time the processor spends executing Microsoft Windows kernel commands, such as processing SQL Server I/O requests. If this counter is consistently high when the Physical Disk counters are also high, consider installing a faster or more efficient disk subsystem.
• Processor: % User Time. Corresponds to the percentage of time the processor spends executing user processes such as SQL Server.
• System: Processor Queue Length. Corresponds to the number of threads waiting for processor time. A processor bottleneck develops when threads of a process require more processor cycles than are available. If more than a few processes attempt to utilize the processor's time, you might need to install a faster processor or, on a multiprocessor system, add a processor.
13
Memory and SQL Server
• Tuning 32-bit Systems
• Use /PAE and /3GB together (Windows Server 2003)
• Run BCDEDIT /set increaseUserVA 3072 (Windows Server 2008)
• Tuning 64-bit Systems
• If needed, enable AWE on Enterprise Edition of SQL Server
• If needed, enable AWE on Standard Edition of SQL Server only when SP1 with Cumulative Update 2 is applied.
Read more at http://support.microsoft.com/kb/970070
Enable Address Windowing Extensions (AWE)
14
Memory and SQL Server
• Control the allowable size of SQL Server's buffer pool.
• These settings do not control all of SQL Server's memory usage, just the buffer pool.
• When the SQL Server service starts, it does not immediately acquire all the memory configured in Min Server Memory; it starts with only the minimum required and grows as necessary.
• Once memory usage has increased beyond the Min Server Memory setting, SQL Server will not release memory below that figure.
• Max Server Memory is the opposite of Min Server Memory, setting a "ceiling" for the buffer pool
Min and Max Server Memory
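As a sketch, both settings can be changed with sp_configure; the values below are purely illustrative, not recommendations for any particular server:

```sql
-- Illustrative only: set Min/Max Server Memory (values in MB).
-- 'show advanced options' must be enabled to see these settings.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

EXEC sp_configure 'min server memory (MB)', 1024;   -- floor for the buffer pool
EXEC sp_configure 'max server memory (MB)', 11264;  -- ceiling for the buffer pool
RECONFIGURE;
```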
15
Memory and SQL Server
• Look at the buffer pool’s maximum usage.
• Set SQL Server to dynamically manage memory
• Monitor MSSQLSERVER : Memory Manager\Total Server Memory (KB) counter using Performance Monitor
• Determine the maximum potential for non-buffer pool usage.
• 2GB for Windows
• xGB for SQL Server worker threads
• Each thread uses 0.5MB on x86, 2MB on x64, and 4MB on Itanium.
• 1GB for multi-page allocations, linked servers, and other consumers of memory outside the buffer pool
• 1–3GB for other applications that might be running on the system, such as backup programs
How to configure Max Server Memory
16
Memory and SQL Server
• On a server with 8 CPU cores and 16GB of RAM running SQL Server 2008 x64 and a third-party backup utility, you would allow the following:
• 2GB for Windows
• 1GB for worker threads (576 × 2MB, rounded down)
• 1GB for MPAs, etc.
• 1GB for the backup program
• The total is 5GB, so you would configure Max Server Memory to 11GB.
Example of Max Server Memory configuration
17
Memory and SQL Server
Performance Counters

• Memory: Available Bytes. Indicates how many bytes of memory are currently available for use by processes. Low values can indicate an overall shortage of memory on the computer or that an application is not releasing memory.
• Memory: Pages/sec. Indicates the number of pages that either were retrieved from disk due to hard page faults or written to disk to free space in the working set. A high rate could indicate excessive paging; monitor the Memory: Page Faults/sec counter to make sure that the disk activity is not caused by paging.
• Process: Page Faults/sec (SQL Server instance). The rate at which the Windows Virtual Memory Manager takes pages from SQL Server and other processes as it trims their working-set sizes. A high number indicates excessive paging and disk thrashing; use this counter to check whether SQL Server or another process is causing the excessive paging.
• SQL Server: Buffer Manager - Buffer Cache Hit Ratio. Monitors the percentage of required pages found in the buffer cache without reading from hard disk. Add more memory until the value is consistently greater than 90 percent.
• SQL Server: Buffer Manager - Total Pages. Monitors the total number of pages in the buffer cache, including database, free, and stolen pages from other processes. A low number may indicate frequent disk I/O or thrashing; consider adding more memory.
• SQL Server: Memory Manager - Total Server Memory (KB). Monitors the total amount of dynamic memory that the server is using. If this counter is consistently high in comparison to the amount of physical memory available, more memory may be required.
18
IO and SQL Server
• RAID 5
• Loved by storage administrators
• The dominant choice for non-database applications
• Cost effective and space efficient
• Minimizes the space required in the datacenter (fewer drives need fewer bays)
• RAID 10
• Microsoft's recommendation for log files
• Storage Area Networks (SANs)
• Performance is not always predictable if two servers share the same drive
• iSCSI Storage Area Networks
• Need dedicated switches for good performance
Choose the right hard disk subsystem
19
IO and SQL Server
• Best practices dictate that SQL Server
• data files,
• logs,
• tempdb files
• backup files
are all written to separate arrays
• Put log files on RAID 10
• Put data files on RAID 5 (to save money)
Choosing Which Files to Place on Which Disks
20
IO and SQL Server
• Increases IO performance, but carries a CPU penalty
• The SQL Server engine has to compress the data before writing a page, and decompress the data after reading a page
• In practice, this penalty is far outweighed by the time saved waiting for storage. Read more at http://msdn.microsoft.com/en-us/library/dd894051.aspx
• Example: If a 10GB index is compressed down to 3GB, an index scan completes 70% faster simply because the data takes less time to read off the drives.
• It is an Enterprise Edition feature
Using Compression to Gain Performance
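As a hedged sketch, page compression is enabled per object by rebuilding it (the table and index names below are hypothetical):

```sql
-- Illustrative: rebuild a hypothetical index with page compression.
-- Data compression is an Enterprise Edition feature.
ALTER INDEX IX_Orders_CustomerID
ON dbo.Orders
REBUILD WITH (DATA_COMPRESSION = PAGE);
```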
21
IO and SQL Server
Performance Counters

• % Disk Time. Monitors the percentage of time that the disk is busy with read/write activity. If this counter is high (more than 90 percent), check the Current Disk Queue Length counter.
• Avg. Disk Queue Length. Monitors the average number of read/write requests that are queued. This counter should be no more than twice the number of spindles.
• Current Disk Queue Length. Monitors the current number of read/write requests that are queued. This counter should be no more than twice the number of spindles.

• Monitor the Page Faults/sec counter in the Memory object to make sure that the disk activity is not caused by paging.
• If you have more than one logical partition on the same hard disk, use the Logical Disk counters rather than the Physical Disk counters.
22
1. Use Performance Monitor
23
How to Optimize Database for performance
Performance Tuning in SQL Server
24
Performance Optimization Model
Server Tuning
Locking
Indexing
Query Optimization
Schema Design
25
Schema Design Optimization
• In this process you organize data to minimize redundancy, which eliminates duplicated data and logical ambiguities in the database
Normalization
Normal forms:
• First: Every attribute is atomic, and there are no repeating groups
• Second: Complies with First Normal Form, and all non-key columns depend on the whole key
• Third: Complies with Second Normal Form, and all non-key columns are non-transitively dependent upon the primary key
26
Schema Design Optimization
• In this process you re-introduce redundancy to the database to optimize performance.
• When to use denormalization:
• To pre-aggregate data
• To avoid multiple/complex joins
• When not to use denormalization:
• To prevent simple joins
• To provide reporting data
• To prevent same row calculations
Denormalization
27
Schema Design Optimization
• In this process you group similar entities together into a single entity to reduce the amount of required data access code.
• Use generalization when:
• A large number of entities appear to be of the same type
• Multiple entities contain the same attributes
• Do not use generalization when:
• It results in an overly complex design that is difficult to manage
Generalization
28
Schema Design Optimization
Generalization Example
29
How to Optimize Query for performance
Performance Tuning in SQL Server
30
Key Measures for Query Performance
Key factors for query performance:
• Resources used to execute the query
• Time required for query execution
SQL Server tools to measure query performance:
• Performance Monitor
• SQL Server Profiler
31
Logical Execution of Query
32
Logical Execution of Query
Example Data
customerid city
ANTON Athens
CHRIS Salonica
FANIS Athens
NASOS Athens
Orderid customerid
1 NASOS
2 NASOS
3 FANIS
4 FANIS
5 FANIS
6 CHRIS
7 NULL
33
Logical Execution of Query
Example Query & Results

SELECT C.customerid, COUNT(O.orderid) AS numorders
FROM dbo.Customers AS C
LEFT OUTER JOIN dbo.Orders AS O ON C.customerid = O.customerid
WHERE C.city = 'Athens'
GROUP BY C.customerid
HAVING COUNT(O.orderid) < 3
ORDER BY numorders;
Customerid numorders
ANTON 0
NASOS 2
34
Logical Execution of Query
1st Step - Cross Join
Customerid City Orderid customerid
ANTON Athens 1 NASOS
ANTON Athens 2 NASOS
ANTON Athens 3 FANIS
ANTON Athens 4 FANIS
ANTON Athens 5 FANIS
ANTON Athens 6 CHRIS
ANTON Athens 7 NULL
CHRIS Salonica 1 NASOS
CHRIS Salonica 2 NASOS
CHRIS Salonica 3 FANIS
CHRIS Salonica 4 FANIS
CHRIS Salonica 5 FANIS
CHRIS Salonica 6 CHRIS
CHRIS Salonica 7 NULL
FANIS Athens 1 NASOS
FANIS Athens 2 NASOS
FANIS Athens 3 FANIS
FANIS Athens 4 FANIS
FANIS Athens 5 FANIS
FANIS Athens 6 CHRIS
FANIS Athens 7 NULL
NASOS Athens 1 NASOS
NASOS Athens 2 NASOS
NASOS Athens 3 FANIS
NASOS Athens 4 FANIS
NASOS Athens 5 FANIS
NASOS Athens 6 CHRIS
NASOS Athens 7 NULL
FROM dbo.Customers AS C ... JOIN dbo.Orders AS O
35
Logical Execution of Query
2nd Step - Apply Join Condition (ON Filter)
ON C.customerid = O.customerid

Customerid City Orderid customerid ON Filter
ANTON Athens 1 NASOS FALSE
ANTON Athens 2 NASOS FALSE
ANTON Athens 3 FANIS FALSE
ANTON Athens 4 FANIS FALSE
ANTON Athens 5 FANIS FALSE
ANTON Athens 6 CHRIS FALSE
ANTON Athens 7 NULL UNKNOWN
CHRIS Salonica 1 NASOS FALSE
CHRIS Salonica 2 NASOS FALSE
CHRIS Salonica 3 FANIS FALSE
CHRIS Salonica 4 FANIS FALSE
CHRIS Salonica 5 FANIS FALSE
CHRIS Salonica 6 CHRIS TRUE
CHRIS Salonica 7 NULL UNKNOWN
FANIS Athens 1 NASOS FALSE
FANIS Athens 2 NASOS FALSE
FANIS Athens 3 FANIS TRUE
FANIS Athens 4 FANIS TRUE
FANIS Athens 5 FANIS TRUE
FANIS Athens 6 CHRIS FALSE
FANIS Athens 7 NULL UNKNOWN
NASOS Athens 1 NASOS TRUE
NASOS Athens 2 NASOS TRUE
NASOS Athens 3 FANIS FALSE
NASOS Athens 4 FANIS FALSE
NASOS Athens 5 FANIS FALSE
NASOS Athens 6 CHRIS FALSE
NASOS Athens 7 NULL UNKNOWN
Customerid City Orderid customerid
CHRIS Salonica 6 CHRIS
FANIS Athens 3 FANIS
FANIS Athens 4 FANIS
FANIS Athens 5 FANIS
NASOS Athens 1 NASOS
NASOS Athens 2 NASOS
36
Logical Execution of Query
3rd Step - Apply OUTER Join
FROM dbo.Customers AS C LEFT OUTER JOIN dbo.Orders AS O
Customerid City Orderid customerid
CHRIS Salonica 6 CHRIS
FANIS Athens 3 FANIS
FANIS Athens 4 FANIS
FANIS Athens 5 FANIS
NASOS Athens 1 NASOS
NASOS Athens 2 NASOS
ANTON Athens NULL NULL
37
Logical Execution of Query
4th Step - Apply WHERE Filter
WHERE C.city = 'Athens'
Customerid City Orderid customerid
FANIS Athens 3 FANIS
FANIS Athens 4 FANIS
FANIS Athens 5 FANIS
NASOS Athens 1 NASOS
NASOS Athens 2 NASOS
ANTON Athens NULL NULL
38
Logical Execution of Query
5th Step - Apply Grouping
GROUP BY C.customerid
Customerid City Orderid customerid
FANIS Athens 3 FANIS
FANIS Athens 4 FANIS
FANIS Athens 5 FANIS
NASOS Athens 1 NASOS
NASOS Athens 2 NASOS
ANTON Athens NULL NULL
39
Logical Execution of Query
6th Step - Apply CUBE or ROLLUP (not used in this query, so this step is skipped)
40
Logical Execution of Query
7th Step - Apply HAVING Filter
HAVING COUNT(O.orderid) < 3
Customerid City Orderid customerid
NASOS Athens 1 NASOS
NASOS Athens 2 NASOS
ANTON Athens NULL NULL
41
Logical Execution of Query
8th Step - Apply SELECT List
SELECT C.customerid, COUNT(O.orderid) AS numorders
Customerid numorders
NASOS 2
ANTON 0
42
Logical Execution of Query
9th Step - Apply DISTINCT (not used in this query, so this step is skipped)
43
Logical Execution of Query
10th Step - Apply ORDER BY
ORDER BY numorders
Customerid numorders
ANTON 0
NASOS 2
44
Logical Execution of Query
11th Step - Apply TOP (not used in this query, so this step is skipped)
45
Logical Execution of Query
Get the Result
Customerid numorders
ANTON 0
NASOS 2
46
Top 10 for Building Efficient Queries
Performance Tuning in SQL Server
How to Optimize Query for performance
47
Top 10 for Building Efficient Queries
• The most important factor to consider when tuning queries is how to properly express logic in a set-based manner.
• Cursors or other procedural constructs limit the query optimizer’s ability to generate flexible query plans.
• Cursors can therefore reduce the possibility of performance improvements in many situations
1.Favor set-based logic over procedural or cursor logic
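As an illustration of the point above (table and column names are hypothetical), the same modification expressed row-by-row with a cursor and then as a single set-based statement:

```sql
-- Illustrative: cursor (procedural) version, one UPDATE per row.
-- Table and column names are hypothetical.
DECLARE @id int;
DECLARE c CURSOR FOR
    SELECT OrderID FROM dbo.Orders WHERE Status = 'Pending';
OPEN c;
FETCH NEXT FROM c INTO @id;
WHILE @@FETCH_STATUS = 0
BEGIN
    UPDATE dbo.Orders SET Status = 'Processed' WHERE OrderID = @id;
    FETCH NEXT FROM c INTO @id;
END
CLOSE c;
DEALLOCATE c;

-- Set-based version: one statement, one plan the optimizer can improve.
UPDATE dbo.Orders SET Status = 'Processed' WHERE Status = 'Pending';
```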
48
Top 10 for Building Efficient Queries
• The query optimizer can often produce widely different plans for logically equivalent queries.
• Test different techniques, such as joins or subqueries, to find out which perform better in various situations.
2.Test query variations for performance
49
Top 10 for Building Efficient Queries
• You must work with the SQL Server query optimizer, rather than against it, to create efficient queries.
• Query hints tell the query optimizer how to behave and therefore override the optimizer’s ability to do its job properly.
• If you eliminate the optimizer’s choices, you might limit yourself to a query plan that is less than ideal.
• Use query hints only when you are absolutely certain that the query optimizer is incorrect.
3.Avoid query hints.
50
Top 10 for Building Efficient Queries
• Since the query optimizer is able to integrate subqueries into the main query flow in a variety of ways, subqueries might help in various query tuning situations.
• Subqueries can be especially useful in situations in which you create a join to a table only to verify the existence of correlated rows. For better performance, replace these kinds of joins with correlated subqueries that make use of the EXISTS operator
4.Use correlated subqueries to improve performance.
-- Using a LEFT JOIN
SELECT a.parent_key
FROM parent_table a
LEFT JOIN child_table b ON a.parent_key = b.parent_key
WHERE b.parent_key IS NULL

-- Using NOT EXISTS
SELECT a.parent_key
FROM parent_table a
WHERE NOT EXISTS (SELECT * FROM child_table b WHERE a.parent_key = b.parent_key)
51
Top 10 for Building Efficient Queries
• Scalar user-defined functions, unlike scalar subqueries, are not optimized into the main query plan.
• Instead, you must call them row-by-row by using a hidden cursor.
• This is especially troublesome in the WHERE clause because the function is called for every input row.
• Using a scalar function in the SELECT list is much less problematic because the rows have already been filtered in the WHERE clause.
5. Avoid using a scalar user-defined function in the WHERE clause.
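A hedged sketch of the problem (the function, table, and column names are hypothetical): the first query invokes a scalar UDF once per input row, while the second inlines the same logic so the optimizer can build a set-based plan.

```sql
-- Illustrative: a scalar UDF in the WHERE clause is evaluated row by row.
SELECT OrderID
FROM dbo.Orders
WHERE dbo.fn_OrderTotal(OrderID) > 1000;  -- hypothetical scalar UDF

-- Inlining the same logic lets the optimizer work with the whole set.
SELECT O.OrderID
FROM dbo.Orders AS O
JOIN dbo.OrderDetails AS D ON D.OrderID = O.OrderID
GROUP BY O.OrderID
HAVING SUM(D.LineTotal) > 1000;
```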
52
Top 10 for Building Efficient Queries
• In contrast to scalar user-defined functions, table-valued functions are often helpful from a performance point of view when you use them as derived tables.
• The query processor evaluates a derived table only once per query.
• If you embed the logic in a table-valued user-defined function, you can encapsulate and reuse it for other queries.
6.Use table-valued user-defined functions as derived tables.
CREATE FUNCTION Sales.fn_SalesByStore (@storeid int)
RETURNS TABLE
AS RETURN
(
    SELECT P.ProductID, P.Name, SUM(SD.LineTotal) AS 'YTD Total'
    FROM Production.Product AS P
    JOIN Sales.SalesOrderDetail AS SD ON SD.ProductID = P.ProductID
    JOIN Sales.SalesOrderHeader AS SH ON SH.SalesOrderID = SD.SalesOrderID
    WHERE SH.CustomerID = @storeid
    GROUP BY P.ProductID, P.Name
)
53
Top 10 for Building Efficient Queries
• Use a subquery instead.
• The process of grouping rows becomes more expensive as you add more columns to the GROUP BY list.
• If your query has few column aggregations but many non-aggregated grouped columns, you might be able to refactor it by using a correlated scalar subquery.
• This will result in less work for grouping in the query and therefore possibly better overall query performance.
7.Avoid unnecessary GROUP BY columns
SELECT p1.ProductSubcategoryID, p1.Name
FROM Production.Product p1
WHERE p1.ListPrice >
(
    SELECT AVG(p2.ListPrice)
    FROM Production.Product p2
    WHERE p1.ProductSubcategoryID = p2.ProductSubcategoryID
)
54
Top 10 for Building Efficient Queries
• The CASE expression is one of the most powerful logic tools available to T-SQL programmers.
• Using CASE, you can dynamically change column output on a row-by-row basis.
• This enables your query to return only the data that is absolutely necessary and therefore reduces the I/O operations and network overhead that is required to assemble and send large result sets to clients.
8.Use CASE expressions to include variable logic in a query
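As an illustrative sketch (AdventureWorks-style names assumed), a CASE expression can fold per-row classification logic into a single pass over the data:

```sql
-- Illustrative: classify products in one query instead of several.
SELECT Name,
       ListPrice,
       CASE
           WHEN ListPrice = 0  THEN 'Not for resale'
           WHEN ListPrice < 50 THEN 'Budget'
           ELSE 'Premium'
       END AS PriceBand
FROM Production.Product;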
55
Top 10 for Building Efficient Queries
• The query optimizer’s main strategy is to find query plans that satisfy queries by using single operations.
• Although this strategy works for most cases, it can fail for larger sets of data because the huge joins require so much I/O overhead.
• In some cases, a better option is to reduce the working set by using temporary tables to materialize key parts of the query. You can then join the temporary tables to produce a final result.
• This technique is not favorable in heavily transactional systems because of the overhead of temporary table creation, but it can be very useful in decision support situations.
9. Divide joins into temporary tables when you query very large tables.
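A hedged sketch of the technique (AdventureWorks-style names assumed; the join key is simplified for illustration): materialize a key part of the query into a temporary table, then join the smaller working set.

```sql
-- Illustrative: reduce the working set with a temp table, then join it.
SELECT CustomerID, SUM(TotalDue) AS TotalSales
INTO #TopCustomers
FROM Sales.SalesOrderHeader
GROUP BY CustomerID
HAVING SUM(TotalDue) > 100000;

-- Join the much smaller temp table to produce the final result.
-- The join key here is simplified for illustration.
SELECT C.CustomerID, C.TotalSales, P.FirstName, P.LastName
FROM #TopCustomers AS C
JOIN Person.Person AS P ON P.BusinessEntityID = C.CustomerID;

DROP TABLE #TopCustomers;
```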
56
Top 10 for Building Efficient Queries
• Rebuild logic as multiple queries
• Rebuild logic as a user-defined function
• Rebuild logic as a complex query with a case expression
10. Refactoring Cursors into Queries.
58
1. Query optimization
2. Cursor refactoring
59
Stored Procedures and Views
Performance Tuning in SQL Server
Best Practices for
60
Stored Procedures
• Avoid using “sp_” as name prefix
• Avoid stored procedures that accept parameters for table names
• Use the SET NOCOUNT ON option in stored procedures
• Limit the use of temporary tables and table variables in stored procedures
• If a stored procedure does multiple data modification operations, make sure to enlist them in a transaction.
• When working with dynamic T-SQL, use sp_executesql instead of the EXEC statement
Best Practices
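A minimal sketch combining several of these practices (all object names are hypothetical): no "sp_" prefix, SET NOCOUNT ON, and multiple data modifications enlisted in one transaction.

```sql
-- Illustrative: a stored procedure following the practices above.
CREATE PROCEDURE dbo.usp_UpdateOrderStatus
    @OrderID int,
    @Status  nvarchar(20)
AS
BEGIN
    SET NOCOUNT ON;  -- suppress row-count messages

    BEGIN TRANSACTION;  -- enlist both modifications in one transaction
        UPDATE dbo.Orders
        SET Status = @Status
        WHERE OrderID = @OrderID;

        INSERT INTO dbo.OrderAudit (OrderID, Status, ChangedAt)
        VALUES (@OrderID, @Status, GETDATE());
    COMMIT TRANSACTION;
END
```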
61
Views
• Use views to abstract complex data structures
• Use views to encapsulate aggregate queries
• Use views to provide more user-friendly column names
• Think of reusability when designing views
• Avoid using the ORDER BY clause in views that contain a TOP 100 PERCENT clause.
• Utilize indexes on views that include aggregate data
Best Practices
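As a hedged sketch of the last point (all object names are hypothetical), an indexed view that encapsulates an aggregate query; SCHEMABINDING and COUNT_BIG(*) are required before a view can be indexed:

```sql
-- Illustrative: an indexed view over a hypothetical aggregate query.
CREATE VIEW dbo.vw_SalesByProduct
WITH SCHEMABINDING
AS
SELECT ProductID,
       SUM(LineTotal) AS TotalSales,
       COUNT_BIG(*)   AS RowCnt
FROM dbo.OrderDetails
GROUP BY ProductID;
GO
-- A unique clustered index materializes the view.
CREATE UNIQUE CLUSTERED INDEX IX_vw_SalesByProduct
ON dbo.vw_SalesByProduct (ProductID);
```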
62
Optimizing an Indexing Strategy
Performance Tuning in SQL Server
63
Index Architecture
Clustered Nonclustered
64
Types of Indexes
• Clustered
• Nonclustered
• Unique
• Index with included column
• Indexed view
• Full-text
• XML
65
Guidelines for designing indexes
• Examine the database characteristics. For example, your indexing strategy will differ between an online transaction processing system with frequent data updates and a data warehousing system that contains primarily read-only data.
• Understand the characteristics of the most frequently used queries and the columns used in those queries. For example, you might need to create an index to support a query that joins tables or that uses a unique column as its search argument.
• Decide on the index options that might enhance the performance of the index. Options that can affect the efficiency of an index include FILLFACTOR and ONLINE.
• Determine the optimal storage location for the index. You can choose to store a nonclustered index in the same filegroup as the table or on a different filegroup. If you store the index in a filegroup that is on a different disk than the table filegroup, you might find that disk I/O performance improves because multiple disks can be read at the same time.
• Balance read and write performance in the database. You can create many nonclustered indexes on a single table, but it is important to remember that each new index has an impact on the performance of insert and update operations. This is because nonclustered indexes maintain copies of the indexed data. Each copy of the data requires I/O operations to maintain it, and you might cause a reduction in write performance if the database has to write too many copies. You must ensure that you balance the needs of both select queries and data updates when you design an indexing strategy.
• Consider the size of tables in the database. The query processor might take longer to traverse the index of a small table than to perform a simple table scan. Therefore, if you create an index on a small table, the processor might never use the index. However, the database engine must still update the index when the data in the table changes.
• Consider the use of indexed views. Indexes on views can provide significant performance gains when the view contains aggregations, table joins, or both.
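The options mentioned above can be sketched in a single statement (object names are hypothetical; ONLINE = ON is an Enterprise Edition feature):

```sql
-- Illustrative: a nonclustered index with an included column and the
-- FILLFACTOR and ONLINE options discussed above.
CREATE NONCLUSTERED INDEX IX_Orders_CustomerID_OrderDate
ON dbo.Orders (CustomerID, OrderDate)
INCLUDE (TotalDue)                      -- covers the query without widening the key
WITH (FILLFACTOR = 80, ONLINE = ON);    -- leave free space; keep the table available
```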
66
Nonclustered Index
• Create a nonclustered index for columns used for:
• Predicates
• Joins
• Aggregation
• Avoid the following when designing nonclustered indexes:
• Redundant indexes
• Wide composite indexes
• Indexes for one query
• Nonclustered indexes that include the clustered index
do’s & don’ts
67
Clustered Indexes
• Use clustered indexes for:
• Range queries
• Primary key queries
• Queries that retrieve data from many columns
• Do not use clustered indexes for:
• Columns that have frequent changes
• Wide keys
do’s & don’ts
68
1. Database Engine Tuning Advisor
69
Define and implement monitoring standards for database servers and instances
Performance Tuning in SQL Server
70
Monitoring Stages
Monitoring the database environment
Narrowing down a performance issue to a particular database environment area
Narrowing down a performance issue to a particular database environment object
Troubleshooting individual problems
Implementing a solution
Stage 1
Stage 5
Stage 4
Stage 3
Stage 2
71
Monitoring the database environment
• You must collect a broad range of performance data.
• The monitoring system must provide you with enough data to solve the current performance issues.
• You must set up a monitoring solution that collects data from a broad range of sources.
• For active data, you can use active collection tools:
• System Monitor
• Error Logs
• SQL Server Profiler
• For inactive data, you can use passive sources:
• Database configuration settings
• Server configuration settings
• Metadata from the SQL Server installation and databases
73
Guidelines for Auditing and Comparing Test Results
• Scan the outputs gathered for any obvious performance issues.
• Automate the analysis with the use of custom scripts and tools.
• Analyze data soon after it is collected.
• Performance data has a short life span; if there is a delay, the quality of the analysis will suffer.
• Do not stop analyzing data when you discover the first set of issues.
• Continue to analyze until all performance issues have been identified.
• Take into account the entire database environment when you analyze performance data.
74
Monitoring Tools
• SQL Server Profiler
• System Monitor
• SQLDIAG
• DMVs for Monitoring
• Performance Data Collector
• SQLNexus (CodePlex)
• SQLIO
75
SQL Server Profiler guidelines
• Schedule data tracing for peak and nonpeak hours
• Use Transact-SQL to create your own SQL Server Profiler traces to minimize the performance impact of SQL Server Profiler.
• Do not collect the SQL Server Profiler traces directly into a SQL Server table.
• After the trace has ended, use fn_trace_gettable function to load the data into a table.
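A short sketch of that last step (the file path and table name are hypothetical):

```sql
-- Illustrative: after the trace has ended, load the trace file into a
-- table for analysis. Path and table name are hypothetical.
SELECT *
INTO dbo.TraceResults
FROM fn_trace_gettable(N'C:\Traces\MyTrace.trc', DEFAULT);
```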
• Store collected data on a computer that is not the instance that you are tracing.
76
System Monitor guidelines
• Execute System Monitor traces at different times during the week, month.
• Collect data every 36 seconds for a week.
• If the data collection period spans more than a week, set the collection time interval in the range of 300 to 600 seconds.
• Collect the data in a comma-delimited text file. You can load this text file into SQL Server Profiler for further analysis.
• Execute System Monitor on one server to collect the performance data of another server.
77
DMVs for Monitoring
DMV Description
sys.dm_os_threads Returns a list of all SQL Server Operating System threads that are running under the SQL Server process.
sys.dm_os_memory_pools Returns a row for each object store in the instance of SQL Server. You can use this view to monitor cache memory use and to identify bad caching behavior
sys.dm_os_memory_cache_counters Returns a snapshot of the health of a cache, provides run-time information about the cache entries allocated, their use, and the source of memory for the cache entries.
sys.dm_os_wait_stats Returns information about all the waits encountered by threads that executed. You can use this aggregated view to diagnose performance issues with SQL Server and also with specific queries and batches.
sys.dm_os_sys_info Returns a miscellaneous set of useful information about the computer, and about the resources available to and consumed by SQL Server.
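A common starting query against one of the DMVs above, sketched here as an example:

```sql
-- Illustrative: top waits by total wait time, a typical first look
-- at sys.dm_os_wait_stats when diagnosing performance issues.
SELECT TOP (10)
       wait_type,
       wait_time_ms,
       waiting_tasks_count
FROM sys.dm_os_wait_stats
ORDER BY wait_time_ms DESC;
```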
78
Performance Data Collector
• Management Data Warehouse
• Performance Data Collection
• Performance data collection components
• System collection sets
• User-defined collection sets
• Reporting
• Centralized Administration: Bringing it all together
Performance Data Collection and Reporting
79
1. Resource Governor
2. SQL Server Profiler (if time permits)
80
Troubleshoot SQL Server concurrency issues
Performance Tuning in SQL Server
81
Transaction Isolation Levels
• Read uncommitted
• Read committed
• Repeatable read
• Snapshot
• Serializable
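As a brief sketch, the isolation level is set per session before the transaction begins (the table name is hypothetical):

```sql
-- Illustrative: run a transaction under an explicit isolation level.
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;

BEGIN TRANSACTION;
    SELECT COUNT(*) FROM dbo.Orders;  -- hypothetical table
COMMIT TRANSACTION;
```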
82
Reduce Locking and Blocking
• Keep logical transactions short
• Avoid cursors
• Use efficient and well-indexed queries
• Use the minimum transaction isolation level required
• Keep triggers to a minimum
Guidelines
83
Minimizing Deadlocks
• Access objects in the same order.
• Avoid user interaction in transactions.
• Keep transactions short and in one batch.
• Use a lower isolation level.
• Use a row versioning-based isolation level.
• Set the READ_COMMITTED_SNAPSHOT database option ON to enable read-committed transactions to use row versioning.
• Use snapshot isolation.
• Use bound connections.
• Allow two or more connections to share the same transaction and locks.
• Can work on the same data without lock conflicts.
• Can be created from multiple connections within the same application, or from multiple applications with separate connections.
• Make coordinating actions across multiple connections easier.
• http://msdn.microsoft.com/en-us/library/aa213063(SQL.80).aspx
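The row versioning options mentioned above are enabled per database; as a hedged sketch (the database name is hypothetical):

```sql
-- Illustrative: enable row versioning-based isolation for a database.
-- Turning on READ_COMMITTED_SNAPSHOT requires no other active
-- connections to the database.
ALTER DATABASE SalesDB SET READ_COMMITTED_SNAPSHOT ON;

-- Optionally allow explicit SNAPSHOT isolation as well:
ALTER DATABASE SalesDB SET ALLOW_SNAPSHOT_ISOLATION ON;
```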
84
SQLschool.gr
• A dream
• Reliable source of knowledge for SQL Server
• http://www.autoexec.gr/blogs/antonch
85