
Fundamentals ASM


Transcript

    Session id: 40288

    Automatic Storage Management: The New Best Practice

    Steve Adams, Ixora
    Rich Long, Oracle Corporation


    The Challenge

    Today's databases

    large

    growing

    Storage requirements

    acceptable performance

    expandable and scalable

    high availability, low maintenance


    Outline

    Introduction

    get excited about ASM

    Current best practices

    complex, demanding, but achievable

    Automatic storage management

    simple, easy, better

    Conclusion


    Current Best Practices

    General principles to follow

    direct I/O

    asynchronous I/O

    striping

    mirroring

    load balancing

    Reduced expertise and analysis required
    avoids all the worst mistakes


    Buffered I/O

    [Diagram: file system cache alongside the database cache (SGA and PGA)]

    Reads
    stat: physical reads
    read from cache
    may require a physical read

    Writes
    written to cache synchronously
    (Oracle waits until the data is safely on disk too)


    Direct I/O

    [Diagram: I/O bypassing the file system cache]

    I/O
    bypasses the file system cache

    Memory
    file system cache does not contain database blocks (so it's smaller)
    database cache can be larger


    Buffered I/O Cache Usage

    [Diagram: how hot data, warm data, cold data and o/s data occupy the database cache and the file system cache when I/O is buffered]


    Direct I/O Cache Usage

    [Diagram: how the same data classes occupy the database cache and the file system cache when direct I/O is used]


    Cache Effectiveness

    Buffered I/O
    overlap wastes memory
    caches single-use data
    simple LRU policy
    file system cache hits are relatively expensive
    extra physical read and write overheads
    floods the file system cache with Oracle data

    Direct I/O
    no overlap
    no single-use data
    segmented LRU policy
    all cached data is found in the database cache
    no physical I/O overheads
    non-Oracle data cached more effectively


    Buffered Log Writes

    [Diagram: the log buffer in the SGA writing through the file system cache]

    Most redo log writes address part of a file system block
    The file system reads the target block first, then copies the data
    Oracle waits for both the read and the write
    a full disk rotation is needed in between


    I/O Efficiency

    Buffered I/O
    small writes must wait for a preliminary read
    large reads & writes are performed as a series of single-block operations
    tablespace block size must match the file system block size exactly

    Direct I/O
    small writes have no need to re-write adjacent data
    large reads & writes are passed down the stack without any fragmentation
    may use any tablespace block size without penalty


    Direct I/O How To

    May need to

    set the FILESYSTEMIO_OPTIONS parameter
    set file system mount options

    configure using operating system commands

    Depends on

    operating system platform

    file system type
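    For illustration only, a minimal sketch of the parameter approach; whether direct I/O actually takes effect still depends on the operating system and file system:

        -- Sketch: request direct (and asynchronous) file system I/O.
        -- Valid values are NONE, ASYNCH, DIRECTIO and SETALL.
        ALTER SYSTEM SET filesystemio_options = 'SETALL' SCOPE = SPFILE;
        -- The parameter is static, so restart the instance and then verify
        -- with: SHOW PARAMETER filesystemio_options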


    Synchronous I/O

    Processes wait for I/O completion and results
    A process can only use one disk at a time

    For a series of I/Os to the same disk
    the hardware cannot service the requests in the optimal order
    scheduling latencies

    [Diagram: DBWn write batch]


    Asynchronous I/O

    Can perform other tasks while waiting for I/O
    Can use many disks at once

    For a batch of I/Os to the same disk
    the hardware can service the requests in the optimal order
    no scheduling latencies

    [Diagram: DBWn write batch]


    Asynchronous I/O How To

    Threaded asynchronous I/O simulation

    multiple threads perform synchronous I/O
    high CPU cost if intensively used

    only available on some platforms

    Kernelized asynchronous I/O

    must use raw devices or a pseudo device driver product

    e.g. Veritas Quick I/O, Oracle Disk Manager, etc.
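    As a hedged sketch, the database-side settings usually involved are the standard parameters below; whether kernelized asynchronous I/O is actually available still depends on the platform and storage type:

        -- Sketch: enable asynchronous I/O where the platform supports it.
        ALTER SYSTEM SET disk_asynch_io = TRUE SCOPE = SPFILE;           -- raw devices
        ALTER SYSTEM SET filesystemio_options = 'ASYNCH' SCOPE = SPFILE; -- file systems
        -- If no native support exists, I/O slaves can simulate it:
        -- ALTER SYSTEM SET dbwr_io_slaves = 4 SCOPE = SPFILE;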


    Striping Benefits

    Concurrency

    hot spots are spread over multiple disks which can service concurrent requests in parallel

    Transfer rate

    large reads & writes use multiple disks in parallel

    I/O spread

    full utilization of hardware investment
    important for systems with relatively few large disks


    Striping Fine or Coarse

    Concurrency: coarse grain
    most I/Os should be serviced by a single disk
    caching ensures that disk hot spots are not small
    1 MB is a reasonable stripe element size

    Transfer rate: fine grain
    large I/Os should be serviced by multiple disks
    but very fine striping increases rotational latency and reduces concurrency
    128 KB is commonly optimal


    Striping Breadth

    Comprehensive (SAME)

    all disks in one stripe
    ensures even utilization of all disks
    needs reconfiguration to increase capacity
    without a disk cache, log write performance may be unacceptable

    Broad (SAME sets)
    two or more stripe sets
    one set may be busy while another is idle
    can increase capacity by adding a new set
    can use a separate disk set to isolate log files from I/O interference


    Striping How To

    Stripe breadth

    broad (SAME sets)
    to allow for growth
    to isolate log file I/O

    comprehensive (SAME) otherwise

    Stripe grain

    choose coarse for high concurrency applications

    choose fine for low concurrency applications


    Data Protection

    Mirroring

    only half the raw disk capacity is usable
    can read from either side of the mirror
    must write to both sides of the mirror
    Half the data capacity
    Maximum I/O capacity

    RAID-5
    parity data uses the capacity of one disk
    only one image from which to read
    must read and write both the data and the parity
    Nearly full data capacity
    Less than half the I/O capacity

    Data capacity is much cheaper than I/O capacity.


    Mirroring Software or Hardware

    Software mirroring

    a crash can leave mirrors inconsistent
    complete resilvering takes too long
    so a dirty region log is normally needed

    enumerates potentially inconsistent regions

    makes resilvering much faster

    but it is a major performance overhead

    Hardware mirroring is best practice

    hot spare disks should be maintained


    Data Protection How To

    Choose mirroring, not RAID-5

    disk capacity is cheap
    I/O capacity is expensive

    Use hardware mirroring if possible

    avoid dirty region logging overheads

    Keep hot spares

    to re-establish mirroring quickly after a failure


    Load Balancing Triggers

    Performance tuning

    poor I/O performance
    adequate I/O capacity

    uneven workload

    Workload growth

    inadequate I/O capacity

    new disks purchased

    workload must be redistributed

    Data growth

    data growth requires more disk capacity
    placing the new data on the new disks would introduce a hot spot


    Load Balancing Reactive

    Approach

    monitor I/O patterns and densities
    move files to spread the load out evenly

    Difficulties

    workload patterns may vary

    file sizes may differ, thus preventing swapping

    stripe sets may have different I/O characteristics
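    A sketch of the kind of monitoring query used in this reactive approach (standard V$ views; thresholds and interpretation are left to the DBA):

        -- Sketch: per-datafile physical I/O counts, used to spot uneven load.
        SELECT d.name, f.phyrds, f.phywrts
          FROM v$datafile d, v$filestat f
         WHERE d.file# = f.file#
         ORDER BY f.phyrds + f.phywrts DESC;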


    Load Balancing How To

    Be prepared

    choose a small, fixed datafile size
    use multiple such datafiles for each tablespace

    distribute these datafiles evenly over stripe sets

    When adding capacity

    for each tablespace, move datafiles pro-rata from the existing stripe sets into the new one
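    As an illustration of this layout (file names, sizes and mount points are hypothetical), a tablespace might be built from several small, fixed-size datafiles spread across the stripe sets:

        -- Sketch: four fixed-size datafiles alternating across two stripe sets.
        CREATE TABLESPACE app_data
          DATAFILE '/stripe_set1/app_data01.dbf' SIZE 2G,
                   '/stripe_set2/app_data02.dbf' SIZE 2G,
                   '/stripe_set1/app_data03.dbf' SIZE 2G,
                   '/stripe_set2/app_data04.dbf' SIZE 2G;
        -- When a new stripe set is added, take the tablespace offline, copy a
        -- pro-rata share of the files, then point Oracle at the new copies:
        -- ALTER TABLESPACE app_data RENAME DATAFILE '<old path>' TO '<new path>';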


    Automatic Storage Management

    What is ASM?

    Disk Groups
    Dynamic Rebalancing
    ASM Architecture
    ASM Mirroring


    Automatic Storage Management

    New capability in the Oracle database kernel

    Provides a vertical integration of the file system and volume manager for simplified management of database files

    Spreads database files across all available storage for optimal performance

    Enables simple and non-intrusive resource allocation with automatic rebalancing

    Virtualizes storage resources


    ASM Disk Groups

    A pool of disks managed as a logical unit

    Partitions the total disk space into uniform-sized megabyte units

    ASM spreads each file evenly across all disks in a disk group

    Coarse or fine grain striping based on file type

    Disk groups are integrated with Oracle Managed Files

    [Diagram: a disk group]
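    A sketch of how a disk group might be created from an ASM instance (disk paths are hypothetical; EXTERNAL REDUNDANCY defers mirroring to the storage hardware):

        -- Sketch: a disk group built from four disks, mirrored by the hardware.
        CREATE DISKGROUP data EXTERNAL REDUNDANCY
          DISK '/dev/rdsk/disk01',
               '/dev/rdsk/disk02',
               '/dev/rdsk/disk03',
               '/dev/rdsk/disk04';
        -- Database files are then placed in the group by name, e.g. '+DATA'.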


    ASM Dynamic Rebalancing

    Automatic online rebalance whenever the storage configuration changes

    Only moves data proportional to the storage added

    No need for manual I/O tuning

    Online migration to new storage

    [Diagram: a disk group rebalancing as disks are added]
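    As a sketch, the rebalance is triggered simply by changing the disk group (the disk path and POWER value are illustrative; POWER throttles how aggressively extents are moved):

        -- Sketch: adding a disk starts an automatic, online rebalance.
        ALTER DISKGROUP data ADD DISK '/dev/rdsk/disk05'
          REBALANCE POWER 4;
        -- Progress can be monitored in V$ASM_OPERATION.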


    ASM Architecture

    [Diagrams: a non-RAC database server runs one Oracle DB instance and one ASM instance against a pool of storage organized as a disk group; in a RAC configuration, clustered servers each run a DB instance and an ASM instance against a clustered pool of storage; one or more disk groups can be shared, and a single ASM instance on each server can serve multiple RAC or non-RAC database instances]
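    For context, an ASM instance is a small Oracle instance of its own (INSTANCE_TYPE = ASM); from the database side, the storage it manages is visible through the V$ASM views. A minimal sketch, assuming a mounted disk group:

        -- Sketch: inspect disk groups and their member disks.
        SELECT name, state, type, total_mb, free_mb FROM v$asm_diskgroup;
        SELECT group_number, path, total_mb FROM v$asm_disk;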


    ASM Mirroring

    3 choices for disk group redundancy

    External: defers to hardware mirroring
    Normal: 2-way mirroring
    High: 3-way mirroring

    Integration with the database removes the need for dirty region logging
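    A sketch of a disk group that uses ASM's own 2-way mirroring (failure group names and disk paths are hypothetical); extents are mirrored across failure groups so that both copies never share a single point of failure:

        -- Sketch: NORMAL redundancy with one failure group per controller.
        CREATE DISKGROUP data NORMAL REDUNDANCY
          FAILGROUP controller1 DISK '/dev/rdsk/c1d1', '/dev/rdsk/c1d2'
          FAILGROUP controller2 DISK '/dev/rdsk/c2d1', '/dev/rdsk/c2d2';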


    ASM Mirroring

    Mirror at extent level

    Mix primary & mirror extents on each disk

    No hot spare disk required
    Just spare capacity

    Failed disk load spread among survivors
    Maintains balanced I/O load



    Conclusion

    Best practice is built into ASM

    ASM is easy

    ASM benefits

    performance

    availability

    automation



    Best Practice Is Built Into ASM

    I/O to ASM files is direct, not buffered

    ASM allows kernelized asynchronous I/O

    ASM spreads the I/O as broadly as possible
    can have both fine and coarse grain striping

    ASM can provide software mirroring
    does not require dirty region logging
    does not require hot spares, just spare capacity

    When new disks are added, ASM does load balancing automatically, without downtime


    ASM is Easy

    You only need to answer two questions

    Do you need a separate log file disk group?
    intensive OLTP application with no disk cache

    Do you need ASM mirroring?

    storage not mirrored by the hardware

    ASM will do everything else automatically

    Storage management is entirely automated

    using BIGFILE tablespaces, you need never name or refer to a datafile again
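    As an illustration of how little naming is involved (the disk group and tablespace names are hypothetical):

        -- Sketch: Oracle Managed Files pointed at a disk group; no datafile
        -- is ever named by the DBA.
        ALTER SYSTEM SET db_create_file_dest = '+DATA';
        CREATE BIGFILE TABLESPACE app_data;
        -- ASM chooses the file name, striping and (if configured) mirroring.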


    ASM Benefits

    ASM will improve performance

    very few sites follow the current best practices

    ASM will improve system availability

    no downtime needed for storage changes

    ASM will save you time

    it automates a complex DBA task entirely


    Questions & Answers



    Next Steps

    Automatic Storage Management demo in the Oracle DEMOgrounds

    Pod 5DD

    Pod 5QQ


    Reminder: please complete the OracleWorld online session survey

    Thank you.


