SPARC T3 Based Servers Troubleshooting
Transcript
  • Slide 1

    Oracle

  • Slide 2

    WZT-5160

    SPARC T3 Based Servers Architecture and Features

    Welcome to the SPARC T3 Based Servers Architecture and Features module. This content covers the overall architecture and major features of Oracle's SPARC T3-1, SPARC T3-2, and SPARC T3-4 rack-mount servers, along with Oracle's SPARC T3-1B server blade.

  • Slide 3

    3

    Objectives

    Identify the features of the servers

    Explain the architecture of the servers

    List the servers' FRUs and CRUs

    Upon completion of this module, you should be able to: identify the features of the servers, explain their architecture, and list their FRUs and CRUs.

  • Slide 4

    4

    Additional Resources


    Oracle, Inc., SPARC T3-1, T3-2 and T3-4 Servers Administration Guide, Revision A, June 2010, Part Number 821-2063-xx.

    Oracle, Inc., SPARC T3-1 Servers Installation Guide, Revision A, Oct 2010, Part Number 821-2062-xx.

    Oracle, Inc., SPARC T3-2 Servers Installation Guide, Revision A, Sept 2010, Part Number 821-1960-xx.

    Oracle, Inc., SPARC T3-4 Servers Installation Guide, Revision A, Aug 2010, Part Number 821-2115-xx.

    Oracle, Inc., SPARC T3-1B Servers Installation Guide, Revision A, Aug 2010, Part Number 821-1916-xx.

    Oracle, Inc., SPARC T3-1 Servers Service Manual, Revision A, Oct 2010, Part Number 821-2064-xx.

    Oracle, Inc., SPARC T3-2 Servers Service Manual, Revision A, Aug 2010, Part Number 821-1961-xx.

    Oracle, Inc., SPARC T3-4 Servers Service Manual, Revision A, Aug 2010, Part Number 821-2116-xx.

    These references are part of the technical documentation for the SPARC T3 based server platforms.

  • Slide 5

    5

    Additional Resources


    Oracle, Inc., SPARC T3-1B Servers Service Manual, Revision A, Sept 2010, Part Number 821-1918-xx.

    Oracle, Inc., SPARC T3-1 Servers Product Notes, Revision A, Oct 2010, Part Number 821-2059-xx.

    Oracle, Inc., SPARC T3-2 Servers Product Notes, Revision A, Aug 2010, Part Number 821-1964-xx.

    Oracle, Inc., SPARC T3-4 Servers Product Notes, Revision A, Aug 2010, Part Number 821-2112-xx.

    Oracle, Inc., SPARC T3-1B Servers Product Notes, Revision A, Aug 2010, Part Number 821-1914-xx.

    Oracle, Inc., Oracle Integrated Lights Out Manager (ILOM) 3.0 Web Interface Procedures Guide, Revision A, May 2010, Part Number 820-6411-xx.

    Oracle, Inc., Oracle Integrated Lights Out Manager (ILOM) 3.0 CLI Procedures Guide, Revision A, May 2010, Part Number 821-6412-xx.

    Oracle, Inc., Oracle Integrated Lights Out Manager (ILOM) 3.0 SNMP and IPMI Procedures Guide, Revision A, May 2010, Part Number 820-6413-xx.

    These references complete the technical documentation for the SPARC T3 based server platforms.

  • Slide 6

    6

    Which are the Servers?

    SPARC T3-1 (Serpa)

    1 CPU socket

    SPARC T3-2 (Solana)

    2 CPU sockets

    SPARC T3-4 (Seville)

    4 CPU sockets

    SPARC T3-1B (Jumilla)

    1 CPU socket

    There are four servers that support the new SPARC T3 CPU chip. Oracle's SPARC T3-1 server, code named Serpa, is a 2U rack-mount server that supports one CPU socket. Oracle's SPARC T3-2 server, code named Solana, is a 3U rack-mount server that supports two CPU sockets. Oracle's SPARC T3-4 server, code named Seville, is a 5U rack-mount server that supports up to four CPU sockets. Oracle's SPARC T3-1B server, code named Jumilla, is a server blade that supports one CPU socket and is supported on the Sun Blade 6000 Modular System.

  • Slide 7

    7

    Which is the CPU chip?

    NOTE: The manner in which the DIMMs and BoBs are packaged is platform dependent.

    All four servers support the SPARC T3 CPU chip. Each CPU chip has 16 cores and 8 threads per core, for a total of 128 threads per chip, running off a 1.65 GHz clock. Each core has 16 KBytes of instruction cache, 8 KBytes of data cache, and a floating point and graphics unit along with a cryptographic accelerator. All the cores share 6 MBytes of level 2 cache that is accessed through the crossbar circuitry.

    The chip's memory interface consists of two memory controllers which support two memory channels each. External to the chip, the memory channels interface to the DDR3 DIMMs through memory buffers referred to as Buffer on Board chips, or BoBs. Each memory channel supports one BoB, and each BoB has two channels that support two DIMM slots each, for a total of 16 DIMM slots. Populating the DIMM slots with 8-Gbyte DIMMs gives you a maximum of 128 Gbytes of memory per CPU chip. Note, the manner in which the DIMMs and BoBs are packaged is platform dependent, so each packaging scheme is discussed with the individual platforms later in this presentation.
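As a quick sanity check, the per-chip figures quoted above multiply out as described (a minimal sketch; the 128-GB figure assumes all 16 slots hold 8-GB DIMMs):

```python
# Back-of-envelope check of the SPARC T3 chip figures quoted above.
cores = 16
threads_per_core = 8
threads_per_chip = cores * threads_per_core
assert threads_per_chip == 128

# Memory: 2 controllers x 2 channels, one BoB per channel,
# 2 BoB channels x 2 DIMM slots per BoB.
channels = 2 * 2
dimm_slots = channels * 1 * 2 * 2
assert dimm_slots == 16
assert dimm_slots * 8 == 128   # GB maximum with 8-GB DIMMs
```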

  • Slide 8

    8


    Which is the CPU chip?

    Two coherence units, each providing three links, are also available to provide shared memory paths out to other CPUs that may be on the server. In terms of I/O, the CPU chip provides two 8-lane PCIe generation 2 interconnects and two XAUI 10-Gigabit Ethernet interfaces. These interfaces are discussed in more detail later in this presentation.
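For context, the figures below come from the PCIe 2.0 specification, not from this module: each generation-2 lane runs at 5 GT/s with 8b/10b encoding, so an 8-lane interconnect carries roughly 4 GB/s per direction.

```python
# Per-direction bandwidth of one PCIe gen-2 x8 link.
# PCIe 2.0 spec figures: 5 GT/s per lane, 8b/10b encoding (80% efficiency).
gt_per_lane = 5.0
efficiency = 8 / 10
lanes = 8
gbytes_per_sec = gt_per_lane * efficiency * lanes / 8  # bits -> bytes
assert gbytes_per_sec == 4.0
```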

  • Slide 9

    9

    Which I/O is Supported?

    [Diagram: the CPU and memory subsystem feeds two partitionable PCIe2 switch chips — Switch 0, an IDT 48H12G2 (48 lane), and Switch 1, an IDT 64H16G2 (64 lane) — which fan out x1, x4, and x8 PCIe2 links.]

    There are several industry standard I/O interconnects and buses supported by the SPARC T3 based servers. The main I/O protocol used on the servers is PCIe generation 2, whose interconnects originate at the CPUs and are expanded within the server using two PCIe2 switch chips. Depending on the server, the two PCIe2 switch chips used can be 48 or 64 lane types, as displayed here, and can support up to 8-lane PCIe2 interconnects within the server architecture. The 4- and 8-lane interconnects mainly support PCIe2 ports, while other I/O devices are supported directly or indirectly through the PCIe2 interconnects.

  • Slide 10

    10

    Which I/O is Supported?

    [Diagram: PCIe2 Switch 0 (IDT 48H12G2, 48 lane, partitionable) and PCIe2 Switch 1 (IDT 64H16G2, 64 lane, partitionable) connect the CPU and memory subsystem to an LSI SAS 2008 SAS/SATA controller (RAID 0 and 1), a PCIe2-to-PCI bridge with a PCI-to-USB bridge, Intel 82576EB MAC/PHY chips, the REM (on SPARC T3-1B), and XAUI Ethernet adapter(s) providing 10-Gbit Ethernet ports.]

    The PCIe2 interconnects also support other interconnects and buses, through the use of bridge chips and daughter cards, such as the LSI controller chip, USB controller chips, the 1-Gbit Ethernet controller chip, and the RAID Expansion Module, or REM, board on the SPARC T3-1B blade server. The SPARC T3 CPUs also provide two XAUI interfaces that are used to provide 10-Gbit Ethernet ports through an Ethernet adapter slot. The number of ports and adapter slots is dependent on the server platform. Since the I/O infrastructure is different for each server platform, that discussion is deferred to the platform sections of this presentation.

  • Slide 11

    11

    Server Features Comparison


    Features               SPARC T3-1B (Jumilla)    SPARC T3-1 (Serpa)   SPARC T3-2 (Solana)   SPARC T3-4 (Seville)
    Size (RU)              1 slot blade             2RU                  3RU                   5RU
    Cores/Threads          16 / 128 (1 CPU)         16 / 128 (1 CPU)     32 / 256 (2 CPUs)     64 / 512 (4 CPUs)
    Memory (max)           128 GBytes               128 GBytes           256 GBytes            512 GBytes
    PCIe2 Slots            2 EM slots, 2 NEM ports  6                    10                    16
    1 GbE / 10 GbE ports   2 / 2                    4 / 2                4 / 4                 4 / 8
    HDD or SSD (max)       4                        16                   6                     8
    LDoms (recommended)    16                       32                   32                    64
    Service Processor      ILOM 3.0                 ILOM 3.0             ILOM 3.0              ILOM 3.0

    A summary of the server features is displayed in this table for a side-by-side comparison. A copy of this table is available in PDF format; you can download it by selecting the Server Features Comparison entry under the attachment tab.
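The comparison figures above can be captured as a small data structure for sanity checking (values transcribed from the slide); note that threads and maximum memory both scale linearly with the CPU socket count:

```python
# The server features comparison as a dictionary.
servers = {
    "SPARC T3-1B": {"cpus": 1, "threads": 128, "max_mem_gb": 128},
    "SPARC T3-1":  {"cpus": 1, "threads": 128, "max_mem_gb": 128},
    "SPARC T3-2":  {"cpus": 2, "threads": 256, "max_mem_gb": 256},
    "SPARC T3-4":  {"cpus": 4, "threads": 512, "max_mem_gb": 512},
}
for name, spec in servers.items():
    # 128 threads and 128 GB per SPARC T3 socket, times the socket count.
    assert spec["threads"] == 128 * spec["cpus"], name
    assert spec["max_mem_gb"] == 128 * spec["cpus"], name
```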

  • Slide 12

    12

    Specific Server Architecture and Features

    Jumilla

    Serpa

    Solana

    SPARC T3-1B

    SPARC T3-1

    SPARC T3-2

    SPARC T3-4

    Seville


    Now you have a choice to make. Select one of the four arrow icons. This will branch you to that specific server's architecture and features. Note, you will be given the option to cover all of the other servers' information later on.

  • Slide 13

    13

    Front of the SPARC T3-1 Server

    [Diagram: front of the SPARC T3-1 server — 16 numbered disk slots (0-15); CD/DVD drive and (2) USB ports; system indicators: Locator LED (white)/Button, Service Action Required LED (amber), Power/OK LED (green), Power Button; fault indicators: Power Supply Fault LED (amber), Temperature Threshold LED (amber), Fan Fault LED (amber); disk slots label showing the 8-disk configuration (slots 0-7).]

    The SPARC T3-1 server's front panel has the following features. On the top left are the system indicators, which consist of the white locator LED and button assembly, used as a beacon to locate the server within a data center; an amber service action required LED that is on when a hardware failure has occurred within the server; a green power OK LED that is used to determine the power state of the server; and the power button that is used to turn on the power to the server's host. There are 16 disk drive slots available on the SPARC T3-1 server that can support either 8 or 16 SAS or SATA hard disk drives or solid state disk drives, known as HDDs or SSDs. The numbers represent their slot numbers as well as their SAS or SATA IDs. On the right of the faceplate, above disk slots 10 and 13, you have the CD/DVD drive along with USB ports 2 and 3. On the far right are the fault indicators and the disk drive labels. The fault indicators consist of the amber power supply fault LED, the amber temperature threshold LED, and the amber fan fault LED.

  • Slide 14

    14

    Rear of the SPARC T3-1 Server

    [Diagram: rear of the SPARC T3-1 server — two power supplies (1+1 redundancy, hot swap, 1100 watts @ 110 VAC or 1200 watts @ 220 VAC, Gold+ efficiency rating); six PCIe2 slots (0-5), with slots 0 and 3 supporting 10-GbE XAUI; NET MGT and SER MGT ports; duplicate system indicators; (2) USB ports; video port; 1-GbE ports NET0-NET3.]

    The SPARC T3-1 server's rear panel has the following features. On the left are two 1+1 redundant hot-swappable power supplies that run on 110 or 220 VAC and have a Gold+ efficiency rating. At 110 VAC each generates up to 1100 watts of power, and at 220 VAC up to 1200 watts. There are six slots that support PCIe2 adapters, with PCIe2 slots 0 and 3 also able to support XAUI adapters. Below PCIe2 slot 0 are a serial and a network management port, which are connected to the service processor, and a duplicate set of system indicators, the same indicators covered on the front faceplate. To the right of the network management port are four 1-Gbit Ethernet ports, numbered 0 to 3 left to right, that are used by the server host. To the right of these ports are USB ports 0 and 1 and a video port. These ports can be used to connect a console keyboard, mouse, and monitor.

  • Slide 15

    15

    Inside the SPARC T3-1 Server

    [Diagram: inside the SPARC T3-1 server — three PCIe2 risers holding adapter slots 0 and 3, 1 and 4, and 2 and 5; power supply; fan assemblies; four memory banks around the CPU; disk backplane and slots; SAS/SATA cable harnesses.]

    The key components of the SPARC T3-1 server include the CPU, its memory, and the PCIe2 adapter cards within the PCIe2 risers. The other components are the fan assemblies, the power supply subsystem, and the disk backplane and slots along with the SAS/SATA cabling harness. To display a list of FRUs and CRUs, click on the SPARC T3-1 FRUs and CRUs entry within the attachment tab. For a closer look at the inside of the server, select the SPARC T3-1 3-D entry from the attachment tab. Using the control panel that comes with it, you can see the server in any 360-degree orientation, zoom in and out on components, and remove components to view other components within the enclosure.

  • Slide 16

    16

    SPARC T3-1 CPU and Memory

    [Diagram: SPARC T3-1 CPU and memory — the SPARC T3 chip drives four BoBs (channels A-D), each BoB serving four DIMM slots. Densities: 2-, 4-, or 8-Gbyte registered DIMMs. Speed: 1066 MT/sec.]

    The SPARC T3-1 server has four memory channels, each supporting a Buffer on Board, or BoB. Each BoB has two channels that support two DIMM slots. Each DIMM slot supports either a 2-, 4-, or 8-Gbyte registered DIMM that operates at 1066 Megatransfers per second. Note, the population and replacement rules will be covered in the Installation and Replacement module of this course.
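Assuming all 16 slots are populated at a single density (a simplification; the real population rules are in the Installation and Replacement module), the capacities work out as:

```python
# SPARC T3-1 DIMM slots: 4 BoBs x 2 BoB channels x 2 slots per channel.
slots = 4 * 2 * 2
assert slots == 16

# Total capacity at each supported single-density population.
capacity_gb = {density: slots * density for density in (2, 4, 8)}
assert capacity_gb == {2: 32, 4: 64, 8: 128}
# 8-GB DIMMs give the 128-GB maximum quoted in the features comparison.
```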

  • Slide 17

    17

    SPARC T3-1 I/O Subsystem

    [Diagram: SPARC T3-1 I/O subsystem — the SPARC T3 CPU and its memory subsystem feed PCIe2 Switch 0 (48 ports) and PCIe2 Switch 1 (64 ports) over two x8 links. The switches serve PCIe2 ports 0-5 (x8 electrically, ports 0-2 x16 physically), two LSI SAS2008 SAS/SATA controllers (RAID 0, 1, 0/1) reaching disks 0-15 through SAS/SATA expanders plus the SATA DVD drive, two Intel 82576EB MAC/PHY chips for the quad 1-GbE ports over x4 links, a PLX PEX8112 PCIe2/PCI bridge with an NEC uPD720101 PCI/USB bridge and hubs for the front, rear, and internal USBs, and two x1 links to the service processor (SER MGT via MAX3241, NET MGT via BCM5221, and video). The CPU's two XAUI interfaces go to slots 0 and 3.]

    The I/O subsystem of the SPARC T3-1 server originates directly from the SPARC T3 CPU through two PCIe2 8-lane interconnects that interface to a 48 port and a 64 port PCIe2 switch. Each PCIe2 switch supports three PCIe2 slots. All six slots support 8-lane PCIe2 adapters. Slots 0, 1, and 2 can also support 16-lane PCIe2 adapters physically, but not electrically. Also, slots 0 and 3 are connected to the XAUI interfaces of the CPU chip and can support 10-Gigabit Ethernet ports. Other PCIe2 8-lane interconnects support the internal SAS or SATA disk drives through two LSI SAS/SATA controllers. Depending on the disk backplane and the cable harness used, either 8 or 16 disks can be supported through a primary and a secondary path. One of the controllers also supports the CD/DVD drive. The 64 port PCIe2 switch provides two 4-lane PCIe2 interconnects that support two Ethernet controllers hosting the four 1-Gbit Ethernet ports located on the rear of the server. The 64 port PCIe2 switch also supports the server's five USB ports and the service processor's interface to the CPU through two separate one-lane PCIe2 interconnects.

  • Slide 18

    18

    Front of the SPARC T3-2 Server

    [Diagram: front of the SPARC T3-2 server — fan baffle; system indicators and buttons: Locator LED/Button assembly (white), Service Action Required LED (amber), OK/Power LED (green), Power button; fault indicators for SP, fan, CPU, memory, power supplies, and temperature; USB ports 2 and 3 and video port; CD/DVD drive; disk slots 0-5 for (6) SAS/SATA disks.]

    The SPARC T3-2 server's front panel has the following features. On the top left are the system indicators, which consist of the white locator LED and button assembly, used as a beacon to locate the server within a data center; an amber service action required LED that is on when a hardware failure has occurred within the server; a green power OK LED that is used to determine the power state of the server; and the power button that is used to turn on the power to the server's host. For hardware failure isolation there is an amber fault LED for the service processor, fan module, CPU, memory, power supplies, and temperature threshold. There are six disk drive slots available on the SPARC T3-2 server that can support SAS or SATA hard disk drives or solid state disk drives, better known as HDDs and SSDs. The numbers represent their slot numbers as well as their SAS or SATA IDs. On the bottom right of the faceplate you have the CD/DVD drive, while on the bottom left are USB ports 2 and 3 and the video port.

  • Slide 19

    19

    Rear of the SPARC T3-2 Server

    [Diagram: rear of the SPARC T3-2 server — power supplies (N+1 redundancy, hot swap, 2000 watts @ 220 VAC); duplicate system indicators; ten PCIe2 slots (0-9); Network Module (NM) slot for a Quad 10-GbE adapter; SER MGT, NET MGT, and video ports; quad 1-GbE ports NET0-NET3; USB ports 0 and 1.]

    The SPARC T3-2 server's rear panel has the following features. On the left are two N+1 redundant hot-swappable power supplies that run on 220 VAC, each of which can generate up to 2000 watts of power. The ten slots support PCIe2 adapters. Every slot supports 8-lane adapters except for slots 4 and 5, which support 4-lane PCIe adapters. A dedicated slot, referred to as the network module slot, or NM, is able to support a Quad 10-Gigabit Ethernet adapter. Between the power supplies and PCIe2 slot 0 is a duplicate set of the indicators covered on the front faceplate. A serial management port, a network management port, and a video port, which are connected to the service processor, are located between the NM slot and PCIe2 slot 5. Below the management ports are four host-based 1-Gigabit Ethernet ports and USB ports 0 and 1. The video port and two USB ports can be used to connect a monitor, keyboard, and mouse for the system console when needed.

  • Slide 20

    20

    Inside of the SPARC T3-2 Server

    [Diagram: inside the SPARC T3-2 server — fan modules; power supply; PCIe2 adapter slots and NM slot; disk drive slots; CPU0 and CPU1, each with Memory Riser Boards 0 and 1 plus fillers in the open board slots.]

    The key components of the SPARC T3-2 server include two CPUs and up to two memory riser boards per CPU. Notice that the open board slots need a filler to avoid disruption of the cooling paths. The PCIe2 adapter slots and the network module slot are located towards the rear of the motherboard. The disk drive slots and power supplies are located on the right side of the server chassis. To the right of the memory riser boards are the fan assemblies that provide the cooling for all of the server's internal components. To display a list of FRUs and CRUs, click on the SPARC T3-2 FRUs and CRUs entry within the attachment tab. For a closer look at the inside of the server, select the SPARC T3-2 3-D entry from the attachment tab. Using the control panel that comes with it, you can see the server in any 360-degree orientation, zoom in and out on components, and remove components to view other components within the enclosure.

  • Slide 21

    21

    SPARC T3-2 CPU and Memory

    [Diagram: SPARC T3-2 CPU and memory — CPU0 and CPU1, each with memory channels A-D feeding four BoBs packaged on Memory Riser Boards 0 and 1 (two BoBs and eight DIMM slots per riser). Densities: 2-, 4-, or 8-Gbyte registered DIMMs. Speed: 1066 MT/sec.]

    The SPARC T3-2 server has four memory channels per CPU, with each channel supporting a Buffer on Board, or BoB. Each BoB has two channels that support two DIMM slots. The memory subsystem for each CPU is packaged on two memory riser cards, each supporting two BoBs and eight DIMM slots. Each DIMM slot supports either a 2-, 4-, or 8-Gbyte registered DIMM that operates at 1066 Megatransfers per second. Note, the population and replacement rules will be covered in the Installation and Replacement module.
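Following the packaging just described, the slot count and maximum capacity multiply out to the comparison-table figures:

```python
# SPARC T3-2 DIMM slots: 2 CPUs x 2 riser boards x 2 BoBs x 4 DIMM slots.
cpus = 2
risers_per_cpu = 2
bobs_per_riser = 2
dimms_per_bob = 4          # two BoB channels, two slots each
slots = cpus * risers_per_cpu * bobs_per_riser * dimms_per_bob
assert slots == 32
assert slots * 8 == 256    # GB maximum with 8-GB registered DIMMs
```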

  • Slide 22

    22

    SPARC T3-2 I/O Subsystem

    [Diagram: SPARC T3-2 I/O subsystem — CPU0 and CPU1, each with its own memory subsystem, connect over four x8 links to two 64-port PCIe2 switches, so each CPU reaches both switches. The switches serve PCIe ports 0-9 (ports 4 and 5 x4, the rest x8), an LSI SAS2008 SAS/SATA controller (RAID 0, 1, 0/1) reaching disks 0-5 through a SAS/SATA expander plus the SATA DVD drive, two Intel 82576EB MAC/PHY chips for the quad 1-GbE ports over x4 links, a PLX PEX8112 PCIe/PCI bridge with an NEC uPD720101 PCI/USB bridge and hubs for the front, rear, and internal USBs, and two x1 links to the service processor (SER MGT via MAX3241, NET MGT via BCM5221, and video). The CPUs' four XAUI interfaces feed the NM Quad XAUI slot.]

    The I/O subsystem of the SPARC T3-2 server originates directly from the SPARC T3 CPUs through four PCIe2 8-lane interconnects that interface to the two 64 port PCIe2 switches, creating I/O paths from each CPU to both switches. Each PCIe2 switch supports five PCIe2 slots. Eight of the ten slots support 8-lane PCIe2 adapters, while slots 4 and 5 support 4-lane PCIe2 adapters. Also, the XAUI interfaces of the CPU chips are connected to a network module slot that supports a 10-Gigabit Ethernet adapter with up to four ports. Another PCIe2 8-lane interconnect supports the six internal SAS or SATA disk drives through an LSI SAS/SATA controller using a primary and a secondary path to each device. The LSI controller also supports the SATA CD/DVD drive. One of the 64 port PCIe2 switches provides two 4-lane PCIe2 interconnects that support two Ethernet controllers hosting the four 1-Gbit Ethernet ports located on the rear of the server. The same 64 port PCIe2 switch also supports the server's five USB ports and the service processor's interface to the CPU through two separate one-lane PCIe2 interconnects.

  • Slide 23

    23

    Front of the SPARC T3-4 Server

    [Diagram: front of the SPARC T3-4 server — processor modules PM0 (CPU 0 & 1) and PM1 (CPU 2 & 3); A. system indicators and buttons: Locator LED/Button assembly (white), Service Action Required LED (amber), OK/Power LED (green), Power button, and fault indicators for temperature, fan, and EMs; B. storage: (8) SAS/SATA disks in slots 0-7; C. ports: video, NET MGT, (2) USB; D. power supplies 0-3: 2+2 redundancy, 200-240 VAC/unit, 2000 watts/unit.]

    The unique feature of the SPARC T3-4 server is that all its components are accessible from either the front or the rear of the unit. The SPARC T3-4 server's front panel features are displayed. The CPUs and memory are packaged on two processor modules that contain two CPU sockets and 32 DIMM sockets each. On the left are the system indicators, which consist of the white locator LED and button assembly, used as a beacon to locate the server within a data center; an amber service action required LED that is on when a hardware failure has occurred within the server; a green power OK LED that is used to determine the power state of the server; and the power button that is used to turn on the power to the server's host. For hardware failure isolation there is an amber fault LED for the temperature threshold, fan module, and Express Modules. Note, each system module has its own set of indicators for further isolation. There are eight disk drive slots available on the SPARC T3-4 server's main module that support SAS or SATA hard disk drives or solid state disk drives, known as HDDs and SSDs. The numbers represent their slot numbers as well as their SAS or SATA IDs. To the left of the main module are the video port, network management port, and the two USB ports. Finally, on the bottom of the server chassis are the four power supply slots. When populated, the server supports a 2+2 power supply redundancy with individual supplies that run at 200 to 240 VAC and can generate up to 2000 watts of power.
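One implication of the 2+2 arrangement just described (an inference from the stated figures, not a number quoted in this module): the redundant power budget is whatever two supplies can deliver on their own.

```python
# 2+2 redundancy: four 2000-W supplies installed, but the load must be
# carried by two of them if the other two are lost.
watts_per_supply = 2000
supplies_required = 2               # the first "2" in 2+2
redundant_budget_w = supplies_required * watts_per_supply
assert redundant_budget_w == 4000
```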

  • Slide 24

    24

    Rear of the SPARC T3-4 Server

    [Diagram: rear of the SPARC T3-4 server — the 16 EM slots mapped to their CPUs, with PM0 CPU0, PM1 CPU2, PM0 CPU1, and PM1 CPU3 each directly serving a group of slots.]

    The SPARC T3-4 server's rear panel has the following features. The 16 slots that support PCIe2 adapters are referred to as Express Modules, or EMs. Assuming both processor modules are installed, the EM slots are directly accessible by the individual CPUs, as you see here.

  • Slide 25

    25

    Rear of the SPARC T3-4 Server

    [Diagram: rear of the SPARC T3-4 server with PM1 absent — PM0 CPU0 and PM0 CPU1 covering the EM slots, with the fan assemblies (0-4) on top and the AC power connectors (0-3) on the left.]

    If processor module 1 is not present and a PM filler is in its place, the two remaining CPUs from PM0 directly access the EM slots, as you see here. So, for proper operation of the server, PM0 needs to be present in either a two or a four CPU configuration. On the top are the fan assemblies that allow for the front-to-rear air flow that cools the internal components of the server. On the left of the chassis are the four AC power connectors that provide power to the four power supplies in the front of the chassis.

  • Slide 26

    26

    Rear of the SPARC T3-4 Server

    [Diagram: rear of the SPARC T3-4 server — the rear I/O (RIO) module with system indicators, 10-GbE ports 0-3 and 4-7, SER MGT, NET MGT, and 1-GbE ports NET 0-3.]

    To the right of the AC connectors is the rear I/O module, referred to as the RIO, which contains the system indicators and ports. The system indicators are a duplicate of the ones on the front chassis. The RIO also supports 10-Gigabit Ethernet ports, which originate from the CPUs, through QSFP connections. It also supports 1-Gigabit Ethernet ports, which originate on the onboard Kawela controller chips and are controlled by the service processor. The other two RJ-45 connectors support the serial and network management interfaces to the service processor, or SP. To display a list of FRUs and CRUs, click on the SPARC T3-4 FRUs and CRUs entry within the attachment tab. For a closer look at the inside of the server, select the SPARC T3-4 3-D entry from the attachment tab. Using the control panel that comes with it, you can see the server in any 360-degree orientation, zoom in and out on components, and remove components to view other components within the enclosure.

  • Slide 27

    27

    Inside of the SPARC T3-4 PM

    [Diagram: inside a SPARC T3-4 processor module — CPU0 and CPU1, each flanked by four banks of its own memory.]

    The key components of the SPARC T3-4 processor module include two CPUs and their corresponding memory, as you can see here.

  • Slide 28

    28

    Inside of the SPARC T3-4 Main Module

    [Diagram: inside the SPARC T3-4 main module — the service processor (SP), REM0 and REM1, and two banks of disks.]

    The key components of the SPARC T3-4 main module include two RAID Expansion Modules, eight disks, and the service processor, as you can see here.

  • Slide 29

    29

    SPARC T3-4 Processor Module 0

    [Diagram: SPARC T3-4 Processor Module 0 — CPU0 and CPU1, each with memory channels A-D feeding four BoBs, each BoB serving four DIMM sockets.]

    The SPARC T3-4 server has two identical processor modules. PM0 contains two CPU chips, CPU0 and CPU1, with four memory channels per CPU and each channel supporting a Buffer on Board, or BoB. Each BoB has two channels that support two DIMM sockets, for a total of 32 DIMM sockets within PM0.

  • Slide 30

    30

    SPARC T3-4 Processor Module 1

    [Diagram: SPARC T3-4 Processor Module 1 — CPU2 and CPU3, each with memory channels A-D feeding four BoBs, each BoB serving four DIMM sockets. Densities: 2-, 4-, or 8-Gbyte registered DIMMs. Speed: 1066 MT/sec.]

    PM1 contains CPU2 and CPU3 along with 32 DIMM sockets. Each DIMM socket supports either a 2-, 4-, or 8-Gbyte registered DIMM with speeds of up to 1066 Megatransfers per second. Note, the population and replacement rules will be covered in the Installation and Replacement module.
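With both processor modules populated, the socket count multiplies out to the 512-GB maximum from the features comparison (assuming 8-GB DIMMs throughout):

```python
# SPARC T3-4 maximum memory: two processor modules, 32 DIMM sockets each.
pms = 2
sockets_per_pm = 32
sockets = pms * sockets_per_pm
assert sockets == 64
assert sockets * 8 == 512    # GB with 8-GB registered DIMMs
```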

  • Slide 31

    31

    SPARC T3-4 CPU and I/O

    [Diagram: PM0 (CPU0, CPU1) and PM1 (CPU2, CPU3), each CPU with its own memory, cross-connected to PCIe2 Switches 0-3 so that every PM reaches all four switches, with coherency links between all CPUs.]

    The connection scheme between the CPUs and the I/O was designed to get the most redundancy in the system. Notice that each PM has access to all four PCIe2 switches and thus has access to the complete I/O subsystem. Each CPU also has a coherency link to each of the other CPUs, as you see here.

  • Slide 32

    32

    SPARC T3-4 CPU and I/O

[Block diagram: PM0 (CPU0, CPU1) connected to all four PCIe2 switches, with the PM1 slot occupied by a filler module]

So if PM1 is removed and replaced with a filler module, PM0 still has access to the complete I/O subsystem. The filler module maintains air flow to the other components in the server, but it also provides a pass-thru connection for the coherency link between CPU0 and CPU1 on PM0.

  • Slide 33

    33

    SPARC T3-4 CPU Main I/O Board

[Block diagram: SPARC T3-4 main I/O board — the four PCIe2 switches feed sixteen x8 Express Module slots (Switch 0: slots 2, 5, 7; Switch 1: slots 8, 10, 12, 13, 15; Switch 2: slots 0, 1, 3, 4, 6; Switch 3: slots 9, 11, 14). Switches 0 and 3 each drive a REM (REM0, REM1) over PCIe2 x8, each REM supporting a four-slot HDD backplane (HDD 0-3, HDD 4-7). A PCIe2 x1 link runs through a PLX PEX8112 PCIe/PCI bridge and an NEC uPD720101 PCI/USB bridge to USB hubs for the front-side (FIO), rear-side (RIO) and internal USB ports. Two Intel 82576EB (Kawela) MAC/PHYs on PCIe2 x4 links provide the quad 1-GbE ports; another PCIe2 x1 link serves the service processor (SERMGT via MAX3241, NETMGT via BCM5221, video). XAUI interfaces from CPU0 through CPU3 drive the eight 10-GbE ports on the 10-Gbit Network Board]

The four 64-port PCIe2 switches support the 16 eight-lane PCIe2 slots, which are referred to as Express Modules. PCIe2 switches 0 and 3 each connect to a RAID Expansion Module. Each REM supports a four-slot backplane that can house hard disk drives or solid state drives, referred to as HDDs and SSDs. PCIe2 switch 0 also supports five USB connectors through a one-lane PCIe2 connection: two in the front I/O assembly, two on the rear I/O board and one on the main I/O module. The four 1-Gigabit Ethernet ports on the RIO are supported by two Kawela Ethernet controllers connected through two four-lane PCIe2 interconnects, while a one-lane PCIe2 interconnect supports the service processor. There are also eight 10-Gigabit Ethernet ports, supported directly by the XAUI interfaces on each of the CPUs, that reside on the 10-Gigabit Ethernet Network Board.
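The fan-out from switches to Express Module slots can be captured as a small lookup table. The mapping below is read off the slide's block diagram and should be treated as illustrative rather than authoritative; note it is consistent with the diagram's "(5)" and "(3)" slot-group labels:

```python
from collections import Counter

# Express Module slot -> PCIe2 switch, as read from the slide diagram.
SLOT_TO_SWITCH = {
    0: 2, 1: 2, 2: 0, 3: 2, 4: 2, 5: 0, 6: 2, 7: 0,
    8: 1, 9: 3, 10: 1, 11: 3, 12: 1, 13: 1, 14: 3, 15: 1,
}

per_switch = Counter(SLOT_TO_SWITCH.values())
# Switches 1 and 2 each serve five slots; switches 0 and 3 serve three,
# matching the "(5)" and "(3)" group labels in the diagram.
print(sorted(per_switch.items()))  # [(0, 3), (1, 5), (2, 5), (3, 3)]
```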

  • Slide 34

    34

    Front of the SPARC T3-1B

[Photo: front of the SPARC T3-1B — disk slots 0 through 3 and the Universal Connector Port carrying two USB ports, video and the serial management port via the dongle cable, P/N 530-3936, option #4622A]

System indicators and buttons:

- Locator LED/button assembly (white)
- Ready-to-Remove LED (blue)
- Service Action Required LED (amber)
- OK/Power LED (green)
- Power button
- Non-maskable Interrupt (NMI) button

The SPARC T3-1B server blade is supported on the Sun Blade 6000 chassis. As with all the Sun Blade 6000 server blades, it contains some standard features. The system indicators consist of the white locator LED and button assembly, used as a beacon to locate the server within a data center; a blue ready-to-remove LED that is lit when the server blade is ready to be removed from its slot; an amber service action required LED that is lit when a hardware failure has occurred within the server blade; a green power OK LED that indicates the power state of the server; and a power button that is used to turn on power to the server's host. There is also a non-maskable interrupt button that can be used to send a non-maskable interrupt to the CPU. The server blade has four hard disk drive or solid state drive slots, referred to as HDDs and SSDs. All the server blade's ports come from the Universal Connector Port, which requires an octopus cable, referred to as a dongle cable, to access its serial management port, its two USB ports and the video port.

  • Slide 35

    35

Inside of the SPARC T3-1B

[Photo: inside of the SPARC T3-1B — CPU surrounded by memory banks, with the REM, FEM, service processor (SP), and connectors for disk slots 0 & 1 and 2 & 3]

The key components of the SPARC T3-1B server blade include a single CPU and up to 16 DIMM slots. The RAID Expansion Module, or REM, supports the four disk slots at the front of the server blade. Another daughter board is the Fabric Expansion Module, or FEM, which supports the host's network ports through a Network Expansion Module, or NEM. This server blade also has a service processor, known as the SP, that can be easily replaced, as we will see later in this course. To display a list of FRUs and CRUs, click on the SPARC T3-1B FRUs and CRUs entry within the attachment tab. For a closer look at the inside of the server, select the SPARC T3-1B 3-D entry from the attachment tab. Using the control panel that comes with it, you can see the server in any 360-degree orientation, zoom in and out on components, and remove components to view other components within the enclosure.

  • Slide 36

    36

    SPARC T3-1B CPU and Memory

[Block diagram: SPARC T3-1B memory — SPARC T3 CPU with four channels A through D, each driving a BoB with four DIMM slots (two per BoB channel)]

Densities: 2, 4 or 8-Gbyte registered DIMMs; Speed: 1066 MT/sec

The SPARC T3-1B server blade has four memory channels, each supporting a Buffer on Board, or BoB. Each BoB has two channels, each supporting two DIMM slots. Each DIMM slot supports a 2, 4 or 8-Gbyte registered DIMM operating at 1066 megatransfers per second. Note, the population and replacement rules will be covered in the Installation and Replacement module of this course.
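The quoted 1066 MT/sec also implies a peak per-channel transfer rate. The sketch below assumes the standard 64-bit (8-byte) DDR3 data path per channel; it is a back-of-the-envelope figure, not an Oracle-published specification:

```python
# Peak theoretical transfer rate implied by 1066 MT/s per memory channel,
# assuming a standard 64-bit (8-byte) DDR3 data path.
MEGATRANSFERS_PER_SEC = 1066
BYTES_PER_TRANSFER = 8  # 64-bit channel width (assumption)

peak_mb_per_sec = MEGATRANSFERS_PER_SEC * BYTES_PER_TRANSFER
print(peak_mb_per_sec)  # 8528 MB/s per channel
```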

  • Slide 37

    37

    SPARC T3-1B I/O Subsystems

[Block diagram: SPARC T3-1B I/O — the SPARC T3 CPU drives two 48-port PCIe2 switches over two x8 links; each switch routes two x8 links (A0/A1, B0/B1) to the EM and NEM slots (EM0, EM1, NEM0, NEM1). Switch 0 also drives the REM (serving disk slots 0 through 3) and the onboard SAS2 controller over x8 links. Two CPU XAUI interfaces and an Intel 82576EB 1-GbE MAC/PHY (on a x4 link) feed NEM0 and NEM1. Switch 1 provides a x1 link through a PLX PEX8112 PCIe2/PCI bridge, an NEC uPD720101 PCI/USB bridge and a USB hub to the UCP and internal USB ports, plus a x1 link to the service processor (SERMGT via MAX3241, NETMGT via BCM5221, video)]

The I/O subsystem of the SPARC T3-1B server blade originates directly from the SPARC T3 CPU through two eight-lane PCIe2 interconnects that interface to two 48-port PCIe2 switches. Each PCIe2 switch provides two eight-lane interconnects that are routed to the NEMs and EMs, as you can see here. The CPU also provides two XAUI interfaces that are routed to the NEM slots to produce 10-Gigabit Ethernet ports. The onboard Ethernet controller generates 1-Gigabit Ethernet interfaces that are also routed to the NEM slots. PCIe2 switch 0 also supports the RAID Expansion Module, or REM, and the onboard LSI disk controller through individual eight-lane interconnects. The REM supports the four onboard disk slots that can house either hard disk drives or solid state drives, referred to as HDDs and SSDs. PCIe2 switch 1 supports three USB ports through a one-lane interconnect: two USBs through the UCP and one internally on the server motherboard. The service processor is also connected through a one-lane interconnect.
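The eight-lane PCIe2 links referred to throughout have a well-defined ceiling: PCIe 2.0 signals at 5 GT/s per lane with 8b/10b line coding, leaving 500 MB/s of payload per lane per direction. A quick sketch of that arithmetic:

```python
# Usable bandwidth of one PCIe Gen2 x8 interconnect, per direction.
GT_PER_SEC_PER_LANE = 5.0   # PCIe 2.0 signalling rate
ENCODING_EFFICIENCY = 0.8   # 8b/10b line code: 8 data bits per 10 line bits
LANES = 8
BITS_PER_BYTE = 8

gb_per_sec = GT_PER_SEC_PER_LANE * ENCODING_EFFICIENCY * LANES / BITS_PER_BYTE
print(gb_per_sec)  # 4.0 GB/s per direction for an x8 link
```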

  • Slide 38

    38

    Specific Server Architecture and Features

Return

You have just completed the specific architecture and features of one of the servers. If you want to cover another of the servers, click on the return icon. If not, click on the play button at the bottom of the screen.

  • Slide 39

    39

    Software Stack

NOTE: Solaris 10 Update 8 coupled with MU9 was added and is

now the minimum supported OS version.

The SPARC T3-1, T3-2, T3-4 and T3-1B servers run Solaris 10 and OpenSolaris. These servers released with a minimum of Solaris 10 Update 9 running on the sun4v architecture. As you can see, the software stack includes the management software, OBP and ILOM, along with the hypervisor, which covers the hardware platform and presents a virtual platform to the Solaris OS. ILOM runs as an application of an embedded Linux OS. Note, the IPMI software is embedded within ILOM, but you need the IPMITool user interface software to access its CLI; it comes with Solaris 10 in the /usr/sfw/bin directory. Also note that the Fault Management Architecture Event Transport Module is embedded within ILOM and assists in error data collection for the OS-level FMA. POST provides hardware host platform testing that verifies that the host system is capable of booting and running the Solaris OS. It runs on initial power-on of the server as well as after a system error. More information about this software stack will be covered throughout this course. Note, Solaris 10 Update 8 coupled with Maintenance Update 9 was added and is now the minimum supported OS version.

  • Slide 40


  • Slide 41

    41

    Summary

    Identify the features of the servers

    Explain the architecture of the servers

    List the servers FRUs and CRUs

Now that you have completed this module, you should be able to identify the features of the servers and explain their architecture. You should also be able to list their FRUs and CRUs. This completes this module.

  • Slide 42

    42

    Oracle is the Information Company

  • Slide 43

    43

    Oracle