Page 1

Optical Interconnect Opportunities in Supercomputers and High End Computing
OFC 2012 Tutorial – Category 14. Datacom, Computercom, and Short Range and Experimental Optical Networks (Tutorial)

March 2012

Alan Benner, [email protected], IBM Corp. – Sr. Technical Staff Member, Systems & Technology Group
InfiniBand Trade Assoc. – Chair, ElectroMechanical Working Group


OFC/NFOEC Technical Digest © 2012 OSA

©Optical Society of America

Page 2

GOALS OF THIS TUTORIAL

Review optical interconnect from a systems architecture point of view

Interconnect basics: What’s important, what’s not – future system needs

Data Centers: Infrastructure and Networking

HPC Systems / Supercomputer Systems

Review of some interesting research programs and progress

The rest of the decade – where are the challenges?

Page 3

High-End computing systems: Steady Exponential Performance Growth

System-level improvements will continue, at a faster-than-Moore's-law rate. System performance comes from aggregation of larger numbers of chips & boxes.

Bandwidth requirements must scale with the system, at roughly 0.5 B/FLOP (memory + network): receive an 8-byte word, do ~32 operations with it, then transmit it onward – 16 B per 32 operations. Actual BW requirements vary by application & algorithm by >10x; 0.5 B/FLOP is an average.

[Figure: system-level performance (log) vs. time (linear), from Top500 data (http://www.top500.org/blog/2011/11/19/38th_top500_list_slides_sc11_are_now_available). Trend lines: uniprocessor ~50% CAGR, slowing; transistors & packaging 15-20% CAGR, slowing; box 70-80% CAGR, continuing; cluster/parallel ~95-100% CAGR, continuing. Chip trend ~50-60% (2x/18 mo.), i.e. ~100x per decade; parallel-system trend ~95% (CPU trend + more parallelism), i.e. ~1,000x per decade, reaching Exa-scale around 2020. CAGR = Compound Annual Growth Rate. Note: Top500's Linpack needs only moderate network performance; similar trends & growth rates apply to data centers.]
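As a rough illustration of the 0.5 B/FLOP rule of thumb above, the sketch below converts a peak FLOP rate into the implied aggregate (memory + network) bandwidth. The 10 PF/s example system is an illustrative assumption, not a figure from the slides.

```python
# Back-of-envelope sketch (not from the slides' tooling): the aggregate
# bandwidth implied by the ~0.5 Byte/FLOP rule of thumb.

def required_bandwidth_bytes_per_s(peak_flops, bytes_per_flop=0.5):
    """Aggregate memory + network bandwidth implied by a Bytes/FLOP ratio."""
    return peak_flops * bytes_per_flop

peak = 10e15                         # 10 PF/s, illustrative system size
bw = required_bandwidth_bytes_per_s(peak)
print(f"{bw / 1e15:.1f} PB/s aggregate ({bw * 8 / 1e12:.0f} Tb/s)")
# -> 5.0 PB/s aggregate (40000 Tb/s)
```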

Page 4

Optical Interconnect - Basics

Page 5

The Landscape of Interconnect

PHYSICAL link types, distinguished by length & packaging (length; typical # lanes per link; use of optics):
Intra-chip: 0 mm - 20 mm; 1 - 100s; later
Intra-Module: 5 mm - 100 mm; 1 - 100s; after 2015
Intra-Card: 0.1 m - 0.3 m; 1 - 100s; 2012-2015
Backplane / Card-to-Card: 0.3 m - 1 m; 1 - 100s; since 2010-2011
Cables – Short: 1 m - 10 m; 1 - 10s; since the late 00s
Cables – Long: 10 m - 300 m; 1 - 10s; since the 90s
MAN & WAN: multi-km; 1; since the 80s

LOGICAL link types, distinguished by function & link protocol (traffic; standards; key characteristic; use of optics):
SMP Coherency Bus: load/store coherency ops to other CPUs' caches; Hyper-Transport; reliability, massive BW; coming
Memory Bus: load/store to DRAM or memory fanout chip; DDR3/2/...; reliability & cost vs. DRAM; coming later
Mezzanine Bus: load/store to hubs & bridges; Hyper-Transport; reliability; not yet
I/O: load/store to I/O adapters; PCI/PCIe; shared tech between servers & desktops; scattered
Direct Attach Storage: read/write to disk, unshared; SAS, SATA; shared tech between servers & desktops; not yet
Storage Area Network: read/write to disk, shared; Fibre Channel; dominated by FC; since the 90s
Cluster / Data Center: intra-application, or intra-distributed-application; InfiniBand, 1G Ethernet, 10/40/100G Ethernet; BW & latency to <60 meters; since the 2000s
Local Area Network: HTML pages to laptops, ...; 1G Ethernet, WiFi; 100-300 m over RJ-45 / CAT5 cabling, or wireless; maybe never? (wireless, building re-wiring, BW demand)
Internet (MAN & WAN): IP; Ethernet, ATM, SONET, ...; inter-operability with "everybody"; since the 80s

Link technology legend: single-mode optics / mixed multi-mode optics & copper / copper.

Page 6

The Landscape of Interconnect

(Same landscape table as page 5, repeated with the HPC- and data-center-specific region highlighted: the cluster/data-center and cable links, with BW & latency now quoted to <60-250 meters.)

HPC- and Data Center-Specific Optical Interconnect

Page 7

Optical vs. Electrical - Cost-Effectiveness Link Crossover Length

Qualitative Summary: At short distances, copper is less expensive. At longer distances, optics is cheaper

Expense is measured several ways (parts cost, design complexity, Watts, BW density, etc.). System design requires knowing the crossover length and using each technology where appropriate.

[Figure: link cost ($/Gbps, log scale) vs. Tx-Rx distance (0.001 m to 10,000 m), with copper and optical curves shown for ~2.5 Gb/s. Distance regimes: on-chip traces on a single chip, PCB traces on a circuit board, SAN/cluster cables in one room, LAN cables in walls, campus cables underground, MAN/WAN rented cables. Cost steps ($ to $$$$) mark the card-edge connectors, the optical transceiver, single-mode optics, and opening up walls for cabling; the curves cross at the O/E cost-effectiveness crossover length.]
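The crossover idea can be illustrated with a toy model: each technology has a fixed per-link cost plus a per-meter cost, and the crossover is where the curves meet. The parameter values below are invented placeholders for illustration only – the slide's curves are qualitative, and the real numbers depend on bit-rate, volume, and packaging.

```python
# Toy crossover model: fixed cost per link end plus cost per meter, for copper
# and optics. All parameter values are invented placeholders, not slide data.

def copper_cost(length_m, fixed=1.0, per_m=4.0):
    """Copper: cheap connectors, but $/Gbps grows quickly with reach."""
    return fixed + per_m * length_m

def optical_cost(length_m, fixed=12.0, per_m=0.3):
    """Optics: transceiver cost up front, nearly flat with distance."""
    return fixed + per_m * length_m

# Scan distances and report where optics first becomes cheaper.
for d in [x / 10 for x in range(1, 201)]:        # 0.1 m .. 20 m
    if optical_cost(d) < copper_cost(d):
        print(f"crossover near {d:.1f} m for these assumed parameters")
        break
```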

Page 8

Cost-Effectiveness Link Crossover Length – Dependence on bit-rate

Over time, copper & optical get cheaper at pretty much the same rate; the crossover length at a particular bit-rate has stayed pretty constant.

As bit-rates have risen, a higher percentage of overall interconnect has moved to optics. At 25 Gb/s, it appears that the crossover distance is ~2-3 m – copper only works in-rack.

[Figure: link cost ($/Gbps) vs. Tx-Rx distance and bandwidth, with copper and optical curves at 0.6, 2.5, 10, and 40 Gb/s over the same distance regimes as the previous figure (on-chip, PCB, SAN/cluster, LAN, campus, MAN/WAN). The O/E cost-effectiveness crossover length moves to shorter distances as the bit-rate rises.]

Page 9

Power Efficiency Study: Copper vs. Optical

Page 10

Power Efficiency Design Example: 16 PF Scale Cabling Options

Thought Experiment: Imagine a 2014 Top-10 system – say 16 PF – Using POWER7-775 System Design

A ~16 PF system will require various lengths of links:
<1 m: between 4 drawers of a SuperNode
1-3 m: between 8 SuperNodes in 3-rack Building Blocks
3-20 m: between "closely-spaced" Building Blocks (1/4 of other BBs in system)
20-50 m: between "far-spaced" Building Blocks (3/4 of other BBs in system)

A 16 PF POWER 775 / PERCS system would need many, many links.

What if we interconnected with copper vs. optical?

POWER7-775 / PERCS 16 PF system: number of links

Link length: <1 m (in a 4-drawer SuperNode) | 1-3 m (in a 3-rack Building Block) | 3-20 m (to 1/4 of system racks) | 20-50 m (to 3/4 of system racks)
Avg. # per drawer: 96.00 | 1.75 | 30.00 | 96.00
# drawers: 2,048 | 2,048 | 2,048 | 2,048
Total # of 120 Gbps transceivers: 196,608 | 3,584 | 61,440 | 196,608

Page 11

16 PF-Scale Cabling Options: 10GBASE-T

Imagine we cabled this with "normal" 10G Ethernet (if it fit physically). Power utilization: ~3 Watts per 10G PHY transceiver (300 mW/Gbps).

Inexpensive cables & connectors require high-power signal processing

At ~$1M per MWatt per year, with ~10-year machine life, 10GBase-T cabling would add >$165M in operating cost, above the machine cost

Link length: <1 m | 1-3 m | 3-20 m | 20-50 m
Avg. # per drawer: 96.00 | 1.75 | 30.00 | 96.00
# drawers: 2,048 | 2,048 | 2,048 | 2,048
Total # of 120G XCVRs: 196,608 | 3,584 | 61,440 | 196,608
Total # of 10GBase-T PHYs: 2,359,296 | 43,008 | 737,280 | 2,359,296
Power (Watts): 7,077,888 | 129,024 | 2,211,840 | 7,077,888
Total power: 16.5 MegaWatts ("Like this... except 67 times denser")

Page 12

16 PF-Scale Cabling Options: Optimized Copper

Imagine we cabled it with improved “Active copper cable”, which allows lower power (75-150 mW/Gbps)

Better twin-ax cables with active circuits *inside* good connectors reduce the signal processing required: 1.5 W per 20 Gb/s (<20 m), or 5 W (20-50 m) – i.e., 75-250 mW/Gbps, length-dependent. (...but it *still* won't fit – connectors & cables are too big.)

Active copper saves >$80M vs. passive copper in operating costs over 10 years.

Link length: <1 m | 1-3 m | 3-20 m | 20-50 m
Avg. # per drawer: 96.00 | 1.75 | 30.00 | 96.00
# drawers: 2,048 | 2,048 | 2,048 | 2,048
Total # of 120G XCVRs: 196,608 | 3,584 | 61,440 | 196,608
Total 20G active cable ends: 1,179,648 | 21,504 | 368,640 | 1,179,648
Cable power (Watts): 1,769,472 | 32,256 | 552,960 | 5,898,240
Total power: 8.3 MegaWatts

Page 13

16 PF-Scale Cabling Options: Optical

Optical interconnect allows lower power (25 mW/Gbps): VCSEL/MMF requires <3 W per 120 Gbps (length-independent).

10-year cost of electrical power: <$15M

The message: In comparison to “cheap” 10GBASE-T, optical interconnect saves roughly $150M in machine operating costs over 10 years.

*Plus* the connectors can actually fit in the system

Better interconnect saves money in other ways, too:
Cables are much smaller/lighter and easier to install and manage
Signal integrity is more predictable across all lengths of cables
Efficient server utilization, by moving jobs & data to where they execute most efficiently

Link length: <1 m | 1-3 m | 3-20 m | 20-50 m
Avg. # per drawer: 96.00 | 1.75 | 30.00 | 96.00
# drawers: 2,048 | 2,048 | 2,048 | 2,048
Total # of 120G XCVRs: 196,608 | 3,584 | 61,440 | 196,608
Cable power (Watts): 589,824 | 10,752 | 184,320 | 589,824
Total power: 1.4 MegaWatts
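The arithmetic behind the three cabling options on pages 10-13 can be reproduced directly from the link counts and per-part powers quoted in the tables; the only added inputs are the stated $1M per MW-year electricity cost and the ~10-year machine life.

```python
# Reproduces the cabling thought-experiment arithmetic from pages 10-13.
# Counts and per-part powers come from the slides.

XCVRS_120G = {"<1m": 196_608, "1-3m": 3_584, "3-20m": 61_440, "20-50m": 196_608}

def total_mw(watts_per_120g_link):
    return sum(XCVRS_120G[k] * watts_per_120g_link[k] for k in XCVRS_120G) / 1e6

# 10GBASE-T: 12 x 10G PHYs per 120G link, ~3 W per PHY (300 mW/Gbps).
p_10gbaset = total_mw({k: 12 * 3.0 for k in XCVRS_120G})
# Active copper: 6 x 20G cable ends per 120G link; 1.5 W each under 20 m, 5 W at 20-50 m.
p_active = total_mw({"<1m": 6 * 1.5, "1-3m": 6 * 1.5, "3-20m": 6 * 1.5, "20-50m": 6 * 5.0})
# Optical (VCSEL/MMF): ~3 W per 120 Gb/s transceiver, length-independent (25 mW/Gbps).
p_optical = total_mw({k: 3.0 for k in XCVRS_120G})

for name, mw in [("10GBASE-T", p_10gbaset), ("active copper", p_active), ("optical", p_optical)]:
    print(f"{name:13s} {mw:5.1f} MW, ~${mw * 10:.0f}M of electricity over 10 years")
# -> ~16.5 MW, ~8.3 MW, ~1.4 MW: optics saves roughly $150M vs. 10GBASE-T.
```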

Page 14

Data Center Networking

Page 15

Data Center Dynamics, 2011

Data Centers are growing in scale incredibly quickly:
1999 "large" data center: 5,000 ft²
2004 "large" data center: 50,000 ft²
2009 "large" data center: 500,000 ft²
2011 (started): IBM/Range Technology data center in China (near Beijing): ~624,000 ft²

Power & cooling requirements are growing nearly as fast:
2001: 1-2 supercomputer centers in the world needed 10 MW of power
2011: dozens of 10 MW data centers worldwide; US Gov't planning 60 & 65 MW data centers

Power efficiency at all levels is critical: electrical power is the major ongoing cost for data centers.

Note: Moore’s law doesn’t apply to power and cooling – but there are efficiencies to be had

Page 16

Facebook Data Center in the Oregon Desert

Building-scale engineering required to support large-scale machines

MIXING: Dampers let dry desert air into the facility's penthouse level. In the winter months, when the outside air is very cold, warm return air can be mixed in.

FILTERING: Air passes through filters to stop desert particles and insects from entering the system.

MISTING: Bacteria are killed and minerals removed in the facility's water treatment area. The treated water is then sprayed as a fine mist into the air. Evaporative cooling ensues, cooling the air to between 65° and 80°F. A relative humidity of 35-65% is reached, eliminating problems of static electricity. Filters keep water droplets from entering the system.

MOVING: Energy-efficient 5-horsepower centrifugal fans move the cool air through air shafts down to the server floor, where the air travels through the open servers that are stacked on racks. Each rack holds 90 servers.

POWER CONVERSION: Conventional data centers convert power a number of times before it's used, and each conversion results in a loss of power. The custom servers run at a higher voltage and so can use power straight from the grid. First, power travels to a custom-fabricated reactor power panel (where irregularities are removed) and then to the servers themselves.

BATTERIES: The UPS system is a standby system. In the case of a power failure, batteries will provide 45 seconds of power to the servers until generators kick in.

OPEN CASING: Servers were designed without a cover to allow the air to freely pass through and cool the circuitry.

FANS: The servers were designed with bigger fans that use less energy.

REMOVING: Exhaust fans remove the server return air (typically about 95°).

http://www.oregonlive.com/business/index.ssf/high_tech_meets_high-desert.html

Page 17

Raleigh Leadership Data Center

Data center design reflects key strategies: flexibility for growth for 20-30 years while IT equipment changes every 3-5 years; integrated management of IT and data center infrastructure; energy-efficient power & cooling systems (LEED Gold) with full redundancy.

http://www-935.ibm.com/services/us/cio/smarterdc/rtp_popup.html

Page 18

Piping to support water cooling

3 MW electrical switchgear from two independent sources

Raleigh Leadership Data Center – Equipment & photos

Modern data center infrastructure is heavy-duty industrial-scale factory-style equipment

Two-cell 1300 ton cooling tower with variable speed fans

1300-ton centrifugal chiller with variable speed drive

60,000 sq. ft. of IT raised floor space

3 MW of wet cell battery capacity for 15 minutes of backup

Three 50,000 gallon thermal storage tanks

Water side economizer for 3900 hrs/yr of free cooling

Six 2.5MW diesel generators for emergency power

Page 19

Data Center Networking – A few key observations

Improved DC networks are radically changing how data center apps run: Old style: “North / South” traffic: Each server handles 1 app for N desktop clients

Packets flowing into a data center go to specific servers, which send packets back out.

New style: "East / West" traffic: N servers handle M apps as a virtualized pool for N clients. Packets flowing into a data center get flexibly directed to one of many servers, which generate *many* more server-to-server packets, and some packets go back out.

BW constraints (and *manageability* of traffic) still limit flexibility.

Energy-efficient links are key – but higher-performance networks are more important. High-BW links allow flexible placement of jobs & data; high server utilization is the key benefit.

Data Center Cost Distribution (credit: Ken Brill, The Uptime Institute): Hardware 39%, Software 33%, IT 11%, UPS 5%, Power 5%, Bldg. 4%, Cooling 2%, Network 1%. Hmm...

Page 20

InfiniBand

Page 21

InfiniBand Link Bandwidth Roadmap

56G-IB-FDR is shipping now – HCAs, switches, passive & active (copper & optical) cables; interoperability tested at the Fall 2011 Plugfest.

104G-IB-EDR expected in early 2013 – some cables demo’d already

Page 22

Brian Sparks IBTA Marketing Working Group Co-Chair

InfiniBand System Efficiency

Page 23

Top500: Impact of Interconnect on System Scaling

Left: Analysis of Top500 systems in terms of interconnect family. The majority of processing power is interconnected with InfiniBand. 2011: custom & proprietary interconnects grew greatly – greater system-level requirements.

Right: Impact of interconnect on system cost/performance. Switching from Gigabit Ethernet to InfiniBand allows either 65% fewer servers, or 65% better performance with the same system size (on the Linpack benchmark).

Interconnect Family Top500 Treemap – Performance (Nov. 2011): InfiniBand 39%, Gigabit Ethernet 19%, Proprietary 13%, Cray 3.5%, Custom 24%

[Figure: Linpack Rmax vs. core count, Nov. 2011 Top500 data, for Gigabit Ethernet, 10G Ethernet, IB-QDR/FDR Xeon, Blue Gene/P, IB-Power6/Power7, and PowerFabric-P7IH systems. Annotations: GigE: 9.2K cores, 55.6 TF; IB: 5.6K cores, 56 TF; IB: 9.2K cores, 92 TF; 5-10x higher system scalability with IB vs. GigE; comparing systems with Xeon CPUs, IB vs. Gigabit Ethernet gives ~65% more performance per core; IB-Power6 has faster cores (4.7 GHz); 14 of 500 systems on the 11/2011 list use 10GE; Top500 performance threshold, Nov. 2011: 50.0 TF.]
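A quick check of the per-core comparison quoted in the annotations, using the two Xeon data points cited on this slide:

```python
# Per-core comparison from the two Nov. 2011 Top500 data points cited above.

genet_cores, genet_rmax_tf = 9_200, 55.6   # Gigabit Ethernet Xeon cluster
ib_cores, ib_rmax_tf = 9_200, 92.0         # InfiniBand Xeon cluster

gain = (ib_rmax_tf / ib_cores) / (genet_rmax_tf / genet_cores) - 1
print(f"InfiniBand: ~{gain:.0%} more Linpack performance per core")
# -> ~65%, matching the slide's figure
```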

Page 24

HPC Systems Networking

Page 25

Rack-to-rack cabling: Recent history in HPC systems

Over time: higher bit-rates, similar lengths, more use of optics, denser connector packing

2002 (40 TF/s): NEC Earth Simulator – all copper, ~1 Gb/s.

2005: IBM Federation switch for ASCI Purple (LLNL) – combination of electrical & optical cabling: copper for short-distance links (≤10 m), optical for longer links (20-40 m); ~3,000 parallel links, 12+12 channels @ 2 Gb/s/channel.

2008 (1 PF/s): IBM Roadrunner (LANL) – 4X DDR InfiniBand (5 Gb/s), 55 miles of active optical cables (http://www.lanl.gov/roadrunner/). Cray Jaguar (ORNL) – InfiniBand, 3 miles of optical cables, longest = 60 m (http://www.nccs.gov/jaguar/).

Page 26

Blue Gene/Q

1. Chip: 16+2 μP cores
2. Single Chip Module
3. Compute card: one chip module, 16 GB DDR3 memory, heat spreader for H2O cooling
4. Node card: 32 compute cards, optical modules, link chips; 5D torus
5a. Midplane: 16 node cards
5b. I/O drawer: 8 I/O cards w/ 16 GB, 8 PCIe Gen2 x8 slots, 3D I/O torus
6. Rack: 2 midplanes
7. System: 96 racks, 20 PF/s

Sustained single-node performance: 10x BG/P, 20x BG/L. MF/Watt: 6x BG/P, 10x BG/L (~2 GF/W, Green 500 criteria). Software and hardware support for programming models for exploitation of node hardware concurrency.

Page 27

Midplane connectors

Compute cards

Node DCAs

Optics modules / Link chips

Hose quick connects

BG/Q Compute Drawer – Technical Drawing

Page 28

BG/Q Compute Drawer

Page 29

Compute Drawer – Rear Isometric View, showing optics modules

Optics modules placed in sockets (mechanically retained by features in socket)

Page 30

BG/Q Input/Output Drawer

Full height, 25W PCI cards,

Ball bearing slides for field maintenance

12-Fiber connections

Clock input

48V power input

8 compute cards (different PN than in the compute rack because of heatsink vs. cold plate)

Axial fans

Page 31

[Figure: Linpack GF/Watt by system (source: www.green500.org) – POWER7 (2011): 0.484; BG/P (2007): 0.370; Roadrunner (2008): 0.44; Cray XT5 (2009): 0.250; TianHe-1A (2010): 0.635; Fujitsu K (2010): 0.825; Titech (2010): 0.852; Nagasaki (2011): 1.376; BG/Q DD1 (2010): 1.680; BG/Q DD2 (2011): 2.097.]

At $0.10/kWh, a 1 MW savings in power saves ~$1M/year; the TCO saving is much more. Low power is key to scaling to large systems.

System Power Efficiency (Green500 06/2011)
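The "1 MW saved ≈ $1M/year" figure follows from continuous operation at the quoted $0.10/kWh; a one-line check:

```python
# Arithmetic behind "1 MW saved ~= $1M/year" at $0.10/kWh, assuming
# continuous (24x365) operation.
kw, hours_per_year, dollars_per_kwh = 1000, 24 * 365, 0.10
print(f"${kw * hours_per_year * dollars_per_kwh / 1e6:.2f}M per MW-year")
# -> $0.88M, i.e. roughly $1M/year, before the larger TCO effects noted above
```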

Page 32

Blue Gene/Q


Industrial Design

BQC DD2.0 4-rack system, 5D torus

32 Node Board

Page 33

All data center power & cooling infrastructure is included in the compute/storage/network rack:
No need for external power distribution or computer room air handling equipment.
All components correctly sized for max efficiency – very good 1.18 Power Utilization Efficiency.
Integrated management for all compute, storage, network, power, & thermal resources.
Scales to 512K P7 cores (192 racks) – without any other hardware except optical fiber cables.

PERCS/Power 775 “Data-Center-In-A-Rack” System Architecture

Integrated Storage – 384 2.5" HDD or SSD drives per drawer; 230 TBytes/drawer (w/ 600 GB 10K SAS disks), 154 GB/s BW/drawer, software-controlled RAID, up to 6 per rack (replacing server drawers), up to 1.38 PBytes/rack.

Integrated Cooling – water pumps and heat exchangers. All heat is transferred directly to building chilled water – no thermal load on the room.

Integrated Power Regulation, Control, & Distribution – runs off any building voltage supply world-wide (200-480 VAC or 370-575 VDC), converts to 360 VDC for in-rack distribution. Full in-rack redundancy and automatic fail-over, 4 power cords. Up to 252 kW/rack max, 163 kW typical.

Servers – 256 Power7 cores per drawer, 1-12 drawers per rack.
Compute: 8-core Power7 CPU chip, 3.7 GHz, 12s technology, 32 MB L3 eDRAM/chip, 4-way SMT, 4 FPUs/core, Quad-Chip Module; >90 TF/rack.
No accelerators: normal CPU instruction set, robust cache/memory hierarchy – easy programmability, predictable performance, mature compilers & libraries.
Memory: 512 GBytes/sec per QCM (0.5 Byte/FLOP), 12 Terabytes/rack.
External I/O: 16 PCIe Gen2 x16 slots per drawer; SAS or external connections.
Network: integrated Hub (HCA/NIC & switch) per QCM (8 per drawer), with a 54-port switch, for a total of 12 Tbits/s (1.1 TByte/s net BW) per Hub:

Host connection: 4 links, (96+96) GB/s aggregate (0.2 Byte/FLOP)
On-card electrical links: 7 links to other hubs, (168+168) GB/s aggregate
Local-remote optical links: 24 links to near hubs, (120+120) GB/s aggregate
Distant optical links: 16 links to far hubs (to 100 m), (160+160) GB/s aggregate
PCI-Express: 2-3 per hub, (16+16) to (20+20) GB/s aggregate
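Summing the per-direction aggregates in the list above is a quick cross-check of the ~1.1 TByte/s net bandwidth the slide quotes per Hub:

```python
# Cross-check of the per-Hub bandwidth: sum the per-direction GB/s aggregates
# listed above (PCIe taken at the upper end of its quoted range).

gb_per_s_each_way = {
    "host connection (4 links)": 96,
    "on-card electrical (7 links)": 168,
    "local-remote optical (24 links)": 120,
    "distant optical (16 links)": 160,
    "PCI-Express (2-3 per hub)": 20,
}

total = sum(gb_per_s_each_way.values())
print(f"~{total} GB/s each way, ~{2 * total / 1000:.1f} TB/s bidirectional per Hub")
# -> ~564 GB/s each way, ~1.1 TB/s, consistent with the slide's net-BW figure
```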


Page 34

P7-IH – Cable Density

Many, many optical fibers. Each of these cables is a 24-fiber multimode cable, carrying (10+10) GBytes/sec of traffic.

46 Terabit/s Optical BackplaneUp to 3 per rack

(100+100) Gb/s Optical CablesUp to 1,536 per rack

Page 35

P7 IH System Hardware – Node Front View (Blue Waters: ~1200 Node drawers)

P7 QCM (8x)

Hub Module (8x)

D-Link Optical Interface – connects to other Super Nodes

360VDC Input Power Supplies

Water Connection

L-Link Optical Interface – connects 4 Nodes to form a Super Node

Memory DIMMs (64x)

Memory DIMMs (64x)

PCIe Interconnect

1m W x 1.8m D x 10cm H

IBM’s HPCS Programpartially supported by

MLC ModuleHub Assembly

PCIe Interconnect

D-Link Optical Interface – connects to other Super Nodes

Avago microPOD™ – all off-node communication is optical

Page 36

Hub Module – MCM with Optical I/Os

This shows the Hub module with a full complement of optical I/Os. The module in the photo is partially assembled, to show construction – the full module hardware is symmetric.

Heat spreader for optical devices; cooling / load saddle for optical devices

Optical Transmitter/Receiver Devices 12 channel x 10 Gb/s 28 pairs per Hub - (2,800+2,800) Gb/s of optical I/O BW

Heat Spreader over HUB ASIC

Strain relief for optical ribbons – a total of 672 fiber I/Os per Hub, 10 Gb/s each

Hub ASIC (Under Heat Spreader)

Page 37

Overview: Recent strategic directions in IBM Research

Page 38

IBM Optical Interconnect Research: Meeting Key Challenges for Optical Links

Increasing aggregate system performance will demand more optical links:
Bandwidth demands are steadily increasing – higher channel rates, more parallel channels
Optical link budgets are substantially more challenging at higher data rates
Density requirements become increasingly important as the number of links in systems grows

IBM Research has active programs in a variety of areas of optical interconnect:
Transceiver opto-mechanical design – advanced packaging, 3D chip-stacking and silicon carriers, through-silicon optical vias. Example: 24 + 24 channel highly integrated transceivers
Optical PCBs – polymer optical waveguides, both above and in PCBs
Advanced circuit design in SiGe & CMOS drivers & receivers. Example: >30 Gb/s SiGe links, 25 Gb/s CMOS links
Optical transmitter equalization for better link margin, jitter, power efficiency

Silicon Photonics

Page 39

24-channel 850-nm transceivers packaged on Si carriers

850 nm is the datacom industry standard wavelength: multiple suppliers, low cost, optimized MMF fiber bandwidth.

Retain the highly integrated packaging approach: dense Optomodules that "look" like surface-mount electrical chip carriers. The Si carrier platform gives a high level of integration of the electrical and optical components with high-density interconnection, and requires through-silicon vias (both optical and electrical).

Terabus 850 nm, 24 TX + 24 RX transceiver: 2x12 VCSEL and PD arrays; two 130-nm CMOS ICs; TSV Si carrier; optical vias in the Si carrier; side-by-side flip-chip assembly.

[Figure: cross-section of the optically enabled MCM (OE-MCM) – the Optochip (LDD and RX ICs plus VCSEL and PD arrays flip-chipped on the Si carrier) sits with conventional ICs on an organic carrier and couples through lens arrays to polymer waveguides in the O-PCB.]

Page 40

Assembled 24-channel 850-nm modules for optical PCB links

First row of solder joints visible beneath the Optochip

Flip-chip assembly of OE and CMOS chips to the Si carrier using AuSn solder "micro bumps"

Flip-chip attachment of the Si-carrier Optochip to the organic carrier using a PbSn solder transfer process

Page 41

360Gb/s, 24-channel, 850-nm Transceiver Modules Demonstrated

Highest aggregate bandwidth for any 850-nm parallel optical module: 360 Gb/s bi-directional. Power efficiency < 10 pJ/bit.

[Figure: BER vs. average received power (dBm) for the 24-channel module at 10, 12.5, and 15 Gb/s per channel.]

• F. E. Doany et al.,"Terabit/s-Class 24-Channel Bidirectional Optical Transceiver Module Based on TSV Si Carrier for Board-Level Interconnects," ECTC 2010, June 2010.

Page 42

"Holey" Optochip – CMOS IC with optical through-silicon vias

• C. L. Schow, et al., "A 24-Channel, 300 Gb/s, 8.2 pJ/bit, Full-Duplex Fiber-Coupled Optical Transceiver Module Based on a Single "Holey" CMOS IC," J. Lightwave Tech., Vol. 29, No. 4, Feb. 2011.

(24+24) x 12.5 Gbps single-chip transceiver. Flip-chip mounting of VCSELs & PDs directly on driver/receiver circuits. 300 Gb/s aggregate BW at 8.2 pJ/bit.

Page 43

O-PCB preparation and assembly

45° turning mirrors formed by laser-ablating air cavities in the WGs – total internal reflection (TIR) mirrors, 0.5-0.7 dB loss.

48-element WG lens arrays aligned to the flex WG; the WG flex is attached to the PCB with pre-deposited BGA solder balls.

[Figures: O-PCB assembly showing turning mirrors & lens array, flexible WG, BGA, and alignment pins; Optomodule cross-section (LDD, RX, VCSEL, and PD on the Si carrier over the organic carrier, coupling through lens arrays to the flex polymer waveguides on the O-PCB); WG cross-section with ~35-40 μm dimensions; measured total loss (dB) vs. channel number for the 48 waveguide channels.]

Page 44

850-nm Optical PCB in Operation

15 channels each direction at 15 Gb/s, BER < 10^-12.
225 Gb/s bi-directional aggregate; 145 mW/link = 9.7 pJ/bit.
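The pJ/bit figures used throughout these slides are simply power divided by data rate; for example, for the 15 Gb/s link above:

```python
# Energy-per-bit bookkeeping used throughout these slides: pJ/bit = mW / (Gb/s).
def pj_per_bit(link_power_mw, data_rate_gbps):
    return link_power_mw / data_rate_gbps

print(pj_per_bit(145, 15))   # the 15 Gb/s optical-PCB link above -> ~9.7 pJ/bit
```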

[Figure: optical PCB link with 48 waveguides, carrying 15+15 channels at 15 Gb/s (eye diagrams).]

• F. E. Doany et al., "Terabit/s-class board-level optical interconnects through polymer waveguides using 24-channel bidirectional transceiver modules," ECTC 2011, June 2011.
• C. L. Schow et al., "225 Gb/s bi-directional integrated optical PCB link," OFC 2011, post-deadline paper, Mar. 2011.

Page 45

SiGe 8HP: Pushing the Speed Limits of VCSEL Links

FFE circuit included in TX output for VCSEL pre-distortion/pre-emphasis and in RX output to drive through packages and boards

[Figure: link block diagram – LDD chip (input, delay, FFE output driver, V_LD supply) driving the VCSEL; through an optical attenuator to the RX chip (PD, TIA, limiting amplifier with offset cancellation, delay, FFE output driver).]

Page 46

Record SiGe 8HP full-link: 30 Gb/s using 10Gb/s OEs

First 30 Gb/s VCSEL-based link, using 10 Gb/s VCSELs. Applications for a multimode reference receiver; novel TIA design; operates with margin at 30G. 100 m transmission with minimal penalty verified at 25 Gb/s.

[Figures: BER vs. average received power (Pavg, dBm) at 20, 25, and 30 Gb/s; timing bathtub curves (BER vs. sampling time in UI) at 20 and 25 Gb/s, with 0.56 UI and 0.44 UI eye openings; VCSEL-output and RX-output eye diagrams at 30 Gb/s (13.3 ps, 1.1 mW and 13.3 ps, 210 mV).]

• C. L. Schow and A. V. Rylyakov, “30 Gbit/s, 850 nm, VCSEL-based optical link,” Electron. Lett., September 1, 2011.

Page 47

Applying Signal Processing to Low Power Optical Links

[Figure: CMOS link block diagram – LDD chip (input, pre-driver, main buffer plus delay/tap buffer forming the FFE output, PA driving the VCSEL) over an MMF "channel" to the RX chip (PD, TIA, limiting amplifier, output buffer).]

Electrical links have increasingly used signal processing to improve performance – optics can do this too! Pre-distortion compensation for the combined VCSEL/TIA and LA:
– increases the obtainable link speed to 20 Gb/s
– 5.7 pJ/bit total link power consumption, while maintaining BER < 10^-12 and >200 mVppd at the RX outputs

[Figure: power efficiency (pJ/bit) vs. data rate (Gb/s), with and without TX pre-distortion.]

• C. L. Schow et al. "Transmitter pre-distortion for simultaneous improvements in bit-rate, sensitivity, jitter, and power efficiency in 20 Gb/s CMOS-driven VCSEL links," OFC 2011, post deadline paper, Mar. 2011.

Page 48

FFE Equalizers for Both TX and RX Outputs

Feed-Forward Equalizer (FFE) circuit for adjustable output pre-emphasis

[Figure: FFE circuit – the input (after the LA) feeds a main buffer and, through an adjustable delay (VBDELAY), a tap buffer (VBTAP); the FFE output combines the main-buffer output with the weighted, delayed tap-buffer output. Waveforms show the adjustable delay and tap weight, and FFE outputs at 10 Gb/s and 20 Gb/s.]

The Feed-Forward Equalizer (FFE) design leverages extensive electrical serial-link design experience. This is the first demonstration of equalization heavily applied to VCSEL outputs for improved link performance.
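For readers less familiar with FFE pre-emphasis, here is a minimal numerical sketch of a generic 2-tap feed-forward equalizer of the kind described above (a main tap plus a weighted, delayed tap). The tap weight and the one-sample-per-bit simplification are illustrative assumptions; this is not a model of IBM's circuit.

```python
# Minimal numerical sketch of 2-tap feed-forward pre-emphasis: output = main
# tap minus a weighted, one-bit-delayed tap. Generic model, not IBM's circuit.

def ffe_2tap(samples, tap_weight=0.25):
    """Apply y[n] = x[n] - w * x[n-1] to a sampled NRZ waveform."""
    out, prev = [], 0.0
    for x in samples:
        out.append(x - tap_weight * prev)
        prev = x
    return out

nrz = [0, 0, 1, 1, 1, 0, 1, 0, 0, 1]   # one sample per bit, for clarity
print(ffe_2tap(nrz))
# Transitions are boosted relative to repeated bits (a 0->1 stays at 1.0, a
# repeated 1 droops to 0.75), pre-distorting the drive waveform to offset
# low-pass rolloff in the VCSEL and channel.
```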

Page 49

Double Equalized Links: 20 Gb/s

Equalizers enable 20 Gb/s operation, with dramatic improvements in eye opening.

Additional 0.22 UI (22 ps) eye opening, even at 10 Gb/s

[Figure: test setup – pattern generator (PRBS 2^7-1) into the FFE-equalized TX, over 50-μm MMF through a variable attenuator to the RX and its FFE, then across a 10" NELCO 4000 board trace to the error detector and oscilloscope. TX ER = 2.0; TX output power OMA = -1.4 dBm, Pavg = +0.3 dBm. Eye diagrams at the TX output, RX output, and after the board, with no equalization and with TX and RX EQ.]

[Figure: BER vs. Pavg (dBm) for the double-equalized link over the 10" board at 10, 12.5, 15, 17.5, and 20 Gb/s.]

Link power breakdown (mW): TX_PA 49, TX_OS 23, VCSEL 10.7, TX equalizer 5.4 (included in TX total) – TX total 82.7; RX_TIA 27.3, RX_LA 65.1, RX_IO 31.2, RX equalizer 3.9 (included in RX total) – RX total 123.6; link total 206.3 mW.

• A. V. Rylyakov et al., “Transmitter Pre-Distortion for Simultaneous Improvements in Bit-Rate, Sensitivity, Jitter, and Power Efficiency in 20 Gb/s CMOS-driven VCSEL Links,” J. of Lightwave Technol., 2012.

Page 50

Extending CMOS links to 25 Gb/s

[Figures: BER vs. average received power (dBm) at 10, 15, 17.5, 20, and 22 Gb/s, with VCSEL-output and RX-output eye diagrams at 15, 20, 22, and 25 Gb/s; test setup with 90-nm CMOS LDD and 90-nm CMOS RX, pattern generator, BERT, and oscilloscope; power efficiency (pJ/bit) vs. data rate (Gb/s).]

Links operate up to 25 Gb/s – a first for CMOS. Record power efficiencies: 2.6 pJ/bit @ 15 Gb/s, 3.1 pJ/bit @ 20 Gb/s. Transmitter equalization will likely yield further improvement.

• C. L. Schow et al., “A 25 Gb/s, 6.5 pJ/bit, 90-nm CMOS Based Multimode Optical Link” Submitted to IEEE Photonics Technol. Lett., 2011.

Page 51

Silicon Photonics-Related: Coupling to on-chip waveguides

Edge-coupling of optical waveguides in a silicon photonics chip matches well with standard IC packaging practice & power/cooling requirements. Key problem: low-loss coupling to standard optical fiber.

• F. E. Doany et al., “Multichannel High-Bandwidth Coupling of Ultradense Silicon Photonic Waveguide Array to Standard-Pitch Fiber Array”, JLT, Vol. 29, No. 4, Feb.2011

Page 52

Looking Forward: Exascale Systems

Page 53

Evolution of Supercomputer-scale systems – 1980s-2020s

In 2018-2020, we'll be building Exascale systems – 10^18 ops/sec – with 10s of millions of processing cores and near billion-way parallelism.

Yes, there are apps that can use this processing power: Molecular-level cell simulations, Modeling brain dynamics at level of individual neurons, Multi-scale & multi-rate fluid dynamics, …

Massive interconnection (BW & channel count) will be needed - within & between racks.

Supercomputing, 1980s: 1-8 processors in 1 rack
Supercomputing, 2000s: 10,000s of CPUs in 100s of racks
Supercomputing, 2020s: 10M to >100M CPU cores, >500 racks??

Page 54

2015-2020 – Exascale Computing Systems

We're expecting to need to build balanced ExaFLOP/s-scale systems in ~2018, with 100-million- to 1-billion-way parallelism.

Roadmaps to Exascale are well explored in the DARPA/IPTO industry-wide study "ExaScale Computing Study: Technology Challenges in Achieving Exascale Systems," by Peter Kogge et al., http://www.nd.edu/~kogge/reports.html (Peter Kogge is a former IBM Fellow, now at Notre Dame).

Key points regarding interconnect / networking:“The single most difficult and pervasive challenge perceived by the study group dealt with energy, namely,...energy per operation”“[The] energy in data transport will dwarf the traditional computational component in future Exascale systems....particularly so for the largest data center class.” [italics added]

Exaggerating a bit: energy for data transport is *the* problem for exascale systems – ~200x more energy is needed to transport a bit from a nearest-neighbor chip than to operate on it.

Energy needed for a floating-point operation (~'13-'16): 0.05-0.1 pJ/bit
Energy needed for data transport on-card, ~3-10 inches: 2-10 pJ/bit – up to 200x higher
Energy needed for data transport across a big system: ~20-100 pJ/bit – up to 2,000x higher

Assume: 3-7-hop network diam., 3-8 pJ/bit per link for transmission, 2 pJ/bit routing in ASIC
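Plugging the slide's own assumption (3-7 network hops, 3-8 pJ/bit per link, 2 pJ/bit of routing) into a short calculation reproduces the scale of the cross-system transport energy and its ratio to the FLOP energy:

```python
# Scale of the cross-system transport energy, using the assumption stated
# above: 3-7 network hops, 3-8 pJ/bit per link, 2 pJ/bit of routing per hop.

def system_transport_pj_per_bit(hops, link_pj_per_bit, routing_pj_per_bit=2):
    return hops * (link_pj_per_bit + routing_pj_per_bit)

flop_pj = (0.05, 0.1)                          # energy of the operation itself
low = system_transport_pj_per_bit(3, 3)        # best case: 15 pJ/bit
high = system_transport_pj_per_bit(7, 8)       # worst case: 70 pJ/bit
print(f"cross-system transport ~{low}-{high} pJ/bit, "
      f"i.e. ~{low / flop_pj[1]:.0f}x to ~{high / flop_pj[0]:.0f}x the FLOP energy")
# -> ~15-70 pJ/bit, roughly 150x-1400x: the same order as the slide's
#    ~20-100 pJ/bit and "up to 2,000x" figures
```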

Yes, 100-million- to billion-way systems.

Yes, I know the software people will disagree – software is another critical problem for exascale.

Page 55

The Road to Exascale

Assumptions, based on typical historical trends (see, e.g., top500.org and green500.org):
10X performance, 4 years later, costs 1.5X more dollars
10X performance, 4 years later, consumes 2X more power

Year | Peak performance | Machine cost | Total power consumption
2008 | 1 PF | $150M | 2.5 MW
2012 | 10 PF | $225M | 5 MW
2016 | 100 PF | $340M | 10 MW
2020 | 1000 PF (1 EF) | $500M | 20 MW
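The table follows directly from the two scaling rules above, applied to the 2008 baseline; a small sketch of that projection:

```python
# Projection using the two rules above: every 4 years, 10x the performance
# costs ~1.5x more and draws ~2x more power, starting from the 2008 baseline.

year, perf_pf, cost_m, power_mw = 2008, 1, 150.0, 2.5
for _ in range(4):
    print(f"{year}: {perf_pf:>5g} PF, ~${cost_m:.0f}M, ~{power_mw:g} MW")
    year, perf_pf = year + 4, perf_pf * 10
    cost_m, power_mw = cost_m * 1.5, power_mw * 2
# -> 2020 lands at 1000 PF (1 EF), ~$506M, 20 MW, matching the table's rounded values.
```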

Acknowledgment: J. Kash

Page 56

How much optics, and at what cost?

Target: >0.2 Byte/FLOP of I/O bandwidth plus >0.2 Byte/FLOP of memory bandwidth.
2008: optics replaces electrical cables (0.012 Byte/FLOP, 40 mW/Gb/s)
2012: optics replaces the electrical backplane (0.1 Byte/FLOP, 10% of system power/cost)
2016: optics replaces the electrical PCB (0.2 Byte/FLOP, 20% of system power/cost)
2020: optics on-chip (or to memory) (0.4 Byte/FLOP, 40% of system power/cost)

Year | Peak performance | (Bidi) optical bandwidth | Optics cost | Optics power consumption
2008 | 1 PF | 0.012 PB/s (1.2 x 10^5 Gb/s) | $2.4M | 0.012 MW
2012 | 10 PF | 1 PB/s (10^7 Gb/s) | $22M | 0.5 MW
2016 | 100 PF | 20 PB/s (2 x 10^8 Gb/s) | $68M | 2 MW
2020 | 1000 PF (1 EF) | 400 PB/s (4 x 10^9 Gb/s) | $200M | 8 MW

Acknowledgment: J. Kash

Page 57

Cost and Power per bit (unidirectional)

Future directions for optical cables:
Lower cost (reducing >60%/year)
Much more BW (increasing >210%/year)
Much lower power (improving >45%/year)

Variety of methods for reaching these targets:
Higher bit-rates: 10-20-20 Gb/s per channel
Smaller footprint for O/E modules
Move optics closer to logic
New technologies

Year | Peak performance | Number of optical channels | Optics cost | Optics power consumption
2008 | 1 PF | 48,000 (@ 5 Gb/s) | $10,000 per Tb/s | 50 mW/Gb/s (50 pJ/bit)
2012 | 10 PF | 2 x 10^6 (@ 10 Gb/s) | $1,100 per Tb/s | 25 mW/Gb/s
2016 | 100 PF | 4 x 10^7 (@ 14-25 Gb/s) | $170 per Tb/s | 5 mW/Gb/s
2020 | 1000 PF (1 EF) | 8 x 10^8 (@ ~25 Gb/s?) | $25 per Tb/s | 1 mW/Gb/s

Acknowledgment: J. Kash

Page 58

Summary

Page 59

Summary Remarks

The future is bright. Optics will play a steadily-increasing role in systems – Must feed the transistors

Bandwidth-density, power-efficient data transport, reliable signal integrity

Parallel optical interconnects are fast replacing copper cables today

Lots of interesting systems-level challenges, lots of technologies to choose from

Optical interconnect for supercomputers and other high-end compute systems will likely grow at >200% CAGR (deployed Gb/s), assuming cost can be improved at 60% CAGR ($/Gb/s) and power can be improved at 45% CAGR (mW/Gb/s) at the same time.

We're banking on this happening – the question is (/ questions are): How?

For Exascale systems in 2015-2020, interconnect is *the* interesting technical problem. CPUs/GPUs/SPUs/APUs get the glory, and are interesting business-wise, but technically, FLOPs are easy. Storage capacity is harder, but technically requires no breakthroughs.

Data transfer – chip/chip, card/card, rack/rack – is *hard*. Will account for >80% of the system power, & 50-90% (app-dependent) of performance

Page 60

Thank you kindly
