
Interconnect Basics

Page 1: Interconnect Basics

Interconnect Basics

1

Page 2: Interconnect Basics

Where Is Interconnect Used?

To connect components. Many examples: processors and processors, processors and memories (banks), processors and caches (banks), caches and caches, I/O devices.

2

[Figure: components attached to an interconnection network]

Page 3: Interconnect Basics

Why Is It Important?

Affects the scalability of the system: How large of a system can you build? How easily can you add more processors?

Affects performance and energy efficiency: How fast can processors, caches, and memory communicate? How long are the latencies to memory? How much energy is spent on communication?

3

Page 4: Interconnect Basics

Interconnection Network Basics

Topology
- Specifies the way switches are wired
- Affects routing, reliability, throughput, latency, ease of building

Routing (algorithm)
- How does a message get from source to destination?
- Static or adaptive

Buffering and Flow Control
- What do we store within the network? Entire packets, parts of packets, etc.?
- How do we throttle during oversubscription?
- Tightly coupled with the routing strategy

4

Page 5: Interconnect Basics

Topology

Bus (simplest), point-to-point connections (ideal and most costly), crossbar (less costly), ring, tree, Omega, hypercube, mesh, torus, butterfly, …

5

Page 6: Interconnect Basics

Metrics to Evaluate an Interconnect Topology

Cost, latency (in hops, in nanoseconds), contention.

Many others exist that you should think about: energy, bandwidth, overall system performance.

6

Page 7: Interconnect Basics

Bus
+ Simple
+ Cost effective for a small number of nodes
+ Easy to implement coherence (snooping and serialization)
- Not scalable to a large number of nodes (limited bandwidth; electrical loading reduces frequency)
- High contention leads to fast saturation

7

[Figure: four processors, each with a private cache, sharing a single bus to memory]

Page 8: Interconnect Basics

Point-to-Point

Every node connected to every other node.
+ Lowest contention
+ Potentially lowest latency
+ Ideal, if cost is not an issue
- Highest cost: O(N) connections/ports per node, O(N^2) links
- Not scalable
- How to lay out on chip?

8

[Figure: 8 nodes (0-7), each directly connected to every other node]

Page 9: Interconnect Basics

Crossbar

Every node connected to every other (non-blocking), except that only one node can be using a given connection at any given time.
Enables concurrent sends to non-conflicting destinations.
Good for a small number of nodes.

+ Low latency and high throughput
- Expensive
- Not scalable: O(N^2) cost
- Difficult to arbitrate as N increases

Used in core-to-cache-bank networks in
- IBM POWER5
- Sun Niagara I/II

9

[Figure: 8x8 crossbar connecting nodes 0-7]
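To make the arbitration point concrete, here is a minimal Python sketch (not from the slides; the per-cycle, fixed-priority scheme and names are assumptions) of how a crossbar grants at most one input per output, so non-conflicting sends proceed in parallel:

```python
def crossbar_arbitrate(requests):
    """One cycle of crossbar arbitration (illustrative sketch).
    `requests` maps input port -> requested output port. Each output
    grants at most one input, so non-conflicting sends proceed in parallel."""
    grants = {}
    granted_outputs = set()
    for inp in sorted(requests):          # fixed-priority arbitration for simplicity
        out = requests[inp]
        if out not in granted_outputs:    # output still free this cycle
            grants[inp] = out
            granted_outputs.add(out)
    return grants

# Inputs 0 and 3 both want output 5; only one is granted this cycle.
print(crossbar_arbitrate({0: 5, 1: 2, 3: 5, 7: 0}))
```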

Page 10: Interconnect Basics

Another Crossbar Design

10

[Figure: an alternative 8x8 crossbar organization for nodes 0-7]

Page 11: Interconnect Basics

Sun UltraSPARC T2 Core-to-Cache Crossbar

High-bandwidth interface between 8 cores and 8 L2 banks & NCU.

4-stage pipeline: request, arbitration, selection, transmission.

2-deep queue for each source/destination pair to hold data transfer requests.

11

Page 12: Interconnect Basics

Buffered Crossbar

12

[Figure: a 4-node buffered crossbar with a buffer at each crosspoint, an arbiter per output, and flow control per input, shown next to a 4-node bufferless crossbar; nodes attach through network interfaces (NIs)]

+ Simpler arbitration/scheduling

+ Efficient support for variable-size packets

- Requires N^2 buffers

Page 13: Interconnect Basics

Can We Get Lower Cost than A Crossbar? Yet still have low contention?

Idea: Multistage networks

13

Page 14: Interconnect Basics

Multistage Logarithmic Networks

Idea: indirect networks with multiple layers of switches between terminals/nodes.
Cost: O(N log N), latency: O(log N).
Many variations (Omega, Butterfly, Benes, Banyan, …).

Omega Network:

14

[Figure: 8x8 Omega network; source and destination terminals labeled 000-111; two paths are shown conflicting on a shared link]
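As an illustrative sketch (not part of the original slides), destination-tag routing through an N = 2^n input Omega network can be traced in a few lines of Python: between stages the wires are permuted by a perfect shuffle, and each 2x2 switch picks its upper or lower output from the next destination bit (most significant first):

```python
def omega_route(src: int, dst: int, n_bits: int):
    """Trace the wire a packet follows through an n-stage Omega network
    using destination-tag routing (illustrative sketch)."""
    n = 1 << n_bits
    wire = src
    path = [wire]
    for stage in range(n_bits):
        # Perfect shuffle between stages: rotate the wire label left by one bit.
        wire = ((wire << 1) | (wire >> (n_bits - 1))) & (n - 1)
        # The 2x2 switch routes to its upper/lower output according to the
        # destination bit for this stage (most significant bit first).
        dst_bit = (dst >> (n_bits - 1 - stage)) & 1
        wire = (wire & ~1) | dst_bit
        path.append(wire)
    assert wire == dst
    return path

# Two packets conflict if they occupy the same wire at the same stage.
print(omega_route(0b001, 0b110, 3))
print(omega_route(0b101, 0b111, 3))
```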

Page 15: Interconnect Basics

Multistage Circuit Switched

More restrictions on feasible concurrent Tx-Rx pairs, but more scalable than a crossbar in cost, e.g., O(N log N) for a Butterfly.

15

[Figure: 8x8 multistage circuit-switched network built from 2x2 crossbars]

Page 16: Interconnect Basics

Multistage Packet Switched

Packets “hop” from router to router, pending availability of the next-required switch and buffer

16

[Figure: 8x8 multistage packet-switched network built from 2x2 routers]

Page 17: Interconnect Basics

Aside: Circuit vs. Packet Switching

Circuit switching sets up the full path
  Establish the route, then send data (no one else can use those links)
  + Faster arbitration
  - Setting up and bringing down links takes time

Packet switching routes per packet
  Route each packet individually (possibly via different paths)
  If a link is free, any packet can use it
  - Potentially slower: must dynamically switch
  + No setup/teardown time
  + More flexible; does not underutilize links

17

Page 18: Interconnect Basics

Switching vs. Topology

The circuit/packet switching choice is independent of topology: it is a higher-level protocol for how a message gets sent to a destination.

However, some topologies are more amenable to circuit vs. packet switching.

18

Page 19: Interconnect Basics

Another Example: Delta Network

Single path from source to destination

Does not support all possible permutations

Proposed to replace costly crossbars as processor-memory interconnect

Janak H. Patel, "Processor-Memory Interconnections for Multiprocessors," ISCA 1979.

19

8x8 Delta network

Page 20: Interconnect Basics

Another Example: Omega Network

Single path from source to destination

All stages are the same

Used in NYU Ultracomputer

Gottlieb et al. “The NYU Ultracomputer-designing a MIMD, shared-memory parallel machine,” ISCA 1982.

20

Page 21: Interconnect Basics

Ring
+ Cheap: O(N) cost
- High latency: O(N)
- Not easy to scale
- Bisection bandwidth remains constant

Used in Intel Haswell, Intel Larrabee, IBM Cell, many commercial systems today

21

[Figure: ring connecting nodes, each containing a processor (P), switch (S), and memory (M)]

Page 22: Interconnect Basics

Unidirectional Ring

Simple topology and implementation.
Reasonable performance if N and the performance needs (bandwidth & latency) are still moderately low.
O(N) cost; N/2 average hops; latency depends on utilization.

22

[Figure: unidirectional ring of N 2x2 routers, numbered 0, 1, …, N-2, N-1]

Page 23: Interconnect Basics

Mesh

O(N) cost
Average latency: O(sqrt(N))
Easy to lay out on-chip: regular and equal-length links
Path diversity: many ways to get from one node to another

Used in the Tilera 100-core processor and many on-chip network prototypes

23
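A small sketch (the function name is an assumption, not from the slides) that checks the O(sqrt(N)) claim by computing the average Manhattan hop count over all node pairs of a k x k mesh:

```python
import itertools

def mesh_avg_hops(k: int) -> float:
    """Average Manhattan distance between distinct nodes of a k x k mesh.
    Grows as roughly 2k/3, i.e., O(sqrt(N)) for N = k*k nodes."""
    nodes = list(itertools.product(range(k), range(k)))
    total = pairs = 0
    for (x1, y1), (x2, y2) in itertools.combinations(nodes, 2):
        total += abs(x1 - x2) + abs(y1 - y2)
        pairs += 1
    return total / pairs

for k in (4, 8, 16):
    print(k * k, "nodes:", round(mesh_avg_hops(k), 2), "average hops")
```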

Page 24: Interconnect Basics

Torus

A mesh is not symmetric at its edges: performance is very sensitive to whether a task is placed on an edge node or a middle node. A torus avoids this problem.
+ Higher path diversity (and bisection bandwidth) than a mesh
- Higher cost
- Harder to lay out on-chip
- Unequal link lengths

24

Page 25: Interconnect Basics

Torus, continued

Weave nodes to make inter-node latencies ~constant

25

[Figure: torus with woven node placement; each node contains a switch (S), memory (M), and processor (P)]

Page 26: Interconnect Basics

Trees

Planar, hierarchical topology
Latency: O(log N)
Good for local traffic
+ Cheap: O(N) cost
+ Easy to lay out
- Root can become a bottleneck; fat trees avoid this problem (CM-5)

26

[Figure: H-Tree and Fat Tree layouts]

Page 27: Interconnect Basics

CM-5 Fat Tree

Fat tree based on 4x2 switches.
Randomized routing on the way up.
Combining, multicast, and reduction operators supported in hardware.

Thinking Machines Corp., "The Connection Machine CM-5 Technical Summary," Jan. 1992.

27

Page 28: Interconnect Basics

Hypercube

Latency: O(log N)
Radix: O(log N)
#Links: O(N log N)
+ Low latency
- Hard to lay out in 2D/3D

28

[Figure: 0-D through 4-D hypercubes; the 4-D hypercube's nodes are labeled with 4-bit addresses 0000-1111]
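A hedged sketch of dimension-order (e-cube) routing on a hypercube, which makes the O(log N) latency concrete: each hop fixes one differing address bit, so a path is at most log2(N) hops (the function name is an assumption):

```python
def hypercube_route(src: int, dst: int, dim: int):
    """Dimension-order (e-cube) routing on a `dim`-dimensional hypercube:
    correct one differing address bit per hop, so at most `dim` hops."""
    path = [src]
    node = src
    for d in range(dim):
        if (node ^ dst) & (1 << d):   # bit d differs from the destination
            node ^= (1 << d)          # traverse the link along dimension d
            path.append(node)
    return path

# 4-D hypercube (16 nodes): route from 0000 to 1011.
print([format(n, "04b") for n in hypercube_route(0b0000, 0b1011, 4)])
# Hop count equals the Hamming distance between source and destination addresses.
```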

Page 29: Interconnect Basics

Caltech Cosmic Cube

64-node message-passing machine

Seitz, “The Cosmic Cube,” CACM 1985.

29

Page 30: Interconnect Basics

Handling Contention

Two packets trying to use the same link at the same time

What do you do?
- Buffer one
- Drop one
- Misroute one (deflection)

Tradeoffs?

30

Page 31: Interconnect Basics

Bufferless Deflection Routing

Key idea: packets are never buffered in the network. When two packets contend for the same link, one is deflected.[1]

New traffic can be injected whenever there is a free output link.

[1] Baran, "On Distributed Communication Networks," RAND Tech. Report, 1962 / IEEE Trans. Comm., 1964.

31
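The following Python sketch (my own illustration, not the routers described in the cited work; names are assumptions) shows one cycle of deflection arbitration at a single router: packets are ranked (e.g., oldest first), winners take their productive port, and losers are deflected to whatever port remains free, so nothing is buffered or dropped:

```python
import random

def deflection_arbitrate(packets, num_ports=4):
    """One cycle of bufferless deflection arbitration at a single router
    (illustrative sketch). Each packet has a preferred ('productive') output
    port; every packet must leave on some port. Bufferless routing maintains
    the invariant len(packets) <= num_ports."""
    free_ports = set(range(num_ports))
    assignments = {}
    # Rank packets, e.g., oldest first (higher age wins arbitration).
    for pkt in sorted(packets, key=lambda p: -p["age"]):
        port = pkt["productive_port"]
        if port not in free_ports:
            # Desired port already taken: deflect to any remaining free port.
            port = random.choice(sorted(free_ports))
        free_ports.remove(port)
        assignments[pkt["id"]] = port
    return assignments

pkts = [
    {"id": "A", "age": 7, "productive_port": 0},
    {"id": "B", "age": 3, "productive_port": 0},  # loses arbitration, gets deflected
    {"id": "C", "age": 5, "productive_port": 2},
]
print(deflection_arbitrate(pkts))
```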

Page 32: Interconnect Basics

Bufferless Deflection Routing

Input buffers are eliminated: flits are buffered in pipeline latches and on network links

32

[Figure: baseline router with buffered North/South/East/West/Local input ports vs. a bufferless router in which the input buffers are removed, leaving only the deflection routing logic]

Page 33: Interconnect Basics

Routing Algorithm Types

Deterministic: always chooses the same path for a communicating source-destination pair

Oblivious: chooses different paths, without considering network state

Adaptive: can choose different paths, adapting to the state of the network

How to adapt:
- Local/global feedback
- Minimal or non-minimal paths

33

Page 34: Interconnect Basics

Deterministic Routing

All packets between the same (source, destination) pair take the same path.

Dimension-order routing
- E.g., XY routing (used in the Cray T3D and many on-chip networks)
- First traverse dimension X, then traverse dimension Y

+ Simple
+ Deadlock freedom (no cycles in resource allocation)
- Could lead to high contention
- Does not exploit path diversity

34
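A minimal sketch of XY dimension-order routing on a 2D mesh (coordinate convention and function name are assumptions); the path between a given (source, destination) pair is always the same:

```python
def xy_route(src, dst):
    """XY dimension-order routing on a 2D mesh (illustrative sketch):
    walk fully along X first, then along Y, so the path is unique."""
    (x, y), (dx, dy) = src, dst
    hops = []
    while x != dx:                      # traverse dimension X first
        x += 1 if dx > x else -1
        hops.append((x, y))
    while y != dy:                      # then traverse dimension Y
        y += 1 if dy > y else -1
        hops.append((x, y))
    return hops

print(xy_route((0, 0), (2, 3)))
# [(1, 0), (2, 0), (2, 1), (2, 2), (2, 3)]
```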

Page 35: Interconnect Basics

Deadlock

No forward progress.
Caused by circular dependencies on resources: each packet waits for a buffer occupied by another packet downstream.

35

Page 36: Interconnect Basics

Handling Deadlock

Avoid cycles in routing
- Dimension-order routing: cannot build a circular dependency
- Restrict the "turns" each packet can take

Avoid deadlock by adding more buffering (escape paths)

Detect and break deadlock
- Preemption of buffers

36

Page 37: Interconnect Basics

Turn Model to Avoid Deadlock

Idea:
- Analyze the directions in which packets can turn in the network
- Determine the cycles that such turns can form
- Prohibit just enough turns to break possible cycles

Glass and Ni, “The Turn Model for Adaptive Routing,” ISCA 1992.

37

Page 38: Interconnect Basics

Oblivious Routing: Valiant's Algorithm

An example of an oblivious algorithm.
Goal: balance network load.
Idea: randomly choose an intermediate destination, route to it first, then route from there to the destination. Between source-intermediate and intermediate-destination, dimension-order routing can be used.

+ Randomizes/balances network load
- Non-minimal (packet latency can increase)

Optimizations:
- Do this only under high load
- Restrict the intermediate node to be close (in the same quadrant)

38
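A hedged sketch of Valiant's algorithm on a 2D mesh (the helper `_xy` is a compact restatement of dimension-order routing, and the uniform choice of intermediate node is an assumption):

```python
import random

def _xy(src, dst):
    """Minimal dimension-order (XY) path on a 2D mesh."""
    (x, y), (dx, dy) = src, dst
    hops = []
    while x != dx:
        x += 1 if dx > x else -1
        hops.append((x, y))
    while y != dy:
        y += 1 if dy > y else -1
        hops.append((x, y))
    return hops

def valiant_route(src, dst, mesh_dim):
    """Valiant's algorithm (sketch): route to a random intermediate node,
    then to the real destination; each phase uses XY routing. Balances
    load at the cost of a possibly non-minimal path."""
    mid = (random.randrange(mesh_dim), random.randrange(mesh_dim))
    return _xy(src, mid) + _xy(mid, dst)

random.seed(0)
print(valiant_route((0, 0), (3, 3), mesh_dim=8))
```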

Page 39: Interconnect Basics

Adaptive Routing

Minimal adaptive
- Router uses network state (e.g., downstream buffer occupancy) to pick which "productive" output port to send a packet to
- Productive output port: a port that gets the packet closer to its destination
+ Aware of local congestion
- Minimality restricts achievable link utilization (load balance)

Non-minimal (fully) adaptive
- "Misroute" packets to non-productive output ports based on network state
+ Can achieve better network utilization and load balance
- Need to guarantee livelock freedom

39
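A minimal adaptive port-selection sketch (port names and the occupancy metric are assumptions): among the productive ports, pick the one whose downstream buffer is least occupied:

```python
def minimal_adaptive_port(cur, dst, occupancy):
    """Minimal adaptive output-port choice at a 2D-mesh router (sketch):
    among the 'productive' ports (those that move the packet closer to its
    destination), pick the one whose downstream buffer is least occupied.
    `occupancy` maps port name -> downstream buffer occupancy."""
    (x, y), (dx, dy) = cur, dst
    productive = []
    if dx > x: productive.append("E")
    if dx < x: productive.append("W")
    if dy > y: productive.append("N")
    if dy < y: productive.append("S")
    if not productive:
        return "Local"                 # packet has arrived; eject it
    return min(productive, key=lambda p: occupancy[p])

# Two productive ports (E and N); N is less congested, so it is chosen.
print(minimal_adaptive_port((1, 1), (3, 4), {"E": 5, "W": 0, "N": 1, "S": 2}))
```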

Page 40: Interconnect Basics

On-Chip Networks

40

[Figure: 3x3 mesh of routers (R), each attached to a processing element (PE): cores, L2 banks, memory controllers, etc.]

• Connect cores, caches, memory controllers, etc.
  – Buses and crossbars are not scalable
• Packet switched
• 2D mesh: most commonly used topology
• Primarily serve cache misses and memory requests

Page 41: Interconnect Basics

Motivation for Efficient Interconnect

In many-core chips, the on-chip interconnect (NoC) consumes significant power:

Intel Terascale: ~28% of chip power

Intel SCC: ~10%

MIT RAW: ~36%

Recent work[1] uses bufferless deflection routing to reduce power and die area

41

[Figure: tile containing a core, L1 cache, L2 slice, and router]

[1] Moscibroda and Mutlu, "A Case for Bufferless Deflection Routing in On-Chip Networks," ISCA 2009.

