
Providing Performance Guarantees in Multipass Network Processors

Page 1: Providing Performance Guarantees in  Multipass  Network Processors

Providing Performance Guarantees in Multipass Network Processors

Isaac Keslassy (Technion)
Kirill Kogan (Cisco & Ben-Gurion University)
Gabriel Scalosub (Ben-Gurion University)
Michael Segal (Ben-Gurion University)

Page 2: Providing Performance Guarantees in  Multipass  Network Processors

Network Processors (NPs)

• NPs are used in routers for almost everything
  – Forwarding
  – Classification
  – DPI
  – Firewalling
  – Traffic engineering

• Classical NP architectures assume homogeneous tasks

• Demands are increasingly heterogeneous
  – Examples: VPN encryption, LZS decompression, advanced QoS, …

• So what are "classical NP architectures"?

Page 3: Providing Performance Guarantees in  Multipass  Network Processors

Pipelined NP Architecture

• Each packet gets processed by a pipeline of PPEs (packet processing engines)
  – each PPE is in charge of a different task

• Main issues: hard to scale, synchronous, packet-header copying

[Figure: packet header buffer feeding a pipeline of PPEs]

• E.g., Xelerated X11 NP

Page 4: Providing Performance Guarantees in  Multipass  Network Processors

Parallel / Multi-Core NP Architecture

• Each packet is assigned to a single PPE
  – that single PPE performs all of the packet's processing, in "run-to-completion" mode

• Main issue: with run-to-completion, heavy packets can starve light ones

[Figure: packet header buffer feeding several PPEs in parallel]

• E.g., Cavium CN68XX NP

• Hybrid architecture: pipeline + multi-core
  – E.g., EZChip NP-4 NP

Page 5: Providing Performance Guarantees in  Multipass  Network Processors

Multipass NP Architecture

• Packet processing is divided into processing cycles
• Packet headers are recycled into the queue after each processing cycle (sketched below)

• Main benefits:
  – More scalable
  – Asynchronous
  – No run-to-completion (heavy packets do not necessarily starve light ones)

• Main issue: many degrees of freedom (more complex scheduling)

[Figure: packet header buffer feeding PPEs, with headers recycled back into the buffer]

• E.g., Cisco QuantumFlow NP
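
A minimal sketch of the recycling model in Python (the names and the FIFO recycling order are illustrative assumptions, not Cisco's actual scheduler): each header carries its remaining pass count, is processed for one cycle, and is either released or recycled back into the buffer.

    from collections import deque

    def run_multipass(passes_needed):
        """Simulate one PPE with FIFO recycling.
        passes_needed: list of per-packet pass counts.
        Returns packet indices in order of completion."""
        buffer = deque(enumerate(passes_needed))   # packet header buffer
        completed = []
        while buffer:
            idx, remaining = buffer.popleft()
            remaining -= 1                         # one processing cycle on the PPE
            if remaining == 0:
                completed.append(idx)              # done: the packet leaves the NP
            else:
                buffer.append((idx, remaining))    # recycle the header for another pass
        return completed

    # Three packets needing 1, 3, and 2 passes complete in the order 0, 2, 1.
    print(run_multipass([1, 3, 2]))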

Page 6: Providing Performance Guarantees in  Multipass  Network Processors

Scheduling in Multipass NPs

• Packets have heterogeneous demands
  – Each packet might require a different number of cycles

• Designer objective: guarantee a minimum throughput
  – Minimum number of processed packets per second

• Problem: many degrees of freedom
  – Buffer management: FIFO, priority queueing, …? Preemption upon a full buffer?
  – For each packet: which PPEs? In what order?
  – Fairness?
  – No reordering?
  – Heterogeneous PPEs?

Page 7: Providing Performance Guarantees in  Multipass  Network Processors

Assumptions

• Focus on the buffer management policy
  – Efficiently use the full buffer (unit-size packets, slotted time)

• Only 1 PPE
  – Complex enough!

• Each packet needs up to k passes
  – Number of passes known for each packet
  – k used in the analysis, not in the algorithm

• Our goal: competitive worst-case throughput guarantee
  – For any input sequence σ, show that Throughput(σ) ≥ OPT(σ) / c
  – Arbitrary arrival sequences (adversarial…)

Page 8: Providing Performance Guarantees in  Multipass  Network Processors

Buffer Management Policies

[Figure: one PPE served from a buffer holding packets that need 2, 2, 4, 5, 5, 5 passes, plus an arriving packet that needs 1 pass, under two policies]

• PQ (priority queueing): less work = higher priority
• FIFO

(The two service orders are sketched below.)
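
A toy comparison in Python of the two orderings on the slide's example (the helper names are illustrative): FIFO serves the oldest header, while PQ serves the header with the least remaining work.

    def next_under_fifo(buffer):
        """buffer: list of (arrival_order, passes_left); FIFO picks the oldest header."""
        return min(buffer, key=lambda p: p[0])

    def next_under_pq(buffer):
        """PQ picks the header with the least remaining work."""
        return min(buffer, key=lambda p: p[1])

    # Buffer from the slide: packets needing 2, 2, 4, 5, 5, 5 passes, then a 1-pass arrival.
    buf = list(enumerate([2, 2, 4, 5, 5, 5, 1]))
    print(next_under_fifo(buf))   # (0, 2): the oldest packet goes first
    print(next_under_pq(buf))     # (6, 1): the lightest packet jumps ahead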

Page 9: Providing Performance Guarantees in  Multipass  Network Processors

A Case for Preemption
(or: how bad can non-preemption be when the buffer overflows?)

• Assume non-preemption
  – when a packet arrives at a full buffer, it is dropped

• FIFO lower bound
  – a simple traffic pattern shows the competitive ratio is Ω(k)
    (intuitively, a buffer full of k-pass packets can block a long burst of 1-pass packets that an optimal policy would have served)

• PQ lower bound
  – (much) more involved
  – also Ω(k)

• Matching O(k) upper bounds for both

Page 10: Providing Performance Guarantees in  Multipass  Network Processors

What If We Preempt?

Example:

[Figure: a full buffer holds packets that need 2, 2, 4, 5, 5, 5 passes; a new packet that needs 3 passes arrives and could be admitted by preempting one of the heavier packets]

Page 11: Providing Performance Guarantees in  Multipass  Network Processors

What If We Preempt?

• Preemption + PQ = optimal
  – so PQ can serve as a benchmark for optimality

• Preemption + FIFO?
  – not optimal: Ω(log k) lower bound
  – a sublinear-in-k upper bound is still open

Page 12: Providing Performance Guarantees in  Multipass  Network Processors

Are Preemptions Free?

• New packets "cost" more than recycled packets
  – Example: costly memory accesses and system updates (pointers, data structures)

• Copying cost
  – each new packet admitted incurs a cost α ∈ [0, 1)

• Objective:
  – maximize (Throughput – Cost)
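
Reading the objective concretely (a sketch; the counts and α below are made up for illustration): every admitted packet pays the copying cost α, even if it is later preempted and never contributes to throughput.

    def value(transmitted, admitted, alpha):
        """Objective from the slide: throughput minus total copying cost."""
        return transmitted - alpha * admitted

    # 80 packets transmitted out of 100 admitted, with copying cost 0.4:
    print(value(80, 100, 0.4))   # 40.0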

Page 13: Providing Performance Guarantees in  Multipass  Network Processors

Algorithm PQ

• When the buffer is full, accept the new packet only if it needs fewer cycles than (worst buffered packet / β), preempting that worst packet (sketched below)

Example (β = 2):

[Figure: a full buffer holds packets that need 2, 2, 4, 5, 10, 30 passes; a new packet that needs 29 passes is compared against 30 / β = 15]
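
A compact sketch in Python of the admission rule above (function and variable names are invented for illustration): on overflow, the arrival is admitted only if it is at least a factor β lighter than the heaviest buffered packet, which is then preempted.

    def admit(buffer, new_passes, capacity, beta):
        """buffer: list of remaining-pass counts of buffered packets.
        Returns True if the arriving packet is admitted."""
        if len(buffer) < capacity:
            buffer.append(new_passes)          # room left: always admit
            return True
        worst = max(buffer)                    # heaviest buffered packet
        if new_passes < worst / beta:          # arrival is much lighter
            buffer.remove(worst)               # preempt the worst packet
            buffer.append(new_passes)
            return True
        return False                           # otherwise drop the arrival

    # Slide example with beta = 2: the full buffer holds [2, 2, 4, 5, 10, 30];
    # a packet needing 29 passes is rejected, since 29 >= 30 / 2.
    print(admit([2, 2, 4, 5, 10, 30], 29, capacity=6, beta=2))   # False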

Page 14: Providing Performance Guarantees in  Multipass  Network Processors

Algorithm PQ

• Competitive ratio: f(k, α, β)

• Gives some best β for each value of k and α

  (1 − α) · (1 + log_{β/(β−1)}(k/2) + log_β(k))  /  (1 − α · log_β(k))
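
A small numeric search in Python for the "best β", using the competitive-ratio expression as reconstructed above; since that reconstruction is itself an assumption, treat the output as illustrative only.

    import math

    def ratio(k, alpha, beta):
        """Competitive-ratio expression from the slide, as reconstructed above."""
        if beta <= 1:
            return math.inf
        denom = 1 - alpha * math.log(k, beta)
        if denom <= 0:
            return math.inf                    # expression not meaningful here
        num = (1 - alpha) * (1 + math.log(k / 2, beta / (beta - 1)) + math.log(k, beta))
        return num / denom

    def best_beta(k, alpha):
        """Grid-search beta in (1, 20] for the smallest ratio."""
        candidates = [b / 10 for b in range(11, 201)]
        return min(candidates, key=lambda b: ratio(k, alpha, b))

    print(best_beta(k=64, alpha=0.4))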

Page 15: Providing Performance Guarantees in  Multipass  Network Processors

Simulation Results

• Single PPE (C = 1), copying cost α = 0.4
  – Under ON-OFF bursty traffic, PQ is NOT optimal anymore!

[Plot: performance as a function of k]

Page 16: Providing Performance Guarantees in  Multipass  Network Processors

Conclusion

• A model of the multipass NP architecture

• Preemptions help significantly

• When preemptions are not free, achieving optimality becomes more complicated

Page 17: Providing Performance Guarantees in  Multipass  Network Processors

Thank you.

