
Network Algorithms, Lecture 1: Intro and Principles

George Varghese, UCSD

Course Goal

• Network Algorithms: Algorithms for key bottlenecks in a router (lookups, switching, QoS)

• How to attack new bottlenecks: Changing world -> measurement, security

• Algorithmics: Models and Principles. System thinking + Hardware + Algorithms

Different Backgrounds

• Balaji: Stochastic Processes, randomized algorithms, QCN, DCTCP, 2 yr at Cisco (Nuova)

• Tom: Former Cisco VP and Fellow. Architect of Cisco Catalyst and Nexus

• Me: Algorithmic background. DRR, IP lookups. 1 year at Cisco (NetSift)

• All worked on router bottlenecks. Want to reconcile our viewpoints. Fun for you

Course Grading

• 4 Homeworks: 20%
– Posted Weds, due Thurs by 2pm
• Course Project: 80%
– Teams of 2 students
– Ideally, an original idea related to the course leading to a paper; or a detailed review of paper(s) on a topic
– 1-page abstract due by Mon, Apr 18th
– Final project presentation + 5-page report
• Office hours:
– Balaji: Tuesday 10-11
– George: Wednesday 10-11
• TA: Mohammed, introduction

Course Outline

• Lectures 1 to 2: Principles and Models
• Lectures 3 to 5: Lookups
• Lectures 6 to 9: Switching
• Lectures 10 to 11: QoS
• Later lectures: measurement, security, data center switching fabrics, congestion control, etc.

Plan for Rest of Lecture

• Part 1: Warm up example of algorithmic thinking

• Part 2: Ten Principles for Network Algorithms

Warm Up: Scenting an Attack

• Observation: Code buried in, say, a URL shows an unusually high frequency of uncommon characters.

• Goal: Flag such packets for further observation.

Strawman Solution

• Strawman: Increment a count per character and flag the packet if any count exceeds a threshold.

• Problem: Requires a second pass over the count array after the packet.

Oliver Asks for Less

• Idea: Relax the specification to alarm if a count exceeds the threshold relative to the length seen so far, not the total length.

• Problem: Corner cases for small lengths.

Oliver uses Algorithmic Thinking

• Idea: Keep track of the maximum relativized count and compare it to the length L at the end.

• Problem: Still 2 reads and 1 write to memory per character; only 1 read is free (parallelism, wide memories).

Relax Specification: Approximate!

• Idea: Round thresholds up to powers of 2 so the comparison becomes a shift; the scheme has false positives anyway.

• Problem: Counts must be initialized for every packet.

Lazy initialization

• Idea: Use a generation number for lazy initialization of the counts.

• Problem: Generation number wrap (scrub periodically).
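
Putting the warm-up ideas together, here is a minimal C sketch (the array sizes, names, and the single global threshold shift are assumptions, not from the slides): counts are lazily reset with a generation number, and the threshold test is a power-of-2 shift against the length seen so far, so no second pass over the array is needed.

#include <stdint.h>
#include <stdbool.h>

#define ALPHABET     256
#define THRESH_SHIFT 5    /* flag if a count exceeds length/32 (power of 2, so a shift) */

static uint32_t count[ALPHABET];   /* per-character counts                        */
static uint32_t gen[ALPHABET];     /* generation in which count[] was last valid  */
static uint32_t cur_gen;           /* generation number of the current packet     */

/* Returns true if any character exceeds the relative threshold.
 * A real detector would restrict this to "uncommon" characters and
 * periodically scrub gen[] to handle generation-number wrap. */
bool scent_attack(const uint8_t *pkt, uint32_t len)
{
    bool flag = false;

    cur_gen++;                                 /* lazy initialization (P3) */
    for (uint32_t i = 0; i < len; i++) {
        uint8_t c = pkt[i];
        if (gen[c] != cur_gen) {               /* first time c is seen in this packet */
            gen[c] = cur_gen;
            count[c] = 0;
        }
        count[c]++;
        /* Compare against the length seen so far, not the total length,
         * so the test runs as bytes arrive; small lengths are the corner
         * case the slides mention (the shifted threshold is 0 early on). */
        if (count[c] > ((i + 1) >> THRESH_SHIFT))
            flag = true;
    }
    return flag;
}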

Part 2, Ten Principles for Network Algorithms

Summary

• P1: Relax specification for efficiency
• P2: Utilize degrees of freedom
• P3: Shift computation in time
• P4: Avoid waste, seen in a holistic view
• P5: Add state for efficiency
• P6: Exploit hardware parallelism and memories
• P7: Pass information (e.g., hints) in interfaces
• P8: Use finite universe methods (bucket sort, etc.)
• P9: Use algorithmic thinking
• P10: Optimize the expected case

P1: Relax Specifications

IntServ Spec: Serve smallest, requires sorting

DRR: Throughput fair, modify round robin, O(1)

[Figure: Router output queue scheduling, with packets of 1500, 50, and 800 bytes waiting at an output queue]
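
In contrast to the sorted "serve smallest" specification, here is a minimal Deficit Round Robin sketch (the queue and packet structures and the quantum value are assumptions): each backlogged queue earns a quantum of deficit per round and sends packets while the deficit covers them, giving throughput fairness with O(1) work per packet.

#include <stdint.h>
#include <stddef.h>

#define NQUEUES 64
#define QUANTUM 500                 /* bytes of credit added per active queue per round */

struct pkt   { uint32_t len; struct pkt *next; };
struct queue { struct pkt *head; uint32_t deficit; };

static struct queue q[NQUEUES];

/* Stand-in for handing a packet to the output link. */
static void transmit(struct pkt *p) { (void)p; }

/* One round of Deficit Round Robin over all queues. */
void drr_round(void)
{
    for (int i = 0; i < NQUEUES; i++) {
        if (q[i].head == NULL)
            continue;                       /* empty queues earn no credit */
        q[i].deficit += QUANTUM;
        while (q[i].head && q[i].head->len <= q[i].deficit) {
            struct pkt *p = q[i].head;      /* head packet fits in the deficit: send it */
            q[i].head = p->next;
            q[i].deficit -= p->len;
            transmit(p);
        }
        if (q[i].head == NULL)
            q[i].deficit = 0;               /* a drained queue keeps no leftover credit */
    }
}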

Forms of relaxing specification

• Probabilistic: use randomization
– Ethernet, RED
• Approximate:
– RED, TCP round-trip weights
• Shift computation in space: Path MTU

[Figure: Path MTU, with end host E and routers R1, R2 on links of MTU 4000 and 1500; instead of the router fragmenting, the host calculates the minimum segment size]

P2: Utilize degrees of Freedom

• Have you used all information (Polya)?
• TCP: Reduce rate on drop; later, on ECN set
• DCTCP: Use number of ECN bits set in window

[Figure: end host E and routers R1, R2 on 10 G and 1 G links; the ECN bit is passed back and the sender calculates its send rate]
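
As a sketch of the extra degree of freedom DCTCP uses (the update follows the published DCTCP rule; the function framing here is an assumption): the sender tracks the fraction of ECN-marked packets per window in a smoothed estimate alpha and cuts the window in proportion to it, instead of halving on any single mark.

#include <stdint.h>

/* DCTCP-style congestion response: use *how many* ECN marks were seen
 * in the last window, not just whether one was seen. */
static double alpha = 0.0;        /* smoothed fraction of marked packets     */
static const double g = 1.0 / 16; /* EWMA gain, as suggested in the paper    */

uint32_t dctcp_on_window_end(uint32_t cwnd,
                             uint32_t pkts_acked,
                             uint32_t pkts_marked)
{
    double f = pkts_acked ? (double)pkts_marked / pkts_acked : 0.0;
    alpha = (1.0 - g) * alpha + g * f;          /* alpha <- (1-g)*alpha + g*F */
    if (pkts_marked)                            /* scale the cut by alpha     */
        cwnd = (uint32_t)(cwnd * (1.0 - alpha / 2.0));
    return cwnd < 2 ? 2 : cwnd;
}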

P2 in IP Lookups

• Have you used all knobs under your control?

Unibit trie: one bit at a time; P7 = 1000000*, 5 reads

Multibit trie: many bits at a time, 2 reads
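
A minimal multibit-trie lookup sketch with 4-bit strides (the node layout and the controlled prefix expansion into per-slot next hops are assumptions about one common implementation): each memory read consumes several address bits instead of one, so the number of reads drops from the address width W to roughly W divided by the stride.

#include <stdint.h>

#define STRIDE 4                            /* address bits consumed per memory read */

struct mnode {
    struct mnode *child[1 << STRIDE];       /* pointer to the next-level node, or NULL        */
    int           nexthop[1 << STRIDE];     /* precomputed best next hop for this slot, or -1 */
};

/* Longest-prefix match on a 32-bit address: remember the best next hop
 * seen so far and descend one stride (one memory read) per level. */
int mtrie_lookup(const struct mnode *root, uint32_t addr)
{
    int best = -1;
    const struct mnode *n = root;

    for (int shift = 32 - STRIDE; n != 0 && shift >= 0; shift -= STRIDE) {
        uint32_t idx = (addr >> shift) & ((1u << STRIDE) - 1);
        if (n->nexthop[idx] != -1)
            best = n->nexthop[idx];         /* a prefix was expanded into this slot (P3) */
        n = n->child[idx];
    }
    return best;
}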

P2 in Algorithms (keyword search)

[Figure: Boyer-Moore keyword search, matching the pattern "truths" against the text "We hold these truths", last character first]
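
The Boyer-Moore idea the slide illustrates, reduced to the bad-character rule (a Horspool-style sketch, not the full algorithm): compare the window's last character first, and on a mismatch skip ahead by up to the whole pattern length.

#include <string.h>

/* Boyer-Moore-Horspool style search, bad-character rule only.
 * Returns the index of the first match, or -1. */
int bm_search(const char *text, int n, const char *pat, int m)
{
    int skip[256];
    for (int c = 0; c < 256; c++)
        skip[c] = m;                                    /* default: jump the whole pattern  */
    for (int i = 0; i < m - 1; i++)
        skip[(unsigned char)pat[i]] = m - 1 - i;        /* distance from the end of pattern */

    for (int pos = 0; pos + m <= n; pos += skip[(unsigned char)text[pos + m - 1]]) {
        if (memcmp(text + pos, pat, m) == 0)            /* verify the window on a hit */
            return pos;
    }
    return -1;
}

For example, bm_search("We hold these truths", 20, "truths", 6) returns 14 after only a few window checks.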

P3:Shift Computation in Time

• Precomputation: slow update, fast lookup
• Lazy evaluation: counter initialization
• Expense sharing (batching): timing wheels

[Figure: precomputed prefix of 100010, with P7 = 100000* and P6 = 10000*]

P4: Avoid Waste

• When viewing the system holistically
• e.g., packet copies

[Figure: avoiding waste, 2 bits vs. 3 bits]

P5: Add State

• Those who do not learn from history . . .
• BIC, QCN: Remember past high rate

[Figure: the remembered past high rate; start here?]
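
A heavily simplified sketch of the "remember the past high" idea in BIC-style growth (the constants and names are assumptions): the window remembered at the last loss becomes the target, and the window grows by binary search toward it instead of probing blindly from the bottom.

/* BIC-style window growth, greatly simplified (per-RTT updates). */
static double w_max;                 /* remembered past high (P5: add state) */

double on_loss(double cwnd)
{
    w_max = cwnd;                    /* remember where trouble started */
    return cwnd * 0.8;               /* multiplicative decrease        */
}

double on_rtt_without_loss(double cwnd)
{
    if (cwnd < w_max)
        return cwnd + (w_max - cwnd) / 2.0;  /* binary-search toward the past high      */
    return cwnd + 1.0;                       /* beyond it: probe slowly for a new high  */
}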

P6: Leverage Hardware

• Exploit parallelism (memory and processing)
• Leverage different memories: SRAM, DRAM, EDRAM

[Figure: a large set of counters (00000001, 00000101, 00000011); the part touched rarely is kept in slow memory, the part touched often in fast memory]
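
One common way to realize the picture above (a sketch; the widths and names are assumptions): keep a narrow counter per flow in fast memory, and fold it into a wide counter in slow memory only when it wraps, so the per-packet path touches fast memory almost exclusively.

#include <stdint.h>

#define NCOUNTERS  4096
#define SMALL_BITS 8                       /* width of the fast, per-packet counter */

static uint8_t  small[NCOUNTERS];          /* fast memory (SRAM): touched on every packet */
static uint64_t big[NCOUNTERS];            /* slow memory (DRAM): touched only on wrap    */

/* Per-packet path: one fast-memory read-modify-write in the common case. */
void count_packet(uint32_t idx)
{
    if (++small[idx] == 0)                 /* the narrow counter wrapped          */
        big[idx] += 1u << SMALL_BITS;      /* fold the wrap into slow memory (rare) */
}

/* Control-plane read: combine both halves. */
uint64_t read_counter(uint32_t idx)
{
    return big[idx] + small[idx];
}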

P7: Pass information across layers

• Do not hide information
• Pass “hints”

[Figure: bypassing the kernel in VIA; add index to list]

P8: Finite Universe Techniques

• Bit maps: save space and time by HW parallelism
• Bucket sorting: skipping over empty elements?

[Figure: timer queue holding timers 11, 12, 13, and 14]

P8: Finite Universe Techniques

• Cost of skipping empty buckets shared with incrementing current time. (P2)

[Figure: timer wheel with current time = 10 and timers 11, 12, 13, and 14 in their buckets]
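
A minimal timer-wheel sketch (the wheel size and structures are assumptions, and timers are assumed to expire within one wheel revolution): timers are bucket-sorted by expiry tick, and visiting the next bucket is folded into the per-tick advance of current time, so empty buckets cost nothing extra.

#include <stdint.h>
#include <stddef.h>

#define WHEEL_SIZE 256                      /* must exceed the maximum timer interval, in ticks */

struct timer {
    void (*fire)(void *arg);
    void *arg;
    struct timer *next;
};

static struct timer *wheel[WHEEL_SIZE];     /* one bucket of timers per future tick */
static uint32_t now;                        /* current time, in ticks               */

/* Schedule t to fire 'delay' ticks from now (delay < WHEEL_SIZE). */
void timer_add(struct timer *t, uint32_t delay)
{
    uint32_t slot = (now + delay) % WHEEL_SIZE;
    t->next = wheel[slot];
    wheel[slot] = t;
}

/* Called once per clock tick: advancing 'now' and emptying its bucket
 * share the same step, so skipping empty buckets is effectively free. */
void timer_tick(void)
{
    now++;
    struct timer *t = wheel[now % WHEEL_SIZE];
    wheel[now % WHEEL_SIZE] = NULL;
    while (t) {
        struct timer *next = t->next;
        t->fire(t->arg);                    /* expire every timer in this bucket */
        t = next;
    }
}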

P9: Use Algorithmic Ideas

• Not blindly reusing algorithms but using ideas
• Binary search on hash tables:

[Figure: binary search on hash tables over prefix lengths 1 to 8, with prefixes such as 01*, 101*, 10000000*, and 11100000*; start here or here?]
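
A sketch of binary search on prefix lengths in the spirit of the figure (the per-length hash tables are represented by a stub probe that always misses; entries, including precomputed markers, are assumed to carry the next hop of their best matching prefix): instead of trying all W lengths, each hash probe halves the range of lengths still in play.

#include <stdint.h>
#include <stdbool.h>

#define W 32  /* address width = number of possible prefix lengths */

/* Stand-in for the per-length hash tables: does the table for length 'len'
 * contain the first 'len' bits of addr?  Each stored entry (a real prefix or
 * a precomputed marker) carries the next hop of its best matching prefix.
 * A real implementation would consult the hash table; this stub always misses. */
static bool probe(int len, uint32_t prefix, int *bmp_nexthop)
{
    (void)len; (void)prefix; (void)bmp_nexthop;
    return false;
}

/* Binary search over prefix lengths instead of probing all W of them. */
int lpm_binary_search(uint32_t addr)
{
    int lo = 1, hi = W, best = -1;
    while (lo <= hi) {
        int mid = (lo + hi) / 2;
        uint32_t prefix = (mid == 32) ? addr : (addr >> (32 - mid));
        int nh;
        if (probe(mid, prefix, &nh)) {
            best = nh;      /* entry carries the best match so far (P3: precompute markers) */
            lo = mid + 1;   /* a hit (or marker) means a longer match may exist             */
        } else {
            hi = mid - 1;   /* nothing at this length: any match must be shorter            */
        }
    }
    return best;
}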

P10: Optimize Expected Case

• Mostly worst case, sometimes expected case
• Optimization should not be caught by testing!

[Figure: a router with a cache mapping destination D to M; traffic to D hits the cache, traffic to E does not]

Instead of maintaining separate HW threads for cached and uncached, drop uncached!
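
A sketch of the expected-case fast path with a small lookup cache (full_lookup is a stub stand-in for the full table lookup; the cache layout is an assumption): the common destination hits the cache in one fast access, while the rare miss takes the slow path, or, as the slide notes, hardware may simply drop it and let the slow path refill the cache.

#include <stdint.h>
#include <stdbool.h>

#define CACHE_SLOTS 16

struct entry { uint32_t key; int nexthop; bool valid; };
static struct entry cache[CACHE_SLOTS];

/* Stand-in for the full (slow) lookup; a real router would consult the FIB. */
static int full_lookup(uint32_t dst) { (void)dst; return -1; }

int fast_lookup(uint32_t dst)
{
    struct entry *e = &cache[dst % CACHE_SLOTS];
    if (e->valid && e->key == dst)
        return e->nexthop;                  /* expected case: one fast-memory access */
    /* Rare case: take the slow path and refill the cache slot.
     * A hardware design might instead drop the packet here, as the slide suggests. */
    e->key = dst;
    e->nexthop = full_lookup(dst);
    e->valid = true;
    return e->nexthop;
}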

Summary

• P1: Relax specification for efficiency
• P2: Utilize degrees of freedom
• P3: Shift computation in time
• P4: Avoid waste, seen in a holistic view
• P5: Add state for efficiency
• P6: Exploit hardware parallelism and memories
• P7: Pass information (e.g., hints) in interfaces
• P8: Use finite universe methods (bucket sort, etc.)
• P9: Use algorithmic thinking
• P10: Optimize the expected case

Summing up

• Course: router bottlenecks/network algorithms
• Not just a recipe book but a way of thinking
• Will see principles in action in later lectures
• Next lecture: hardware models by Tom

