
1

Today’s Objectives

Announcements
• Hand in Homework #1
• Homework #2 was posted on the Schedule last weekend; it is due on 28-Feb

Analysis tools – Chapter 4 (Section 4.3 will not be on the exam)
• Measuring running time
• Asymptotic analysis
• Pseudocode and counting primitive operations
• Simplified steps for analyzing algorithms
• Best, worst, and average cases
• Big-Oh, Big-Omega, Big-Theta

Stacks and Queues – Chapter 5.1 and 5.2
• Stack = a LIFO data structure
• Queue = a FIFO data structure
• Implementing a queue that holds data in a circular array

Week 5

2

Analysis Tools

3

Data Structures & Algorithms

Good data structures and algorithms are necessary for efficiency and speed in our applications

What makes a data structure or algorithm good? How do we measure them?

• Easy to write? Short? Fast? Efficient?

Analysis Tools (Goodrich, 108)

4


The important measures:
1. Fast running time
2. Efficient space usage

Analysis Tools (Goodrich, 108)

5

Food for Thought

Scenario description

Analysis Tools (Handout; Eisner, email note)

6

Running Time

The most important measure
• Absolute running time depends on the computer, the language, the compiler, the operating system, etc.
• Running time increases when the input size (n) increases

Evaluating running time
• One possible approach is to measure the running time experimentally
• Use the same platform when testing different algorithms
• Use inputs of different sizes

Analysis Tools (Goodrich, 109–110)

7

Measuring Running Time Experimentally

• Test with inputs of different sizes
• Plot the running time f(n) vs. input size (n)

[Figure: plot of running time f(n) versus input size n]

Analysis Tools (Goodrich, 109–110)

8

Experimental Results 1

[Figure: running time f(n) in seconds versus input size n, from 10,000,000 to 16,000,000, measured on a Pentium III, 866 MHz and a Pentium 4, 2.4 GHz]

Testing push() with ArrayStack

Analysis Tools

9

Lessons Learned 1

We cannot provide an exact running time for the algorithm because the results are different on different computers

The running time increases as the input size increases

Analysis Tools

10

Experimental Results 2

[Figure: running time f(n) in seconds versus input size n, from 10,000,000 to 16,000,000, measured on a Pentium III, 866 MHz and a Pentium 4, 2.4 GHz]

Testing push() with NodeStack and ArrayStack

Analysis Tools

11

Lessons Learned 2

The data structure that we choose for our algorithm can have a big influence on the algorithm’s speed

A stack implemented with an array is much faster than a stack implemented with a linked list of Nodes

Analysis Tools

12

Drawbacks of Experimental Measurement

Results may be different for input sets that were not tested

Tests may not have used the same hardware or software, so we may not be able to make good comparisons

We have to code the algorithm before we can test it

Analysis Tools (Goodrich, 110)

13

General Analysis Method

Find a function f that describes the algorithm’s running time as a function of the input size n

Advantages
• All possible inputs are considered
• Characterizes running time as a function of the input size n
• Independent of the hardware or software environment
• Can be used to evaluate an algorithm without having to code it

Analysis Tools (Goodrich, 110)

14

Asymptotic Analysis

Developed by computer scientists for analyzing the running time of algorithms

Describes the running time of an algorithm in terms of the size of the input – in other words, how well does it “scale” as the input gets larger

The most common metric used is:
• The upper bound of an algorithm
• Called the Big-Oh of an algorithm, O(n)

Analysis Tools (Goodrich, 166)

15

Algorithm Analysis

Use asymptotic analysis

Assume that we are using an idealized computer called the Random Access Machine (RAM)
• A CPU
• An unlimited amount of memory
• Accessing a memory cell takes one unit of time
• Primitive operations take one unit of time

Perform the analysis on the pseudocode

Analysis Tools (Goodrich web page)

16

Pseudocode

• High-level description of an algorithm
• Structured
• Not as detailed as a program
• Easier to understand (written for a human)

Algorithm arrayMax(A, n):
  Input: An array A storing n >= 1 integers.
  Output: The maximum element in A.

  currentMax ← A[0]
  for i ← 1 to n – 1 do
    if currentMax < A[i] then
      currentMax ← A[i]
  return currentMax
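For comparison, a minimal Java version of the same algorithm might look like this (an illustrative sketch, not taken from the text; the method name and signature are assumptions):

/** Returns the maximum element of A; assumes A holds at least one value. */
public static int arrayMax(int[] A) {
    int currentMax = A[0];                 // one array access, one assignment
    for (int i = 1; i < A.length; i++) {   // the loop body runs n - 1 times
        if (currentMax < A[i]) {           // compare the current maximum to each element
            currentMax = A[i];
        }
    }
    return currentMax;                     // one return
}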

Analysis Tools (Goodrich, 48, 166)

17

Pseudocode Details

Declaration
  Algorithm functionName(arg, …)
    Input: …
    Output: …

Control flow
• if … then … [else …]
• while … do …
• repeat … until …
• for … do …
• Indentation instead of braces

Return value
  return expression

Expressions
• ← Assignment (like = in Java)
• = Equality testing (like == in Java)

Analysis Tools (Goodrich, 48)

18


Tips for writing an algorithm
• Use the correct data structure
• Use the correct ADT operations
• Use object-oriented syntax
• Indent clearly
• Use a return statement at the end

19

Primitive Operations

To do asymptotic analysis, we can start by counting the primitive operations in an algorithm and adding them up

Primitive operations that we assume will take a constant amount of time:
• Assigning a value to a variable
• Calling a function
• Performing an arithmetic operation
• Comparing two numbers
• Indexing into an array
• Returning from a function
• Following a pointer reference

Analysis Tools (Goodrich, 164–165)

20

Counting Primitive Operations

Inspect the pseudocode to count the primitive operations as a function of the input size (n)

Algorithm arrayMax(A, n):
  currentMax ← A[0]
  for i ← 1 to n – 1 do
    if currentMax < A[i] then
      currentMax ← A[i]
  return currentMax

Operation                                          Count
Array indexing + assignment (currentMax ← A[0])    2
Initializing i ← 1                                 1
Verifying i < n                                    n
Array indexing + comparing                         2(n – 1)
Array indexing + assignment (worst case only)      2(n – 1)
Incrementing the counter                           2(n – 1)
Returning                                          1

Best case: 2 + 1 + n + 4(n – 1) + 1 = 5n
Worst case: 2 + 1 + n + 6(n – 1) + 1 = 7n – 2

Analysis Tools (Goodrich, 166)

21

Best, Worst, or Average Case

Fig. 4.4, Goodrich, p. 165

A program may run faster on some input data than on others

Best case, worst case, and average case are terms that characterize the input data

• Best case – the data is distributed so that the algorithm runs fastest

• Worst case – the data distribution causes the slowest running time

• Average case – very difficult to calculate

We will concentrate on analyzing algorithms by identifying the running time for the worst case data

Analysis Tools (Goodrich, 165)

22

Estimating the Running Time of arrayMax

Worst case running time of arrayMax

f(n) = (7n – 2) primitive operations

Actual running time depends on the speed of the primitive operations—some of them are faster than others

• Let t = the time taken by the slowest primitive operation
• Let f(n) = the worst-case running time of arrayMax
  f(n) = t · (7n – 2)

Analysis Tools (Goodrich, 166)

23

Growth Rate of arrayMax

Growth rate of arrayMax is linear

Changing the hardware alters the value of t, so that arrayMax will run faster on a faster computer

Growth rate is still linear

[Figure: f(n) versus n for three machines – Slow PC: 10(7n – 2), Fast PC: 5(7n – 2), Fastest PC: 1(7n – 2); all three are straight lines, so the growth rate is linear in every case]

Analysis Tools (Goodrich, 166)

24

Tests of arrayMax

Does it really work that way?

Used arrayMax algorithm from p.166

Tested on two PCs

Yes – the results from both machines show a linear growth rate

[Figure: running time f(n) in seconds versus n, from 0 to 50,000,000, for a Pentium 4, 2.4 GHz and a Pentium III, 866 MHz; both plots are straight lines]

Analysis Tools

25

Big-Oh

Developed by computer scientists for analyzing the running time of algorithms

Describes the running time in terms of the size of the input n

• In other words, how well does it “scale” as the input gets very large

Upper bound of the growth of an algorithm’s running time

Algorithm Analysis (Goodrich, 167)

26

Growth Rates of Common Classes of Functions

[Figure: running time f(n) versus input size n for the common classes – Constant O(1), Logarithmic O(log n), Linear O(n), Quadratic O(n^2), Exponential O(2^n)]

Analysis Tools (Goodrich, 161)

27

Growth Rate Examples

Efficiency               Example
Constant O(1)            Accessing an element of an array
Logarithmic O(log n)     Binary search
Linear O(n)              Pushing a collection of elements onto a stack
Quadratic O(n^2)         Bubble sort
Exponential O(2^n)       Towers of Hanoi (Goodrich, 198)
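As a concrete illustration of the logarithmic row, here is a minimal Java sketch of binary search over a sorted array (illustrative only, not from the text); each pass discards half of the remaining range, so roughly log2 n passes suffice:

/** Returns the index of key in the sorted array a, or -1 if key is absent. */
public static int binarySearch(int[] a, int key) {
    int lo = 0, hi = a.length - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;     // midpoint, written to avoid overflow
        if (a[mid] == key) return mid;
        if (a[mid] < key) lo = mid + 1;   // discard the lower half
        else hi = mid - 1;                // discard the upper half
    }
    return -1;
}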

Analysis Tools

28

The Importance of Growth Rates

Assuming that it takes 0.4 nsec to process one element:

n            Constant O(1)   Logarithmic O(log n)   Linear O(n)   Quadratic O(n^2)
10           0.4 nsec        1.33 nsec              4.0 nsec      40.0 nsec
1,000        0.4 nsec        3.99 nsec              0.4 µsec      0.4 msec
100,000      0.4 nsec        6.64 nsec              0.04 msec     4.0 sec
10,000,000   0.4 nsec        9.30 nsec              0.004 sec     11.1 hr

Analysis Tools (Goodrich, 1719)

29

Big-Oh Notation

Definition: Let f(n) and g(n) be functions mapping nonnegative integers to real numbers.

f(n) is O(g(n)) if there is a real constant c > 0 and an integer constant n0 ≥ 1, such that f(n) ≤ c·g(n) for every integer n ≥ n0

Analysis Tools (Goodrich, 167)

30

Big-Oh Example #1

Running time of arrayMax is O(n): 7n – 2 is O(n)

Justify using the Big-Oh definition by identifying constants c and n0.

Let
• f(n) = 7n – 2
• g(n) = n

1. Real constant c, such that f(n) ≤ c·g(n)
   If we pick c = 7, then 7n – 2 ≤ cn is satisfied, so f(n) ≤ c·g(n) is true.
   In other words, f(n) is proportional to g(n) and the value of f(n) will never exceed c·g(n).

2. Integer constant n0, with n0 ≥ 1, such that f(n) ≤ c·g(n) for all n ≥ n0
   In other words, the bound f(n) ≤ c·g(n) only has to hold once n is big enough; “big enough” is defined by n0.
   If we pick n0 = 1, then f(n) ≤ c·g(n) is true for all integers n ≥ n0.

[Figure: plots of f(n) = 7n – 2, g(n) = n, and c·g(n) = 7n for n from 0 to 8; the line for f(n) never rises above the line for c·g(n) = 7n]

Analysis Tools

37

Big-Oh Example #2

2n + 10 is O(n)

Big-Oh definition: Given functions f(n) and g(n), we say f(n) is O(g(n)) if there are positive constants c and n0 such that f(n) ≤ c·g(n) for all n ≥ n0

Let f(n) = 2n + 10 and g(n) = n
  2n + 10 ≤ cn
  10 ≤ n(c – 2)
  10/(c – 2) ≤ n
Pick c = 3 and n0 = 10

[Figure: plots of f(n) = 2n + 10, g(n) = n, and c·g(n) = 3n for n from 0 to 50; f(n) stays at or below 3n once n ≥ n0 = 10]

Analysis Tools

38

More About Asymptotic Analysis

Simplify the count to get the Big-Oh:

1. Drop all lower-order terms: 7n – 2 → 7n
2. Eliminate constants: 7n → n
3. The remaining term is the Big-Oh: 7n – 2 is O(n)

Analysis Tools (Goodrich, 168–169)

39

Another Example

Example: f(n) = 5n^3 – 2n^2 + 1

1. Drop all lower-order terms: 5n^3 – 2n^2 + 1 → 5n^3

2. Eliminate the constants: 5n^3 → n^3

3. The remaining term is the Big-Oh: f(n) is O(n^3)

Analysis Tools (Goodrich, 168–169)

40

Using the Big-Oh Notation

The correct way to use Big-Oh notation is to say f(n) is O(g(n))

• NOT: f(n) O(g(n))
• NOT: f(n) = O(g(n))

Since Big-Oh describes an upper bound, we can say that 7n – 2 is O(n^2)

It is also true if we say that 7n – 2 is O(7n – 2)

However, neither of these statements is considered good practice; it is customary to pick the smallest (tightest) order when describing the Big-Oh

Analysis Tools (Goodrich, 168–169)

41

Rules for Simplifying

The most important rules:
• We can drop the constants
• In a polynomial, the term with the highest degree establishes the Big-Oh
• The sum rule: for sequential loops, add their Big-Oh values
• For nested loops, multiply their Big-Oh values

Analysis Tools

42

Finding the Big-Oh

We do not need to use the Big-Oh definition to find the Big-Oh

We do not need to count all the primitive operations; we need to count only the operations that have the greatest effect on the growth rate of the running time – that usually means the loops.

In Big-Oh analysis, we focus on the “big picture”

for (int i = 0; i < n; ++i) {   // the loop body runs n times, so this loop is O(n)
    myArray[i] = 0;
}

Analysis Tools

43

Steps for Analyzing an Algorithm

1. Locate the loops
   a) Determine the Big-Oh by counting the number of iterations in terms of n
   b) Sequential loops – add their Big-Oh values
   c) Nested loops – start analyzing with the inner loop, then multiply their Big-Oh values
2. Are there branch statements (e.g., if…else)?
   • Use the running time of the biggest branch
3. If the result is a polynomial, the term with the highest degree establishes the Big-Oh

Algorithm Analysis

44

Finding the Big-Oh

for (int i = 0; i < n; ++i) {        // a single loop: n iterations → O(n)
    myArray[i] = 0;
}

for (int i = 0; i < n; ++i) {        // outer loop: n iterations
    for (int j = 0; j < n; ++j) {    // inner loop: n iterations per outer pass
        myArray[i][j] = 0;           // nested loops → n · n = O(n^2)
    }
}

Algorithm Analysis

45

Finding the Big-Oh

sum = 0;
for (int i = 1; i <= n; ++i) {
    for (int j = 1; j <= i; ++j) {   // the inner loop runs i times
        ++sum;
    }
}
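A short worked count (not on the slide): the inner loop body runs 1 + 2 + … + n = n(n + 1)/2 times in total, which is (1/2)n^2 + (1/2)n, so this fragment is O(n^2).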

Analysis Tools

46

PrefixAverages Example

The problem:
• Given an array X that stores numbers
• Compute the numbers in array A
• Each number A[i] is the average of the numbers in X from X[0] to X[i]

Algorithm Analysis (Goodrich, 174)

X: 1 3 5 7 9
A: 1 2 3 4 5

For example, A[2] = (X[0] + X[1] + X[2]) / 3 = (1 + 3 + 5) / 3 = 9 / 3 = 3

47

Quadratic Time Solution

Algorithm prefixAverages1
  …
  for i ← 0 to n – 1 do
    a ← 0
    for j ← 0 to i do
      a ← a + X[j]
    A[i] ← a / (i + 1)
  …

Two nested loops

Algorithm Analysis (Goodrich, 174)

Inner loop – loops through X, adding the numbers from element 0 through element i

Outer loop – loops through A, calculating the averages and putting the result into A[i]
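A Java sketch of the quadratic version (illustrative only; the method name and types are assumptions, not the book’s code):

/** Quadratic-time prefix averages: re-sums the prefix X[0..i] for every i. */
public static double[] prefixAverages1(double[] X) {
    int n = X.length;
    double[] A = new double[n];
    for (int i = 0; i < n; i++) {        // outer loop: n iterations
        double a = 0;
        for (int j = 0; j <= i; j++) {   // inner loop: i + 1 iterations
            a += X[j];
        }
        A[i] = a / (i + 1);
    }
    return A;                            // total work ≈ 1 + 2 + … + n, so O(n^2)
}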

48

Algorithm prefixAverages2
  …
  s ← 0
  for i ← 0 to n – 1 do
    s ← s + X[i]
    A[i] ← s / (i + 1)
  …

Only one loop

Linear Time Solution

Algorithm Analysis (Goodrich, 175)

Sum (s) – keeps track of the sum of the numbers in X so that we don’t have to loop through X every time

Loop – loops through A, adding to the sum, calculating the averages, and putting the result into A[i]
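The corresponding linear-time sketch in Java (again illustrative, not the book’s code):

/** Linear-time prefix averages: maintains a running sum instead of re-summing. */
public static double[] prefixAverages2(double[] X) {
    int n = X.length;
    double[] A = new double[n];
    double s = 0;                  // running sum of X[0..i]
    for (int i = 0; i < n; i++) {  // a single loop: n iterations → O(n)
        s += X[i];
        A[i] = s / (i + 1);
    }
    return A;
}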

49

1. Both algorithms correctly solved the problem

• Lesson – There may be more than one way to write your program.

2. One of the algorithms was significantly faster

• Lesson – The algorithm that we choose can have a big influence on the program’s speed.

Evaluate the solution that you pick, and ask whether it is the most efficient way to do it.

Lessons Learned

Algorithm Analysis

50

Best, Worst, and Average Cases

Fig. 4.4, Goodrich, p. 165

A program may run faster on some inputs than on others
• Best case = the data allows the algorithm to work as fast as it can
• Worst case = the data causes the algorithm to perform poorly
• Not the same as upper bound and lower bound

Algorithms can have different complexities, depending on the data that they work with

Examples:
• “The running time of this algorithm is O(n log n) in the best case”
• “The running time of this algorithm is O(n^2) in the worst case”

Algorithm Analysis (Goodrich, 122)

51

“Relatives” of Big-Oh

Big-Oh: f(n) is O(g(n)) if f(n) ≤ c·g(n) for every integer n ≥ n0

Upper bound – f(n) is less than or equal to c·g(n)

Big-Omega: f(n) is Ω(g(n)) if f(n) ≥ c·g(n) for every integer n ≥ n0

Lower bound – f(n) is greater than or equal to c·g(n)

Big-Theta: f(n) is Θ(g(n)) if c′·g(n) ≤ f(n) ≤ c″·g(n) for every integer n ≥ n0

– The two functions are asymptotically equal
– An exact specification of the growth of the running time

Algorithm Analysis (Goodrich, 170)

52

“Relatives” of Big-Oh Example

f(n) = 2n + 10, g(n) = n

f(n) ≤ c″·g(n) when c″ = 3, so the upper bound is O(n)

f(n) ≥ c′·g(n) when c′ = 2, so the lower bound is Ω(n)

Since f(n) is both O(n) and Ω(n), f(n) is Θ(n)

[Figure: plots of f(n) = 2n + 10, g(n) = n, c′·g(n) = 2n, and c″·g(n) = 3n for n from 0 to 50, with n0 = 10 marked; f(n) lies between 2n and 3n for every n ≥ 10]

Algorithm Analysis

53

Big-Omega

Big-Omega: f(n) is Ω(g(n)) if f(n) ≥ c·g(n) for every integer n ≥ n0

Lower Bound – f(n) is greater than or equal to cg(n)

We are usually only interested in the lower bound for a given problem, not a specific algorithm

• If we can prove that there is a bound where the performance cannot improve, then we can stop seeking a better algorithm

• Example
  – Problem: “How fast is it possible to sort?”
  – Answer: The lower bound for sorting algorithms that rely on comparing key values is Ω(n log n)
  – Since Bubble Sort is O(n^2), we know we can do better
  – Insertion Sort is O(n log n) in the best case, but still O(n^2) in the worst case
  – Heap Sort is O(n log n) in the worst case, so we know we cannot get better performance from a sort that compares key values

Analysis Tools (Goodrich, 170, Sedgewick, 62)

54

More “Relatives” of Big-Oh

little-oh
• f(n) is o(g(n)) if f(n) is strictly less than g(n) asymptotically
• In other words, f(n) is not Ω(g(n))

little-omega
• f(n) is ω(g(n)) if f(n) is strictly greater than g(n) asymptotically
• In other words, f(n) is not O(g(n))

Not used very much

Algorithm Analysis

55

Stack

LIFO data structure

56

Stack

Container that stores data elements that are inserted or removed in LIFO order (last-in first-out)

• How are the data related?
  – Sequential (the sequence is determined by the insertion order)
• Operations
  – push, pop, top, size, isEmpty

Stacks (Goodrich, 188)

57

Some Uses of Stacks

Backtracking
Examples
• Storing addresses in a Web browser’s history
• Implementing an “undo” feature by storing changes
• The run-time stack

Reversing data
Example
• Solving the palindrome problem

Parsing
Examples
• Checking for matching brackets in a mathematical expression, e.g. “((3n+1)*4)” (see the sketch below)
• Checking whether the elements in an XML document are properly nested
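A minimal Java sketch of the bracket-matching idea (illustrative only, not from the text): openers are pushed, and each closer must match the opener on top of the stack.

import java.util.ArrayDeque;
import java.util.Deque;

public static boolean bracketsMatch(String expr) {
    Deque<Character> stack = new ArrayDeque<>();
    for (char c : expr.toCharArray()) {
        if (c == '(' || c == '[' || c == '{') {
            stack.push(c);                      // remember the opener
        } else if (c == ')' || c == ']' || c == '}') {
            if (stack.isEmpty()) return false;  // closer with no matching opener
            char open = stack.pop();
            boolean ok = (open == '(' && c == ')')
                      || (open == '[' && c == ']')
                      || (open == '{' && c == '}');
            if (!ok) return false;              // wrong kind of closer
        }
    }
    return stack.isEmpty();                     // leftover openers mean a mismatch
}

For example, bracketsMatch("((3n+1)*4)") returns true, while bracketsMatch("((3n+1)*4") returns false.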

Stacks (Goodrich, 188)

58

Efficiency of a Stack

When implemented as an array:

Stacks (Goodrich, 194–195)

Method              Implementation                                          Big-Oh
E pop()             E temp = myData[topIndex]; topIndex--; return temp;     O(1)
void push(E e)      topIndex++; myData[topIndex] = e;                       O(1)
E top()             return myData[topIndex];                                O(1)
boolean isEmpty()   return (topIndex < 0);                                  O(1)
int size()          return (topIndex + 1);                                  O(1)

(topIndex is the index of the current top element.)
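Pulling those snippets together, a minimal array-backed stack might look like this in Java (a sketch under the same conventions, not the book’s ArrayStack; overflow and empty-stack checks are omitted for brevity):

/** Minimal array-backed stack; topIndex is the index of the current top element. */
public class SimpleArrayStack<E> {
    private final E[] myData;
    private int topIndex = -1;             // -1 means the stack is empty

    @SuppressWarnings("unchecked")
    public SimpleArrayStack(int capacity) {
        myData = (E[]) new Object[capacity];
    }

    public void push(E e)    { myData[++topIndex] = e; }    // O(1)
    public E pop()           { return myData[topIndex--]; } // O(1)
    public E top()           { return myData[topIndex]; }   // O(1)
    public boolean isEmpty() { return topIndex < 0; }       // O(1)
    public int size()        { return topIndex + 1; }       // O(1)
}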

59

The Run-Time Stack

An important use for a stack

Every Java program has its own stack called the run-time stack

The run-time stack is used to keep track of the details when a function is invoked

• Details are stored in a “stack frame” that is pushed onto the run-time stack

• When the function finishes, the stack frame is popped off the run-time stack and execution “backtracks”

Stacks

60

Run-Time Stack Example

1. Program execution in main() proceeds one instruction at a time

2. When a function call is made
   • main() is suspended, and program execution goes to the function
   • A “stack frame” is pushed on the run-time stack to keep track of
     – Arguments passed to the function
     – Local variables in main
     – The return location
     – The return value

3. When the function is finished
   • The stack frame is popped off the run-time stack
   • Its contents are used to return to the previous state in main

1  public class RuntimeStackExample {
2      public static void main(String args[]) {
3          int z = 0, x = 0;
4          int y = 1;
5          x += y;
6          y += x + 1;
7          z = max(x, y);
8          System.out.println(z);
9      }
10     static int max(int a, int b) {
11         return (a > b ? a : b);
12     }
13 }

Stacks

[Figure: when execution reaches line 7, x = 1 and y = 3, so max(1, 3) is called; the run-time stack holds a frame containing – Args: 1, 3; Vars: x, y, z; Return to: line 7; Return value: 3]

61

Queue

FIFO data structure

62

Queue

Container that stores data elements that are inserted or removed in FIFO order (first-in first-out)

(Like a line of people waiting for a theater ticket.)

• How are the data related?
  – Sequential (the sequence is determined by the insertion order)
• Operations
  – enqueue = inserts an element at the rear of the queue
  – dequeue = removes an element from the front of the queue
  – front = returns a reference to the element at the front
  – size
  – isEmpty

Queues (Goodrich, 204)

63

Behavior of the Queue

Queues (Goodrich, 205)

Queue<Integer> q = new ArrayQueue<Integer>(5);

Operation        Queue contents (front → rear)
q.enqueue(5);    5
q.enqueue(3);    5 3
q.enqueue(9);    5 3 9
q.dequeue();     3 9
q.enqueue(7);    3 9 7

64

Some Uses of Queues

How do operating systems use queues?
Holding jobs in order until they can be serviced
Examples
– Print job spooler
– Process scheduling

How do customer service applications use queues?
Holding customer transactions in the order that they were submitted until they can be processed
Example
– Holding transactions in an online airline reservation system in the order that they were submitted

How do simulations use queues?
A queue can be used in a computerized model to simulate the arrival of customers and the customer processing time
Example
– Modeling a grocery business to decide whether to expand the number of checkout lanes

Queues can be used to categorize data
Rearranging data without destroying the sequence
Example
– Separating runners into age groups at the finish line, without destroying their order of arrival

Queues (Goodrich, 205)

65

Categorizing Data

Using a queue to rearrange data without destroying their sequence

Queues

[Figure: runners crossing the finish line are routed into separate queues, one per age group (18–29, 30–39, 40–49, and other age groups), each queue preserving arrival order]

66

Array Implementation of a Queue

Queue<Integer> q = new ArrayQueue<Integer>(10);

int f = 0, r = 0;   // front and rear indices into the array Q[0..9]; the queue is empty when f == r

q.enqueue(5);   // Q[r] = 5; ++r;                        Q: 5 . . . …        f = 0, r = 1
q.enqueue(3);   // Q[r] = 3; ++r;                        Q: 5 3 . . …        f = 0, r = 2
q.enqueue(9);   // Q[r] = 9; ++r;                        Q: 5 3 9 . …        f = 0, r = 3
q.dequeue();    // int temp = Q[f]; ++f; return temp;    returns 5           f = 1, r = 3

Queues (Goodrich, 206–209)

71

Array Implementation of a Queue

When the rear index reaches the end of the array, it wraps around to index 0, so the array is used circularly:

q.enqueue(0);   // Q[r] = 0; r = (r + 1) % cap;   (cap = 10)
q.enqueue(8);   // Q[r] = 8; r = (r + 1) % cap;
q.enqueue(1);   // Q[r] = 1; r = (r + 1) % cap;
q.enqueue(0);   // Q[r] = 0; r = (r + 1) % cap;
q.dequeue();    // temp = Q[f]; f = (f + 1) % cap; return temp;

[Figure: array snapshots showing the rear index r wrapping from the end of the array back to the low indices while the front index f continues to advance]

Queues (Goodrich, 206–209)
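A quick arithmetic check of the wrap-around (not on the slides): with cap = 10, if r is 9 then r = (9 + 1) % 10 = 0, so the next element is written at index 0; if r is 4 then (4 + 1) % 10 = 5, so the modulo has no effect until the end of the array is reached.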

75

Array Implementation of a Queue

[Figure: array of capacity 10 holding several elements, with the front index f and the rear index r marked]

q.size();

return (capacity – f + r) % capacity;
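A worked instance (the index values are assumed for illustration): with capacity = 10, f = 2, and r = 7, size = (10 – 2 + 7) % 10 = 15 % 10 = 5; if the rear has wrapped around, say f = 7 and r = 2, size = (10 – 7 + 2) % 10 = 5 as well.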

Queues (Goodrich, 206–209)

76

Array Implementation of a Queue

[Figure: the same array after the rear index has wrapped around the end of the array, with f and r marked]

q.size();

return ?

Queues (Goodrich, 206–209)

77

Array Implementation of a Queue

Can we enqueue another int?

Why or why not?

How do we test the queue to find out if there is space available?

[Figure: array of capacity 10 in which every cell appears to hold an element, with f and r adjacent]

Queues (Goodrich, 206–209)

78

Efficiency of a Queue

When implemented as an array:

Method              Implementation                                           Big-Oh
E dequeue()         int temp = Q[f]; f = (f + 1) % capacity; return temp;    O(1)
void enqueue(E e)   Q[r] = e; r = (r + 1) % capacity;                        O(1)
E first()           return Q[f];                                             O(1)
boolean isEmpty()   return (f == r);                                         O(1)
int size()          return (capacity – f + r) % capacity;                    O(1)
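Putting those pieces together, a minimal circular-array queue might look like this in Java (a sketch, not the book’s ArrayQueue; it follows the common convention of leaving one slot unused so that f == r always means “empty”):

/** Minimal circular-array queue; one slot is kept unused to distinguish full from empty. */
public class SimpleArrayQueue<E> {
    private final E[] Q;
    private int f = 0, r = 0;          // front and rear indices
    private final int capacity;        // length of the backing array

    @SuppressWarnings("unchecked")
    public SimpleArrayQueue(int capacity) {
        this.capacity = capacity + 1;  // extra slot distinguishes full from empty
        Q = (E[]) new Object[this.capacity];
    }

    public void enqueue(E e) {
        if (size() == capacity - 1) throw new IllegalStateException("queue is full");
        Q[r] = e;
        r = (r + 1) % capacity;        // rear wraps around the end of the array
    }

    public E dequeue() {
        if (isEmpty()) throw new IllegalStateException("queue is empty");
        E temp = Q[f];
        f = (f + 1) % capacity;        // front wraps as well
        return temp;
    }

    public E first()          { return Q[f]; }
    public boolean isEmpty()  { return f == r; }
    public int size()         { return (capacity - f + r) % capacity; }
}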

Queues (Goodrich, 206–209)

79

Queue Problem

Queue<Integer> q = new ArrayQueue<Integer>(5);

Trace each operation, recording the array contents (indices 0–4), f, r, and the return value:

q.isEmpty()
q.enqueue(9)
q.enqueue(2)
q.dequeue()
q.enqueue(8)
q.enqueue(4)
q.enqueue(5)
q.size()
q.enqueue(6)
q.dequeue()
q.front()
q.enqueue(1)
q.size()

Queues

80

References

Eisner, Jason. Johns Hopkins University, Baltimore, MD (email note).

Goodrich, M. T., and R. Tamassia. Data Structures and Algorithms in Java. Hoboken, NJ: John Wiley & Sons, 2006.

Sedgewick, R. Algorithms in C++, Third Edition. Boston: Addison-Wesley, 1998.