CS 164 (Prof. Hilfinger), 4/18/08

Other Forms of Intermediate Code. Local Optimizations

Lecture 34 (Adapted from notes by R. Bodik and G. Necula)


Administrative

• HW #5 is now on-line. Due next Friday.

• If your test grade is not glookupable, please tell us.

• Please submit test regrading pleas to the TAs.


Code Generation Summary

• We have discussed
  – Runtime organization
  – Simple stack machine code generation

• So far, the compiler goes directly from the AST to assembly language and performs no optimizations.

• Most real compilers, however, use an intermediate language (IL), which they later convert to assembly or machine language.


Why Intermediate Languages?

• A slightly higher-level target simplifies translation from the AST.

• The IL can be sufficiently machine-independent to allow multiple backends (translators from IL to machine code) for different machines, which cuts down on the labor of porting a compiler.


Intermediate Languages and Optimization

• When to perform optimizations?
  – On the AST
    • Pro: Machine independent
    • Con: Too high level
  – On assembly language
    • Pro: Exposes optimization opportunities
    • Con: Machine dependent
    • Con: Must reimplement optimizations when retargeting
  – On an intermediate language
    • Pro: Machine independent
    • Pro: Exposes optimization opportunities
    • Con: One more language to worry about


Intermediate Languages

• Each compiler uses its own intermediate language

• Intermediate language = high-level assembly language
  – Uses register names, but has an unlimited number of them
  – Uses control structures like assembly language
  – Uses opcodes, but some are higher level

• E.g., push translates to several assembly instructions

• Most opcodes correspond directly to assembly opcodes


An Intermediate Language

    P → S P | S
    S → id := id op id
      | id := op id
      | id := id
      | id := *id
      | *id := id
      | param id
      | call id
      | return [ id ]
      | if id relop id goto L
      | L:
      | goto L

• id’s are register names

• Constants can replace id’s on right-hand sides

• Typical operators: +, -, *

• param, call, and return are high-level; they refer to the calling conventions of the target machine.


An Intermediate Language (II)

• This style is often called three-address code. A typical instruction has three operands, as in

    x := y op z

  where y and z can be only registers or constants, much like assembly.

• The AST expression x + y * z is translated as

    t1 := y * z
    t2 := x + t1

– Each subexpression has a “home” in a temporary


Generating Intermediate Code

• Similar to assembly code generation
• Major difference: use any number of IL registers to hold intermediate results

• The problem of mapping these IL registers to real ones is left to later phases of the compiler.


Generating Intermediate Code (Cont.)

• The function igen(e, t) generates code that computes the value of e into register t

• Example:
    igen(e1 + e2, t) =
       igen(e1, t1)            (t1, t2 are fresh registers)
       igen(e2, t2)
       t := t1 + t2            (means “Emit the code ‘t := t1 + t2’ ”)

• Unlimited number of registers ⇒ simple code generation
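Below is a minimal sketch of this scheme in Python (illustrative only; the names and the tuple encoding of expressions are assumptions, not the lecture's code). Each call to igen emits three-address instructions into a list and spends a fresh register on every subexpression.

    # Sketch: recursive IL generation with an unlimited supply of fresh registers.
    # Expressions are tuples: ('num', k), ('var', name), or (op, e1, e2).
    _counter = 0
    code = []                      # emitted three-address instructions (strings)

    def fresh():
        """Return a fresh IL register name: t1, t2, ..."""
        global _counter
        _counter += 1
        return "t%d" % _counter

    def igen(e, t):
        """Emit code that leaves the value of expression e in register t."""
        kind = e[0]
        if kind == 'num':
            code.append("%s := %d" % (t, e[1]))
        elif kind == 'var':
            code.append("%s := %s" % (t, e[1]))
        else:                      # binary operator: ('+', e1, e2), ('*', e1, e2), ...
            t1, t2 = fresh(), fresh()
            igen(e[1], t1)
            igen(e[2], t2)
            code.append("%s := %s %s %s" % (t, t1, kind, t2))

    # x + y * z
    igen(('+', ('var', 'x'), ('*', ('var', 'y'), ('var', 'z'))), fresh())
    # code is now:
    #   t2 := x
    #   t4 := y
    #   t5 := z
    #   t3 := t4 * t5
    #   t1 := t2 + t3

Note that every subexpression gets its own register; as a later slide points out, this profligacy is deliberate and is cleaned up by the optimizer.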


IL for Array Access

• Consider a one-dimensional array. Elements are laid out adjacent to one another, each of size S.

• To access the array:
    igen(e1[e2], t) =
       igen(e1, t1)
       igen(e2, t2)
       t3 := t2 * S
       t4 := t1 + t3
       t  := *t4

• Assumes e1 evaluates to the array's address.

• Each ti denotes a fresh IL register.


Multi-dimensional Arrays

• A 2D array is a 1D array of 1D arrays
• Java uses arrays of pointers to arrays for >1D arrays.

• But if the row size is constant, for faster access and compactness, we may prefer to represent an MxN array as a 1D array of 1D rows (not pointers to rows): row-major order

• FORTRAN layout is 1D array of 1D columns: column-major order.


IL for 2D Arrays (Row-Major Order)

• Again, let S be the size of one element, so that a row of length N has size N x S.

    igen(e1[e2,e3], t) =
       igen(e1, t1); igen(e2, t2); igen(e3, t3)
       igen(N, t4)            (N need not be constant)
       t5 := t4 * t2
       t6 := t5 + t3
       t7 := t6 * S
       t8 := t7 + t1
       t  := *t8
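As a quick sanity check of the row-major formula, here is a tiny Python sketch (purely illustrative) that treats list indices as addresses, i.e. S = 1 and base address 0:

    # Row-major addressing check for an M x N array with element size S = 1.
    M, N = 3, 4
    flat = [10 * i + j for i in range(M) for j in range(N)]   # row-major layout

    def addr(i, j):
        # mirrors the IL above: ((i * N) + j) * S, with S = 1 and base 0
        return (i * N + j) * 1

    assert flat[addr(2, 3)] == 23    # element [2,3] is where the formula says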


Array Descriptors

• Calculation of the element address for e1[e2,e3] has the form VO + S1 x e2 + S2 x e3, where
  – VO (the address of e1[0,0]) is the virtual origin

– S1 and S2 are strides

– All three of these are constant throughout lifetime of array

• Common to package these up into an array descriptor, which can be passed in lieu of the array itself.


Array Descriptors (II)

• By judicious choice of descriptor values, can make the same formula work for different kinds of array.

• For example, if the lower bounds of the indices are 1 rather than 0, we must compute

    address of e[1,1] + S1 x (e2 - 1) + S2 x (e3 - 1)

• But some algebra puts this into the form VO + S1 x e2 + S2 x e3, where

    VO = address of e[1,1] - S1 - S2
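A small Python sketch of such a descriptor (hypothetical field names): the virtual origin absorbs the lower bounds once, so the per-access formula never mentions them.

    # Sketch: a 2D array descriptor with virtual origin VO and strides S1, S2,
    # so that addr(i, j) = VO + S1*i + S2*j works for any lower bounds.
    class Descriptor:
        def __init__(self, base, n_cols, elem_size, low1=0, low2=0):
            self.s1 = n_cols * elem_size                    # stride between rows
            self.s2 = elem_size                             # stride between columns
            # virtual origin: the address element [0,0] would have
            self.vo = base - low1 * self.s1 - low2 * self.s2

        def addr(self, i, j):
            return self.vo + self.s1 * i + self.s2 * j

    # A 1-based 5x4 array of 8-byte elements at (made-up) address 1000:
    d = Descriptor(base=1000, n_cols=4, elem_size=8, low1=1, low2=1)
    assert d.addr(1, 1) == 1000                     # first element sits at the base
    assert d.addr(2, 3) == 1000 + 1 * 32 + 2 * 8    # one row, two columns further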


Observation

• These examples show profligate use of registers.

• Doesn’t matter, because this is Intermediate Code. Rely on later optimization stages to do the right thing.


Code Optimization: Basic Concepts


Definition. Basic Blocks

• A basic block is a maximal sequence of instructions with:
  – no labels (except at the first instruction), and
  – no jumps (except in the last instruction)

• Idea:
  – Cannot jump into a basic block (except at the beginning)
  – Cannot jump out of a basic block (except at the end)

– Each instruction in a basic block is executed after all the preceding instructions have been executed
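A minimal Python sketch of carving a flat instruction list into basic blocks, with instructions written as plain strings in the IL syntax from earlier slides (labels end with ':', jumps contain 'goto'); this is an illustration, not the lecture's algorithm.

    # Sketch: split a list of IL instructions (strings) into basic blocks.
    def basic_blocks(instrs):
        blocks, current = [], []
        for ins in instrs:
            if ins.rstrip().endswith(":") and current:
                blocks.append(current)            # a label starts a new block
                current = []
            current.append(ins)
            if "goto" in ins or ins.startswith("return"):
                blocks.append(current)            # a jump or return ends the block
                current = []
        if current:
            blocks.append(current)
        return blocks

    prog = ["x := 1", "i := 1",
            "L:", "x := x * x", "i := i + 1", "if i < 10 goto L"]
    print(basic_blocks(prog))
    # [['x := 1', 'i := 1'],
    #  ['L:', 'x := x * x', 'i := i + 1', 'if i < 10 goto L']]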


Basic Block Example

• Consider the basic block
    1. L:
    2. t := 2 * x
    3. w := t + x
    4. if w > 0 goto L’

• No way for (3) to be executed without (2) having been executed right before
  – We can change (3) to w := 3 * x
  – Can we eliminate (2) as well?


Definition. Control-Flow Graphs

• A control-flow graph is a directed graph with
  – Basic blocks as nodes
  – An edge from block A to block B if execution can flow from the last instruction in A to the first instruction in B
  – E.g., the last instruction in A is a jump to LB (the label of B)
  – E.g., execution can fall through from block A to block B

• Frequently abbreviated as CFG


Control-Flow Graphs. Example.

• The body of a method (or procedure) can be represented as a control-flow graph

• There is one initial node

• All “return” nodes are terminal

• Example (two basic blocks; the first falls through into the second, and the conditional jump gives the second an edge back to itself):

    x := 1
    i := 1

    L: x := x * x
       i := i + 1
       if i < 10 goto L
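Continuing the basic-block sketch from above (still illustrative), the CFG edges can be read off the last instruction of each block: an explicit goto adds an edge to the labeled block, and anything other than an unconditional goto may fall through to the next block.

    # Sketch: derive CFG edges (pairs of block indices) from basic blocks.
    def cfg_edges(blocks):
        label_of = {}                                  # label name -> block index
        for i, b in enumerate(blocks):
            first = b[0].rstrip()
            if first.endswith(":"):
                label_of[first[:-1]] = i
        edges = set()
        for i, b in enumerate(blocks):
            last = b[-1]
            if "goto" in last:
                target = last.split("goto")[-1].strip()
                edges.add((i, label_of[target]))       # explicit jump edge
            if not last.startswith("goto") and i + 1 < len(blocks):
                edges.add((i, i + 1))                  # fall-through edge
        return edges

    # For the example above: {(0, 1), (1, 1)} -- fall-through into the loop,
    # plus the loop's back edge to itself.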


Optimization Overview

• Optimization seeks to improve a program’s utilization of some resource
  – Execution time (most often)
  – Code size
  – Network messages sent
  – Battery power used, etc.

• Optimization should not alter what the program computes
  – The answer must still be the same


A Classification of Optimizations

• For languages like C and Cool there are three granularities of optimizations
  1. Local optimizations
     • Apply to a basic block in isolation
  2. Global optimizations
     • Apply to a control-flow graph (method body) in isolation
  3. Inter-procedural optimizations
     • Apply across method boundaries

• Most compilers do (1), many do (2) and very few do (3)


Cost of Optimizations

• In practice, a conscious decision is made not to implement the fanciest optimization known

• Why?
  – Some optimizations are hard to implement
  – Some optimizations are costly in terms of compilation time
  – The fancy optimizations are both hard and costly

• The goal: maximum improvement with minimum cost


Local Optimizations

• The simplest form of optimizations
• No need to analyze the whole procedure body
  – Just the basic block in question

• Example: algebraic simplification


Algebraic Simplification

• Some statements can be deleted:
    x := x + 0
    x := x * 1

• Some statements can be simplified:
    x := x * 0      ⇒   x := 0
    y := y ** 2     ⇒   y := y * y
    x := x * 8      ⇒   x := x << 3
    x := x * 15     ⇒   t := x << 4; x := t - x
  (on some machines << is faster than *; but not on all!)
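A sketch of how a few of these rewrites might be coded, using an assumed tuple encoding (dest, op, arg1, arg2) for IL instructions; returning None means "delete the statement".

    # Sketch: local algebraic rewrites on (dest, op, arg1, arg2) tuples.
    def simplify(ins):
        dest, op, a, b = ins
        if op == "+" and a == dest and b == 0:
            return None                                  # x := x + 0   -- delete
        if op == "*" and a == dest and b == 1:
            return None                                  # x := x * 1   -- delete
        if op == "*" and b == 0:
            return (dest, "const", 0, None)              # x := y * 0   =>  x := 0
        if op == "**" and b == 2:
            return (dest, "*", a, a)                     # y := y ** 2  =>  y := y * y
        if op == "*" and isinstance(b, int) and b > 0 and (b & (b - 1)) == 0:
            return (dest, "<<", a, b.bit_length() - 1)   # * 2^k        =>  << k
        return ins

    assert simplify(("x", "+", "x", 0)) is None
    assert simplify(("x", "*", "x", 8)) == ("x", "<<", "x", 3)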


Constant Folding

• Operations on constants can be computed at compile time

• In general, if there is a statement x := y op z
  – And y and z are constants
  – Then y op z can be computed at compile time

• Example: x := 2 + 2  ⇒  x := 4
• Example: if 2 < 0 jump L can be deleted
• When might constant folding be dangerous?
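A sketch of folding under the same assumed tuple encoding. As for the closing question, one standard answer is that folding must respect the target machine's arithmetic (overflow behavior, floating-point rounding), which may differ from the machine the compiler runs on.

    # Sketch: fold x := c1 op c2 when both operands are integer constants.
    import operator
    _OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul}

    def fold(ins):
        dest, op, a, b = ins
        if op in _OPS and isinstance(a, int) and isinstance(b, int):
            return (dest, "const", _OPS[op](a, b), None)   # fold to a constant
        return ins

    assert fold(("x", "+", 2, 2)) == ("x", "const", 4, None)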


Flow of Control Optimizations

• Eliminating unreachable code:
  – Code that is unreachable in the control-flow graph
  – Basic blocks that are not the target of any jump or “fall through” from a conditional
  – Such basic blocks can be eliminated

• Why would such basic blocks occur?
• Removing unreachable code makes the program smaller
  – And sometimes also faster, due to memory cache effects (increased spatial locality)


Single Assignment Form

• Some optimizations are simplified if each assignment is to a temporary that has not appeared already in the basic block

• Intermediate code can be rewritten to be in single assignment form:

    x := a + y        x := a + y
    a := x            a1 := x
    x := a * x        x1 := a1 * x
    b := x + a        b := x1 + a1

  (x1 and a1 are fresh temporaries)
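Under the same tuple encoding used earlier, a renaming pass might look like the sketch below: a destination gets a fresh name whenever its name has already appeared, as a use or a definition, earlier in the block.

    # Sketch: rewrite a block into single assignment form.
    def to_single_assignment(block):
        seen, current, counter, out = set(), {}, {}, []

        def use(x):
            if isinstance(x, str):
                seen.add(x)
                return current.get(x, x)     # uses refer to the latest version
            return x

        for dest, op, a, b in block:
            a, b = use(a), use(b)
            if dest in seen:                 # name already appeared: rename it
                counter[dest] = counter.get(dest, 0) + 1
                new = "%s%d" % (dest, counter[dest])
                current[dest] = new
            else:
                new = dest
            seen.add(dest)
            out.append((new, op, a, b))
        return out

    block = [("x", "+", "a", "y"), ("a", "copy", "x", None),
             ("x", "*", "a", "x"), ("b", "+", "x", "a")]
    # Result matches the slide: x := a + y,  a1 := x,  x1 := a1 * x,  b := x1 + a1
    assert to_single_assignment(block) == [
        ("x", "+", "a", "y"), ("a1", "copy", "x", None),
        ("x1", "*", "a1", "x"), ("b", "+", "x1", "a1")]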


Common Subexpression Elimination

• Assume
  – Basic block is in single assignment form

• All assignments with the same rhs compute the same value

• Example:
    x := y + z        x := y + z
    …                 …
    w := y + z        w := x

• Why is single assignment important here?
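A sketch of the pass over the same encoding: keep a table from right-hand sides to the register that already holds that value, and turn repeated computations into copies. Single assignment is what makes the table safe to use: a name recorded in it is never overwritten later in the block.

    # Sketch: common subexpression elimination in a single-assignment block.
    def cse(block):
        available, out = {}, []          # (op, a, b) -> register holding that value
        for dest, op, a, b in block:
            key = (op, a, b)
            if op != "copy" and key in available:
                out.append((dest, "copy", available[key], None))   # reuse the value
            else:
                available.setdefault(key, dest)
                out.append((dest, op, a, b))
        return out

    # x := y + z ... w := y + z   becomes   x := y + z ... w := x
    assert cse([("x", "+", "y", "z"), ("w", "+", "y", "z")]) == \
           [("x", "+", "y", "z"), ("w", "copy", "x", None)]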


Copy Propagation

• If w := x appears in a block, all subsequent uses of w can be replaced with uses of x

• Example:
    b := z + y        b := z + y
    a := b            a := b
    x := 2 * a        x := 2 * b

• This does not make the program smaller or faster but might enable other optimizations
  – Constant folding
  – Dead code elimination

• Again, single assignment is important here.
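A sketch over the same encoding; constant assignments are treated like copies here, so that propagation can feed constant folding, as the next slide's example shows.

    # Sketch: copy (and constant) propagation in a single-assignment block.
    def copy_propagate(block):
        copies, out = {}, []             # w -> the value or register it copies
        def subst(v):
            return copies.get(v, v) if isinstance(v, str) else v
        for dest, op, a, b in block:
            a, b = subst(a), subst(b)
            if op in ("copy", "const"):
                copies[dest] = a
            out.append((dest, op, a, b))
        return out

    # b := z + y;  a := b;  x := 2 * a    becomes    ...;  x := 2 * b
    blk = [("b", "+", "z", "y"), ("a", "copy", "b", None), ("x", "*", 2, "a")]
    assert copy_propagate(blk)[-1] == ("x", "*", 2, "b")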


Copy Propagation and Constant Folding

• Example:
    a := 5            a := 5
    x := 2 * a        x := 10
    y := x + 6        y := 16
    t := x * y        t := x << 4


Dead Code Elimination

• If
  – w := rhs appears in a basic block, and
  – w does not appear anywhere else in the program

• Then the statement w := rhs is dead and can be eliminated
  – Dead = does not contribute to the program’s result

• Example: (a is not used anywhere else)
    x := z + y        b := z + y        b := z + y
    a := x            a := b            x := 2 * b
    x := 2 * a        x := 2 * b
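A sketch over the same encoding, working backwards through the block: an assignment is kept only if its destination is needed later, either by a later instruction or by the given live_out set, which stands in for "used elsewhere in the program".

    # Sketch: dead code elimination for a single-assignment block.
    def eliminate_dead(block, live_out):
        live, kept = set(live_out), []
        for dest, op, a, b in reversed(block):
            if dest in live:
                kept.append((dest, op, a, b))
                live.discard(dest)
                for v in (a, b):
                    if isinstance(v, str):
                        live.add(v)            # operands are now needed
            # otherwise: dest is never used again, so the assignment is dropped
        return list(reversed(kept))

    blk = [("b", "+", "z", "y"), ("a", "copy", "b", None), ("x", "*", 2, "b")]
    assert eliminate_dead(blk, live_out={"x"}) == \
           [("b", "+", "z", "y"), ("x", "*", 2, "b")]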


Applying Local Optimizations

• Each local optimization does very little by itself

• Typically optimizations interact
  – Performing one optimization enables other optimizations

• Typical optimizing compilers repeatedly perform optimizations until no improvement is possible
  – The optimizer can also be stopped at any time to limit compilation time
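As a rough illustration of that loop, a driver might simply rerun the earlier sketches until the block stops changing, with a cap on the number of passes so it can be stopped early (the names below refer to the illustrative functions sketched on the previous slides, not to any real compiler's API).

    # Sketch: repeat the local optimizations until a fixed point (or a pass limit).
    def optimize_block(block, live_out, max_passes=10):
        for _ in range(max_passes):
            before = list(block)
            block = to_single_assignment(block)
            block = [simplify(i) for i in block]
            block = [i for i in block if i is not None]   # drop deleted statements
            block = [fold(i) for i in block]
            block = cse(block)
            block = copy_propagate(block)
            block = eliminate_dead(block, live_out)
            if block == before:
                break                                     # no further improvement
        return block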


An Example

• Initial code:
    a := x ** 2
    b := 3
    c := x
    d := c * c
    e := b * 2
    f := a + d
    g := e * f


An Example

• Algebraic optimization:
    a := x ** 2
    b := 3
    c := x
    d := c * c
    e := b * 2
    f := a + d
    g := e * f


An Example

• Algebraic optimization:
    a := x * x
    b := 3
    c := x
    d := c * c
    e := b + b
    f := a + d
    g := e * f


An Example

• Copy propagation:
    a := x * x
    b := 3
    c := x
    d := c * c
    e := b + b
    f := a + d
    g := e * f


An Example

• Copy propagation:
    a := x * x
    b := 3
    c := x
    d := x * x
    e := 3 + 3
    f := a + d
    g := e * f


An Example

• Constant folding:
    a := x * x
    b := 3
    c := x
    d := x * x
    e := 3 + 3
    f := a + d
    g := e * f


An Example

• Constant folding:
    a := x * x
    b := 3
    c := x
    d := x * x
    e := 6
    f := a + d
    g := e * f


An Example

• Common subexpression elimination:
    a := x * x
    b := 3
    c := x
    d := x * x
    e := 6
    f := a + d
    g := e * f


An Example

• Common subexpression elimination:
    a := x * x
    b := 3
    c := x
    d := a
    e := 6
    f := a + d
    g := e * f


An Example

• Copy propagation:
    a := x * x
    b := 3
    c := x
    d := a
    e := 6
    f := a + d
    g := e * f


An Example

• Copy propagation:
    a := x * x
    b := 3
    c := x
    d := a
    e := 6
    f := a + a
    g := 6 * f


An Example

• Dead code elimination:
    a := x * x
    b := 3
    c := x
    d := a
    e := 6
    f := a + a
    g := 6 * f


An Example

• Dead code elimination:
    a := x * x

    f := a + a
    g := 6 * f

• This is the final form


Peephole Optimizations on Assembly Code

• The optimizations presented before work on intermediate code
  – They are target independent
  – But they can also be applied to assembly language

• Peephole optimization is an effective technique for improving assembly code
  – The “peephole” is a short sequence of (usually contiguous) instructions

– The optimizer replaces the sequence with another equivalent (but faster) one


Peephole Optimizations (Cont.)

• Write peephole optimizations as replacement rules

    i1, …, in  →  j1, …, jm

  where the rhs is the improved version of the lhs

• Examples:
    move $a $b, move $b $a  →  move $a $b
      – Works if move $b $a is not the target of a jump
    addiu $a $b k, lw $c ($a)  →  lw $c k($b)
      – Works if $a not used later (is “dead”)


Peephole Optimizations (Cont.)

• Many (but not all) of the basic block optimizations can be cast as peephole optimizations
  – Example: addiu $a $b 0  →  move $a $b
  – Example: move $a $a  →  (deleted)
  – These two together eliminate addiu $a $a 0

• Just like for local optimizations, peephole optimizations need to be applied repeatedly to get maximum effect
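As a toy illustration of that repetition, here is a Python sketch that applies two of the rules above to MIPS-like instruction strings until nothing more changes (the string matching is made up for the sketch).

    # Sketch: a tiny peephole pass, applied repeatedly until a fixed point.
    import re

    def peephole(code):
        changed = True
        while changed:
            changed, out, i = False, [], 0
            while i < len(code):
                ins = code[i]
                m = re.match(r"addiu (\S+) (\S+) 0$", ins)
                if m:                                    # addiu $a $b 0 -> move $a $b
                    out.append("move %s %s" % m.groups())
                    i, changed = i + 1, True
                    continue
                m = re.match(r"move (\S+) (\S+)$", ins)
                if m and m.group(1) == m.group(2):       # move $a $a -> (deleted)
                    i, changed = i + 1, True
                    continue
                out.append(ins)
                i += 1
            code = out
        return code

    # The two rules together eliminate addiu $a $a 0, as on the previous slide:
    assert peephole(["addiu $a $a 0", "lw $c 4($a)"]) == ["lw $c 4($a)"]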


Local Optimizations. Notes.

• Intermediate code is helpful for many optimizations

• Many simple optimizations can still be applied on assembly language

• “Program optimization” is grossly misnamed
  – Code produced by “optimizers” is not optimal in any reasonable sense

– “Program improvement” is a more appropriate term


Local Optimizations. Notes (II).

• Serious problem: what to do with pointers?
  – *t may change even if local variable t does not: Aliasing

– Arrays are a special case (address calculation)

• What to do about globals?
• What to do about calls?

– Not exactly jumps, because they (almost) always return.

– Can modify variables used by caller

• Next: global optimizations

