Jiang Li, Ph.D., Department of Computer Science
Fundamentals of Quantitative Design and Analysis
Dr. Jiang Li
Adapted from the slides provided by the authors
Computer Technology
• Performance improvements:
– Improvements in semiconductor technology
• Feature size, clock speed
– Improvements in computer architectures
• RISC architectures exploiting ILP and using caches
• Led to, and were utilized by, HLL compilers and UNIX
• Without them, computers would be 7.5 times slower
– Together these have enabled:
• Lightweight, affordable computers
• Productivity-based managed/interpreted programming languages, SaaS, and a new nature of applications
Single Processor Performance
[Figure: growth in single-processor performance over time, annotated with the RISC era and the move to multi-processor]
Current Trends in Architecture
• Cannot continue to leverage instruction-level parallelism (ILP) alone
– Single-processor performance improvement ended in 2003
• New models for performance:
– Data-level parallelism (DLP)
– Thread-level parallelism (TLP)
– Request-level parallelism (RLP)
• These require explicit restructuring of the application
What to Get from this Course
• The architectural ideas and accompanying compiler improvements that made the incredible growth rate possible
• A quantitative approach to computer design and analysis
– Empirical observations of programs, experimentation, and simulation as tools
Classes of Computers
• Personal Mobile Device (PMD)
– e.g. smartphones, tablet computers
– Emphasis on energy efficiency, real-time performance, and system cost (e.g. memory)
• Desktop Computing
– Emphasis on price-performance
• Servers
– Emphasis on availability, scalability, throughput
• Clusters / Warehouse-Scale Computers
– Used for “Software as a Service (SaaS)”
– Emphasis on availability and price-performance
– Sub-class: Supercomputers; emphasis on floating-point performance and fast internal networks
• Embedded Computers
– Emphasis: price
Parallelism
• Classes of parallelism in applications:
– Data-Level Parallelism (DLP)
– Task-Level Parallelism (TLP)
• Classes of architectural parallelism:
– Instruction-Level Parallelism (ILP)
• Exploits DLP through pipelining, speculative execution
– Vector architectures / Graphics Processing Units (GPUs)
• Exploit DLP through SIMD
– Thread-Level Parallelism
• Exploits DLP/TLP
– Request-Level Parallelism
• Exploits parallelism specified by code/OS
Flynn’s Taxonomy
• Single instruction stream, single data stream (SISD)
• Single instruction stream, multiple data streams (SIMD)
– Vector architectures
– Multimedia extensions
– Graphics processor units
• Multiple instruction streams, single data stream (MISD)
– No commercial implementation
• Multiple instruction streams, multiple data streams (MIMD)
– Tightly-coupled MIMD: e.g. multi-core
– Loosely-coupled MIMD: e.g. WSC
Defining Computer Architecture
• “Old” view of computer architecture:
– Instruction Set Architecture (ISA) design
– i.e. decisions regarding:
• registers, memory addressing, addressing modes, instruction operands, available operations, control flow instructions, instruction encoding
• “Real” computer architecture:
– Specific requirements of the target machine
– Design to maximize performance within constraints: cost, power, and availability
– Includes ISA, microarchitecture, hardware
A Quick View of An ISA (1)
• Classes
– General-purpose register architectures
• Register-memory: 80x86
• Load-store: ARM, MIPS
• Memory addressing
– Byte addressing
• Aligned (usually faster) vs. non-aligned
• Addressing modes
– Specify the address of a memory object
• Operations
– Data transfer, arithmetic/logical, control, FP
A Quick View of An ISA (2)
• Types and sizes of operands
– Integer, floating point (FP), characters
– 8/16/32/64 bits, etc.
• Control flow instructions
– Branch, jump, call, return
– PC-relative addressing
– Test register content vs. test condition bits
– Return address in register vs. in memory
• Encoding
– Fixed length vs. variable length
Genuine Computer Architecture
• ISA
• Organization/microarchitecture
– High-level aspects of computer design, e.g. the memory system, the memory interconnect, and the design of the CPU
– AMD Opteron vs. Intel Core i7
• Hardware
– Logic design, packaging
– Core i7 vs. Xeon 7560
Trends in Technology
• Integrated circuit technology
– Transistor density: 35%/year
– Die size: 10-20%/year
– Integration overall: 40-55%/year
• DRAM capacity: 25-40%/year (slowing)
• Flash capacity: 50-60%/year
– 15-20X cheaper/bit than DRAM
• Magnetic disk technology: 40%/year
– 15-25X cheaper/bit than Flash
– 300-500X cheaper/bit than DRAM
1. Computer lifetime: 3-5 years
2. Design for the next technology
Bandwidth and Latency
• Bandwidth or throughput
– Total work done in a given time
– 10,000-25,000X improvement for processors
– 300-1200X improvement for memory and disks
• Latency or response time
– Time between the start and completion of an event
– 30-80X improvement for processors
– 6-8X improvement for memory and disks
Bandwidth and Latency
Log-log plot of bandwidth and latency milestones
Transistors and Wires
• Feature size
– Minimum size of a transistor or wire in the x or y dimension
– 10 microns in 1971 to 0.032 microns (32 nm) in 2011
• 5 nm expected in 2020-2021
– Transistor performance scales linearly
• Wire delay does not improve with feature size!
– A design challenge, along with power
– Integration density scales quadratically
Power and Energy
• Problem: get power in, get power (heat) out
• Design concerns:
– Maximum power required
– Sustained power consumption
• Thermal Design Power (TDP)
– Characterizes sustained power consumption
– Used as the target for the power supply and cooling system
– Lower than peak power, higher than average power consumption
• Clock rate can be reduced dynamically to limit power consumption
– The chip can be shut down too!
• Energy and energy efficiency
– Energy per task is often a better measurement
Dynamic Energy and Power
• Dynamic energy
– Consumed when a transistor switches from 0 -> 1 or 1 -> 0
– ½ × Capacitive load × Voltage²
• Dynamic power
– ½ × Capacitive load × Voltage² × Frequency switched
– Frequency up 15%, voltage down 15%: what happens to power?
• For a fixed task, reducing clock rate reduces power, not energy
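The 15%/15% question above follows directly from the proportionality P_dynamic ∝ Capacitive load × Voltage² × Frequency. A minimal sketch (the function name is illustrative):

```python
# Dynamic power is proportional to Capacitive load x Voltage^2 x Frequency,
# so scaling voltage and frequency scales power by voltage_scale^2 * freq_scale.
def relative_dynamic_power(freq_scale, voltage_scale):
    """New dynamic power as a fraction of the old, for scaled f and V."""
    return voltage_scale ** 2 * freq_scale

# Frequency up 15%, voltage down 15%:
print(round(relative_dynamic_power(1.15, 0.85), 2))  # 0.83
```

So the chip runs 15% faster yet burns about 17% less dynamic power, which is why voltage scaling was historically so effective.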
Power
• Intel 80386 consumed ~ 2 W
• 3.3 GHz Intel Core i7 consumes 130 W
• Heat must be dissipated from a 1.5 × 1.5 cm chip
• This is the limit of what can be cooled by air
Reducing Dynamic Power
• Techniques for reducing power:
– Do nothing well
• Turn off the clock of inactive modules
– Dynamic voltage-frequency scaling (DVFS)
– Low-power states for DRAM and disks
• Must return to fully active mode to read or write
– Overclock some cores while turning off others
Static Power
• Static power consumption
– Current_static × Voltage
– Increases with smaller transistor sizes
– Scales with the number of transistors
• Up to 50% of total power consumption
• Power gating
– Turn off the power supply to inactive modules
• Race-to-halt
– Use a faster, less energy-efficient processor so the rest of the system can go into sleep mode sooner
Trends in Cost
• Factors
– Learning curve
• Yield: % of products passing tests
– Volume
• Rule of thumb: 10% lower cost for each doubling of volume
– Standardization
• DRAM (more standardized) vs. processors (less standardized)
• Commodity products
– Competition
• More competition for commodities
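The 10%-per-doubling rule of thumb compounds multiplicatively. A small sketch (the dollar figures are illustrative, not from the slide):

```python
def unit_cost_after_doublings(base_cost, doublings, reduction=0.10):
    # Rule of thumb from the slide: ~10% lower unit cost per doubling of volume.
    return base_cost * (1 - reduction) ** doublings

# A $100 part after volume doubles three times (8x the original volume):
print(round(unit_cost_after_doublings(100.0, 3), 2))  # 72.9
```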
Integrated Circuit Cost
• Integrated circuit cost
– Cost of die = Cost of wafer / (Dies per wafer × Die yield)
– Dies per wafer ≈ π × (Wafer diameter / 2)² / Die area − π × Wafer diameter / √(2 × Die area)
• Bose-Einstein formula:
– Die yield = Wafer yield × 1 / (1 + Defects per unit area × Die area)^N
– Defects per unit area = 0.016-0.057 defects per square cm (2010)
– N = process-complexity factor = 11.5-15.5 (40 nm, 2010)
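These formulas can be checked against the Sandy Bridge wafer on the next slide (300 mm wafer, 216 mm² dies). A sketch assuming 100% wafer yield; the defect-density and N values passed to `die_yield` are mid-range picks from the 2010 figures above, chosen for illustration:

```python
import math

def dies_per_wafer(wafer_diameter_mm, die_area_mm2):
    # Wafer area over die area, minus the dies lost along the circular edge.
    radius = wafer_diameter_mm / 2
    return (math.pi * radius ** 2 / die_area_mm2
            - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

def die_yield(defects_per_cm2, die_area_cm2, n):
    # Bose-Einstein model, assuming 100% wafer yield.
    return 1 / (1 + defects_per_cm2 * die_area_cm2) ** n

print(round(dies_per_wafer(300, 216)))  # 282, matching the slide's estimate
print(die_yield(0.03, 2.16, 13.5))      # fraction of those dies that are good
```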
Integrated Circuit Cost
Figure 1.15 This 300 mm wafer contains 280 full Sandy Bridge dies, each 20.7 by 10.5 mm, in a 32 nm process. (Sandy Bridge is Intel’s successor to Nehalem, used in the Core i7.) At 216 mm², the formula for dies per wafer estimates 282. (Courtesy Intel.)
Integrated Circuit Cost
• Redundancy to raise yield
• Cost per die grows roughly as the square of the die area
– What functions should be included on a die?
– Considered by the computer designer
• Incorporate reconfigurable logic for better flexibility
• Cost versus Price
• Cost of Manufacturing versus Cost of Operation
Dependability
• Two states of service
1. Service accomplishment
2. Service interruption
• Service state transitions
– 1 -> 2: failures
– 2 -> 1: restorations
• Module reliability
– Mean time to failure (MTTF)
• Failures in time (FIT): failures per billion hours of operation
• Failure rate = 10⁹/MTTF, in FIT
• Failure rate of a collection of modules?
– With exponentially distributed lifetimes, the failure rate is the sum of the modules’ failure rates
– Mean time to repair (MTTR)
– Mean time between failures (MTBF) = MTTF + MTTR
• Module availability = MTTF / MTBF
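The FIT and availability definitions above are straightforward to compute. A quick sketch (the 24-hour MTTR is an assumed figure for illustration, not from the slide):

```python
def fit(mttf_hours):
    # Failures in time: failures per billion (10^9) hours of operation.
    return 1e9 / mttf_hours

def availability(mttf, mttr):
    # Module availability = MTTF / MTBF, where MTBF = MTTF + MTTR.
    return mttf / (mttf + mttr)

print(fit(1_000_000))                         # 1000.0 FIT
print(round(availability(1_000_000, 24), 6))  # 0.999976
```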
Quantifiable!
Dependability Example (1)
• Assume a disk subsystem with the following components and MTTFs
– 10 disks, each rated at 1,000,000-hour MTTF
– 1 ATA controller, 500,000-hour MTTF
– 1 power supply, 200,000-hour MTTF
– 1 fan, 200,000-hour MTTF
– 1 ATA cable, 1,000,000-hour MTTF
• The lifetimes are exponentially distributed
• Failures are independent
• MTTF of the system?
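A sketch of the calculation: with independent components and exponentially distributed lifetimes, the system failure rate is simply the sum of the component failure rates, and the system MTTF is its reciprocal.

```python
component_mttfs_hours = (
    [1_000_000] * 10  # 10 disks
    + [500_000]       # ATA controller
    + [200_000]       # power supply
    + [200_000]       # fan
    + [1_000_000]     # ATA cable
)
# (10 + 2 + 5 + 5 + 1) = 23 failures per 1,000,000 hours in total
system_failure_rate = sum(1 / m for m in component_mttfs_hours)
system_mttf = 1 / system_failure_rate
print(round(system_mttf))  # 43478 hours, roughly 5 years
```

Note how the whole subsystem is far less reliable than any single component in it.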
Dependability Example (2)
• Consider the power supply of the previous subsystem
– 1 power supply, 200,000-hour MTTF
• Add an identical power supply as a backup
• Calculate the reliability of redundant power supplies
• Probability of total failure = probability of one power supply failing × probability of the other failing before it is replaced
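A sketch of that calculation; the 24-hour MTTR (time to notice the failure and swap in a replacement) is an assumed value for illustration, not from the slide:

```python
MTTF_SUPPLY = 200_000  # hours, per power supply
MTTR = 24              # hours; assumed repair time for illustration

# Either of the two supplies can fail first (so MTTF/2 for the first failure),
# and the pair fails only if the survivor dies within the repair window
# (probability MTTR/MTTF).
mttf_pair = (MTTF_SUPPLY / 2) / (MTTR / MTTF_SUPPLY)  # = MTTF^2 / (2 * MTTR)
print(round(mttf_pair))  # 833333333 hours, about 95,000 years
```

With these assumptions the redundant pair is over 4000 times more reliable than a single supply.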
Measuring Performance
• Typical performance metrics:
– Response time
– Throughput
• Speedup of X relative to Y
– Execution time_Y / Execution time_X
• Execution time
– Wall clock time: includes all system overheads
– CPU time: only computation time
Measuring Performance
• Benchmarks
– Kernels (e.g. matrix multiply)
• Small, key pieces of real applications
– Toy programs (e.g. sorting)
– Synthetic benchmarks (e.g. Dhrystone)
• Fake programs that try to match the profile and behavior of real applications
• Potential problems
– How well do benchmarks resemble real applications?
– Running environment of the benchmarks
• Hardware, compiler
• Benchmark suites (e.g. SPEC06fp, TPC-C)
Reporting Performance Results
• Reproducible
• Summarizing
– Normalize execution times (e.g. SPECRatio)
• Divide the time on the reference computer by the time on the computer being rated
– Summarize SPECRatios
• Geometric mean = (SPECRatio₁ × SPECRatio₂ × … × SPECRatioₙ)^(1/n)
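A sketch of the geometric-mean summary (the ratios below are made-up illustration values, not real SPEC results):

```python
import math

def geometric_mean(ratios):
    # n-th root of the product, computed via logs for numerical stability.
    return math.exp(sum(math.log(r) for r in ratios) / len(ratios))

print(geometric_mean([2.0, 8.0]))  # 4.0: the square root of 2 * 8
```

Unlike the arithmetic mean, the geometric mean of ratios is independent of which machine is chosen as the reference, which is why SPEC uses it.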
Principles of Computer Design
• Take Advantage of Parallelism
– e.g. multiple processors, disks, memory banks, pipelining, multiple functional units
• Principle of Locality
– Reuse of data and instructions
• Focus on the Common Case
– Amdahl’s Law
• The law of diminishing returns
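Amdahl's Law makes "focus on the common case" quantitative: overall speedup = 1 / ((1 − f) + f/s), where f is the fraction of execution time that benefits from an enhancement and s is the speedup of that fraction. A minimal sketch:

```python
def amdahl_speedup(fraction_enhanced, speedup_enhanced):
    # Overall speedup is capped by the unenhanced fraction of execution time:
    # even as speedup_enhanced -> infinity, the limit is 1 / (1 - f).
    return 1 / ((1 - fraction_enhanced) + fraction_enhanced / speedup_enhanced)

# Diminishing returns: a 10x speedup on 90% of the time gives only ~5.3x.
print(round(amdahl_speedup(0.9, 10), 2))  # 5.26
```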
Principles of Computer Design
• The Processor Performance Equation
– CPU time = Instruction count × Clock cycles per instruction (CPI) × Clock cycle time
Principles of Computer Design
• Different instruction types have different CPIs
– CPU clock cycles = Σᵢ (ICᵢ × CPIᵢ), so overall CPI = CPU clock cycles / Instruction count
CPU Time Example
• Suppose we have made the following measurements:
– Frequency of FP operations = 25%
– Average CPI of FP operations = 4.0
– Average CPI of other instructions = 1.33
– Frequency of FPSQR = 2%
– CPI of FPSQR = 20
• Two design alternatives
– Decrease the CPI of FPSQR to 2, or
– Decrease the average CPI of all FP operations to 2.5
• Compare these two alternatives
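A sketch of the comparison using the weighted-CPI form of the performance equation; since instruction count and clock rate are unchanged in both alternatives, the design with the lower CPI is faster:

```python
# Baseline: 25% FP ops at CPI 4.0, 75% other instructions at CPI 1.33.
cpi_original = 0.25 * 4.0 + 0.75 * 1.33        # about 2.0

# Alternative 1: FPSQR (2% of instructions) drops from CPI 20 to CPI 2.
cpi_alt_fpsqr = cpi_original - 0.02 * (20 - 2)  # about 1.64

# Alternative 2: all FP ops drop to an average CPI of 2.5.
cpi_alt_fp = cpi_original - 0.25 * (4.0 - 2.5)  # about 1.62

print(cpi_alt_fpsqr, cpi_alt_fp)
# cpi_alt_fp is slightly lower, so improving all FP operations wins.
```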
Fallacy of MTTF
• The rated mean time to failure of disks is 1,200,000 hours or almost 140 years, so disks practically never fail.
• To calculate the large MTTF
– Manufacturers put thousands of disks in a room, run them for a few months, and count the number that fail
– MTTF = total hours all disks operated / number of disks that failed
• Disks are typically replaced every 5 years; an MTTF of ~140 years would imply a failure only once in roughly 27 disk lifetimes
• Usage of MTTF:
– # failed disks = # disks × (short) time period / MTTF
• Real-world MTTF is about 2 to 10 times worse than the manufacturer’s MTTF
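The usage formula above is about populations, not individual disks. A sketch with an illustrative (hypothetical) fleet size:

```python
def expected_failures(num_disks, hours, mttf_hours):
    # Expected failures over a period short relative to the MTTF:
    # num_disks * operating hours / MTTF.
    return num_disks * hours / mttf_hours

# 1000 disks rated at 1,200,000-hour MTTF, running for one year (8760 h):
print(round(expected_failures(1000, 8760, 1_200_000), 1))  # 7.3 per year
```

So a "140-year MTTF" fleet still loses several disks every year, and real-world rates are 2 to 10 times worse.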
Summary
• Introduced a number of concepts
– Classes of computers
– Parallelism
– Computer architecture
– Bandwidth and latency
– Power and energy
– Cost
– Dependability
– Performance measurement
– Principles of computer design
• Provided a quantitative framework that we will expand upon throughout the book.