Operating System

Page 1: Operating System

Operating System

• Exploits hardware resources
– one or more processors
– main memory, disk, and other I/O devices

• Provides a set of services to system users
– program development, program execution, access to I/O devices, controlled access to files and other resources, etc.

Page 2: Operating System

Chapter 1: Computer System Overview

Operating Systems: Internals and Design Principles, 6/E

William Stallings


Page 3: Operating System

Credits

• Most of the lecture notes are based on the slides from the Textbook’s companion website: http://williamstallings.com/OS/OS6e.html

• Some of the slides are from Dr. David Tarnoff at East Tennessee State University

• I have modified them and added new slides

Page 4: Operating System

Computer Components: Top-Level View

Page 5: Operating System

Processor Registers

• User-visible registers
– Enable the programmer to minimize main-memory references by optimizing register use

• Control and status registers
– Used by the processor to control the operation of the processor
– Used by privileged OS routines to control the execution of programs

Page 6: Operating System

Control and Status Registers

• Program counter (PC)
– Contains the address of the next instruction to be fetched

• Instruction register (IR)
– Contains the instruction most recently fetched

• Program status word (PSW)
– Condition codes
– Interrupt enable/disable
– Kernel/user mode

Page 7: Operating System

Control and Status Registers

• Condition codes or flags
– Bits set by processor hardware as a result of operations
– Can be read by a program but not directly altered
– Example: a condition-code bit is set following the execution of an arithmetic instruction to record a positive, negative, zero, or overflow result
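The flag-setting behavior above can be sketched for a hypothetical 8-bit adder. The function name and exact flag set are illustrative, not from the slides:

```python
def add8(a, b):
    """Hypothetical 8-bit add that sets condition codes the way hardware would."""
    result = (a + b) & 0xFF                 # keep only the low 8 bits
    flags = {
        "zero":     result == 0,
        "negative": bool(result & 0x80),    # sign bit of the result is set
        "carry":    (a + b) > 0xFF,         # unsigned overflow out of 8 bits
        # signed overflow: both operands differ in sign from the result
        "overflow": ((a ^ result) & (b ^ result) & 0x80) != 0,
    }
    return result, flags
```

For instance, `add8(0x7F, 0x01)` yields `0x80` with the negative and overflow flags set, mirroring the positive/negative/zero/overflow outcomes listed above.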

Page 8: Operating System

Instruction Execution

• Two steps:
– Processor reads (fetches) instructions from memory
– Processor executes each instruction

Page 9: Operating System

Basic Instruction Cycle

Page 10: Operating System

Instruction Fetch and Execute

• The processor fetches the instruction from memory

• Program counter (PC) holds address of the instruction to be fetched next

• PC is incremented after each fetch
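The fetch-increment-execute loop above can be sketched for the textbook's hypothetical machine, using the opcodes from the Stallings example (1 = load AC, 5 = add to AC, 2 = store AC); the Python harness itself is illustrative:

```python
def run(memory, pc):
    """Fetch-execute loop: fetch at PC, increment PC, then execute."""
    ac = 0                                    # accumulator
    while pc in memory:
        ir = memory[pc]                       # fetch the instruction PC points at
        pc += 1                               # PC is incremented after each fetch
        opcode, addr = ir >> 12, ir & 0xFFF   # 4-bit opcode, 12-bit address
        if opcode == 0x1:   ac = memory[addr]        # load AC from memory
        elif opcode == 0x2: memory[addr] = ac        # store AC into memory
        elif opcode == 0x5: ac += memory[addr]       # add memory word to AC
        else: break                                  # unknown opcode: halt
    return memory

# The textbook's example program: 3 + 2, result stored back at address 941h.
mem = {0x300: 0x1940, 0x301: 0x5941, 0x302: 0x2941, 0x940: 3, 0x941: 2}
run(mem, 0x300)
# mem[0x941] is now 5
```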

Page 11: Operating System

Instruction Register

• Fetched instruction loaded into instruction register

• An instruction contains bits that specify the action the processor is to take

• Categories of actions:
– Processor-memory, processor-I/O, data processing, control

Page 12: Operating System

Characteristics of a Hypothetical Machine

Page 13: Operating System

Example of Program Execution

Page 14: Operating System

Interrupts

• Interrupt the normal sequencing of the processor

• Why do we need interrupts?

Page 15: Operating System

Classes of Interrupts

Page 16: Operating System

Interrupts

• Most I/O devices are slower than the processor
– Without interrupts, the processor has to pause and wait for the device

Page 17: Operating System

Program Flow of Control

Page 18: Operating System

Program Flow of Control

Page 19: Operating System

Interrupt Stage

• Processor checks for interrupts
• If an interrupt is pending:
– Suspend execution of the current program
– Execute the interrupt-handler routine
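The interrupt stage can be sketched as a check after each execute step; when an interrupt is raised during some instruction, the program is suspended, the handler runs, and the program then resumes. All names here are invented for illustration:

```python
def run_with_interrupts(program, interrupts):
    """program: list of instruction names; interrupts: {step: irq} raised
    while that step executes. Returns the order in which work was done."""
    trace = []
    for step, instr in enumerate(program):
        trace.append(instr)                      # fetch + execute (abstracted)
        if step in interrupts:                   # interrupt stage: check for interrupts
            trace.append(f"ISR:{interrupts[step]}")  # suspend, run handler, resume
    return trace

trace = run_with_interrupts(["instr0", "instr1", "instr2"], {1: "disk"})
# trace: ["instr0", "instr1", "ISR:disk", "instr2"]
```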

Page 20: Operating System

Transfer of Control via Interrupts

Page 21: Operating System

Instruction Cycle with Interrupts

Page 22: Operating System

Simple Interrupt Processing

Page 23: Operating System

Changes in Memory and Registers for an Interrupt

Page 24: Operating System

Changes in Memory and Registers for an Interrupt

Page 25: Operating System

Multiple Interrupts

• What should be done if another interrupt arrives while we are handling one interrupt?

Page 26: Operating System

Sequential Interrupt Processing

Page 27: Operating System

Nested Interrupt Processing

Page 28: Operating System

Multiprogramming

• Processor has more than one program to execute

• The sequence in which the programs are executed depends on their relative priority and on whether they are waiting for I/O

• After an interrupt handler completes, control may not return to the program that was executing at the time of the interrupt

Page 29: Operating System

Input/Output Techniques

• Programmed I/O
• Interrupt-driven I/O
• Direct Memory Access (DMA)

• What are they, and how do they rank in efficiency?

Page 30: Operating System

Input/Output Techniques

• Programmed I/O – poll and respond
• Interrupt-driven I/O – the I/O module calls for the CPU when needed
• Direct Memory Access (DMA) – the module has direct access to a specified block of memory

Page 31: Operating System

I/O Module Structure

Page 32: Operating System

Programmed I/O – CPU has direct control over I/O

• Processor requests an operation with commands sent to the I/O module
– Control – telling a peripheral what to do
– Test – used to check the condition of the I/O module or device
– Read – obtains data from the peripheral so the processor can read it from the data bus
– Write – sends data over the data bus to the peripheral

• I/O module performs the operation
• When completed, the I/O module updates its status registers
• Sensing status – involves polling the I/O module's status registers

Page 33: Operating System

Programmed I/O (continued)

• I/O module does not inform the CPU directly
• CPU may wait, or may do something else and come back later
• Wastes CPU time because
– the CPU acts as a bridge for moving data between the I/O module and main memory, i.e., every piece of data goes through the CPU
– the CPU waits for the I/O module to complete the operation
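The busy-wait at the heart of programmed I/O can be sketched as follows; the device class, register names, and data value are invented for illustration:

```python
class FakeIOModule:
    """Hypothetical I/O module that becomes ready after a few status polls."""
    def __init__(self, delay_polls):
        self.polls_left = delay_polls
        self.data = None
    def command(self, op):              # CPU sends a command (e.g. READ)
        self.op = op
    def status(self):                   # CPU polls the status register
        self.polls_left -= 1
        if self.polls_left <= 0:
            self.data = 0x42            # device finally has the data word
            return "READY"
        return "BUSY"

def programmed_read(dev):
    dev.command("READ")
    polls = 0
    while dev.status() != "READY":      # CPU time is wasted spinning here
        polls += 1
    word = dev.data                     # every word moves through the CPU
    return word, polls
```

The `polls` counter makes the cost visible: each iteration of the while loop is a CPU cycle spent checking a status register instead of doing useful work.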

Page 34: Operating System

Interrupt Driven I/O

• Overcomes CPU waiting
• Requires an interrupt service routine
• No repeated CPU checking of the device
• I/O module interrupts when ready
• Still requires the CPU to act as a go-between for moving data between the I/O module and main memory

Page 35: Operating System

Interrupt-Driven I/O

• Consumes a lot of processor time because every word read or written passes through the processor

Page 36: Operating System

Direct Memory Access (DMA)

• Impetus behind DMA – interrupt-driven and programmed I/O both require active CPU intervention (all data must pass through the CPU)

• Transfer rate is limited by processor's ability to service the device

• CPU is tied up managing I/O transfer

Page 37: Operating System

DMA (continued)

• Additional module (hardware) on the bus
• DMA controller takes over the bus from the CPU for I/O
– by waiting for a time when the processor doesn't need the bus
– by cycle stealing – seizing the bus from the CPU (more common)

Page 38: Operating System

DMA Operation

• CPU tells the DMA controller:
– whether it will be a read or a write operation
– the address of the device to transfer data from or to
– the starting address of the memory block for the data transfer
– the amount of data to be transferred

• DMA controller performs the transfer while the CPU does other processing
• DMA controller sends an interrupt when the transfer completes
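The CPU-DMA handshake above can be sketched as follows. The parameters mirror the bullet list (read/write, device, memory start, count); the classes and method names are invented for illustration:

```python
class FakeDevice:
    """Hypothetical device that hands out one word per read."""
    def __init__(self, words): self.words = iter(words)
    def read_word(self): return next(self.words)

class DMAController:
    def start(self, op, device, memory, start, count):
        # CPU programs the controller's registers, then returns to other work.
        self.params = (op, device, memory, start, count)
    def run_transfer(self):
        # Done by the controller itself while the CPU does other processing.
        op, device, memory, start, count = self.params
        for i in range(count):
            memory[start + i] = device.read_word()   # word-by-word into memory
        return "INTERRUPT"                           # signal completion

mem = [0] * 8
dma = DMAController()
dma.start("read", FakeDevice([10, 20, 30]), mem, start=4, count=3)
done = dma.run_transfer()
# mem[4:7] is now [10, 20, 30]; `done` stands in for the completion interrupt
```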

Page 39: Operating System

Cycle Stealing

• DMA controller takes over the bus for a cycle
• Transfers one word of data
• Not an interrupt to CPU operations
• CPU is suspended just before it accesses the bus – i.e., before an operand or data fetch or a data write
• Slows down the CPU, but not as much as the CPU doing the transfer itself

Page 40: Operating System

Direct Memory Access

• Transfers a block of data directly to or from memory

• An interrupt is sent when the transfer is complete

• Most efficient

Page 41: Operating System

The Memory Hierarchy

Page 42: Operating System

Going Down the Hierarchy

• Decreasing cost per bit
• Increasing capacity
• Increasing access time
• Decreasing frequency of access to the memory by the processor

Page 43: Operating System

Cache Memory

• Processor speed is faster than memory-access speed

• Exploit the principle of locality with a small fast memory

Page 44: Operating System

Cache and Main Memory

Page 45: Operating System

Cache Principles

• Contains a copy of a portion of main memory
• Processor first checks the cache
• If not found, a block of memory is read into the cache
• Because of locality of reference, future memory references are likely to be in that block
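This check-then-fetch behavior can be sketched for a direct-mapped cache; the sizes are illustrative, not from the slides:

```python
BLOCK = 16          # bytes per block (the unit exchanged with main memory)
LINES = 64          # number of cache lines

cache = {}          # line index -> tag of the block currently stored there

def access(addr):
    """Return 'hit' or 'miss' for a byte address, updating the cache."""
    block = addr // BLOCK              # which memory block holds this byte
    line, tag = block % LINES, block // LINES
    if cache.get(line) == tag:         # processor first checks the cache
        return "hit"
    cache[line] = tag                  # miss: whole block is read into the cache
    return "miss"
```

Locality shows up immediately: after `access(0)` misses and loads the block, the nearby references `access(1)` through `access(15)` all hit in that same block.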

Page 46: Operating System

Cache/Main-Memory Structure

Page 47: Operating System

Cache Read Operation

Page 48: Operating System

Cache Principles

• Cache size
– Even small caches have a significant impact on performance

• Block size
– The unit of data exchanged between cache and main memory
– A larger block size yields more hits, until the probability of using newly fetched data becomes less than the probability of reusing data that has to be moved out of the cache

Page 49: Operating System

Cache Principles

• Mapping function
– Determines which cache location the block will occupy
– Direct-mapped cache, fully associative cache, N-way set-associative cache

• Replacement algorithm
– Chooses which block to replace
– e.g., the least-recently-used (LRU) algorithm
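LRU replacement for one set of an N-way set-associative cache can be sketched with Python's `OrderedDict` tracking recency; the helper is illustrative:

```python
from collections import OrderedDict

def make_set(ways):
    """One cache set holding at most `ways` blocks, replaced by LRU."""
    blocks = OrderedDict()                 # order = recency (front is LRU)
    def access(tag):
        if tag in blocks:
            blocks.move_to_end(tag)        # hit: mark most recently used
            return "hit"
        if len(blocks) >= ways:
            blocks.popitem(last=False)     # set full: evict least recently used
        blocks[tag] = True                 # bring the new block in
        return "miss"
    return access
```

With a 2-way set, the sequence 1, 2, 1, 3 evicts block 2 (not 1) on the final miss, because the hit on 1 made it the more recently used of the two.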

Page 50: Operating System

Cache Principles

• Write policy
– Dictates when the memory write operation takes place
– Can occur every time the block is updated
– Can occur only when the block is replaced, which minimizes write operations but leaves main memory in an obsolete state

Page 51: Operating System

• 2. [25 pts] This problem concerns the performance of the cache memory in web applications that play media files. Consider a video-streaming workload that accesses working sets of size 256 KB sequentially with the following byte-address stream:
• 0, 2, 4, 6, 8, 10, …
• Suppose the computer that processes the above stream has a 32 KB direct-mapped L1 cache. The cache block size is 32 bytes.

• a) What would be the cache miss rate of the address stream above? Show all calculations.

• With 32-byte blocks and a 2-byte stride, 16 consecutive accesses fall in each block, so only the first access to each block misses. Every 16th access is a miss; hence the miss rate is 1/16 = 6.25%.
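The answer can be checked with a small simulation of a direct-mapped cache fed the stream above; the helper function is illustrative:

```python
def miss_rate(cache_bytes, block_bytes, addresses):
    """Simulate a direct-mapped cache and return the fraction of misses."""
    lines = cache_bytes // block_bytes
    cache = {}                              # line index -> stored tag
    misses = 0
    for a in addresses:
        block = a // block_bytes            # which memory block holds this byte
        line, tag = block % lines, block // lines
        if cache.get(line) != tag:
            cache[line] = tag               # miss: fetch block into its line
            misses += 1
    return misses / len(addresses)

stream = range(0, 256 * 1024, 2)            # every other byte, sequentially
rate = miss_rate(32 * 1024, 32, stream)     # -> 0.0625, i.e. 6.25%
```

Re-running with other parameters (cache size, block size) checks the remaining parts of the problem the same way.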


Page 52: Operating System

• b) If the cache size were changed to 64 KB, what would be the change in the miss rate? Justify your answer.

• There would be no change. The stream is sequential with no reuse, so each block is fetched exactly once regardless of cache capacity; for this workload the miss rate depends only on the block size.

• c) If the cache organization were changed to two-way set associative, without changing the block size, would the cache miss rate change? Justify your answer.

• There would be no change. Data is still fetched from memory in units of blocks, and each block is accessed only once before the stream moves past it, so extra associativity cannot create additional hits; the miss rate will be the same.

 

Page 53: Operating System

• d) If the cache block size were changed to 16 B, would the miss rate change? If so, what would be the new value?

• Yes. With 16-byte blocks and a 2-byte stride, only 8 accesses fall in each block, so every 8th access would be a miss and the rate rises to 1/8 = 12.5%.

Page 54: Operating System

• e) Prefetching is a technique that can be used effectively in streaming applications, such as the one described above. Describe how prefetching works and how it impacts the cache miss rate.

• In prefetching, cache lines are brought in speculatively, in anticipation of future accesses. This works very well in streaming applications, where memory accesses arrive in sequential order, so the prefetch of a new block can be overlapped with consumption of the current block. When each prefetch completes in time, the demand miss rate is driven effectively to zero.

