Lecture 5 Memory Management Part I
Transcript
Page 1: Lecture 5

Lecture 5

Memory Management: Part I

Page 2: Lecture 5

Lecture Highlights

• Introduction to Memory Management
  - What is memory management
  - Related problems of redundancy, fragmentation and synchronization
• Memory Placement Algorithms
• Continuous Memory Allocation Scheme
  - Parameters involved
  - Parameter-performance relationships
  - Some sample results

Page 3: Lecture 5

Introduction: What is memory management?

Memory management primarily deals with space multiplexing.

All processes must be scheduled in such a way that every user gets the illusion that their processes reside in RAM.

The job of the memory manager:
• keep track of which parts of memory are in use and which parts are not in use
• allocate memory to processes when they need it and deallocate it when they are done
• manage swapping between main memory and disk when main memory is not big enough to hold all the processes

Page 4: Lecture 5

What is memory management: Visual Representation

Figure: Main memory (operating system plus user space holding processes) alongside the hard disc, with P1 being swapped out of RAM and P2 being swapped in.

Page 5: Lecture 5

Memory Management: An Example

This example illustrates the basic concept of memory management. We consider a "mickey mouse" (toy) system where:
• Memory size: 16 MB
• Transfer rate: 2 MB/ms
• RR time quantum: 2 ms

We'll use the process mix on the next slide and follow the RAM configuration before and after each time slot, as well as the action taking place during the time slot, for five time slots.

Page 6: Lecture 5

Memory Management: An Example – The Process Mix

Process ID | Execution Time (ms) | Size (MB) | Transfer Time Needed (ms)
P1         | 4                   | 2         | 1
P2         | 2                   | 6         | 3
P3         | 6                   | 4         | 2
P4         | 8                   | 4         | 2
P5         | 2                   | 2         | 1
P6         | 10                  | 4         | 2
P7         | 2                   | 2         | 1
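
To make the transfer-time column concrete, here is a minimal sketch (Python, not part of the original slides; the names are illustrative) that derives each transfer time from the process size and the 2 MB/ms transfer rate given above:

    # Hypothetical helper, not from the slides: transfer time at 2 MB/ms.
    TRANSFER_RATE_MB_PER_MS = 2

    process_mix = {          # pid: (execution time in ms, size in MB)
        "P1": (4, 2), "P2": (2, 6), "P3": (6, 4), "P4": (8, 4),
        "P5": (2, 2), "P6": (10, 4), "P7": (2, 2),
    }

    for pid, (exec_ms, size_mb) in process_mix.items():
        transfer_ms = size_mb / TRANSFER_RATE_MB_PER_MS
        print(f"{pid}: size {size_mb} MB -> transfer time {transfer_ms:.0f} ms")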

Page 7: Lecture 5

Memory Management: An Example – Time Slot 1

RAM configuration before: P1 (4 ms), P2 (2 ms), P3 (6 ms), P4 (8 ms)

Action during time slot 1:
• P1 executes

RAM configuration after: P1 (2 ms), P2 (2 ms), P3 (6 ms), P4 (8 ms)

Page 8: Lecture 5

Memory Management: An Example – Time Slot 2

RAM configuration before: P1 (2 ms), P2 (2 ms), P3 (6 ms), P4 (8 ms)

Actions during time slot 2:
• P1 spooled out in 1 ms
• P5 spooled in in 1 ms
• P2 executes
• P2 done

RAM configuration after: P5 (2 ms), P2 (0 ms), P3 (6 ms), P4 (8 ms)

Page 9: Lecture 5

Memory Management: An Example – Time Slot 3

RAM configuration before: P5 (2 ms), P2 (0 ms), P3 (6 ms), P4 (8 ms)

Actions during time slot 3:
• P2 spooled out for 2 ms (2 ms of its 3 ms transfer)
• P3 executes

RAM configuration after: P5 (2 ms), P2 (0 ms), P3 (4 ms), P4 (8 ms)

Page 10: Lecture 5

Memory Management: An Example – Time Slot 4

RAM configuration before: P5 (2 ms), P2 (0 ms), P3 (4 ms), P4 (8 ms)

Actions during time slot 4:
• P2 spool-out completes (remaining 1 ms)
• P6 spooled in for 1 ms (first half of its 2 ms transfer)
• P4 executes

RAM configuration after: P5 (2 ms), P3 (4 ms), P4 (6 ms), P6 (10 ms), 2 MB hole

Page 11: Lecture 5

Memory Management: An Example – Time Slot 5

RAM configuration before: P5 (2 ms), P3 (4 ms), P4 (6 ms), P6 (10 ms), 2 MB hole

Actions during time slot 5:
• P6 spool-in completes (remaining 1 ms)
• P7 spooled in in 1 ms
• P5 executes
• P5 done

RAM configuration after: P5 (0 ms), P3 (4 ms), P4 (6 ms), P7 (2 ms), P6 (10 ms)

Page 12: Lecture 5

Memory Management: An Example

The previous slides started a stepwise walkthrough of the mickey mouse system. Try to complete the walkthrough from this point on.

Page 13: Lecture 5

Related Problems: Synchronization Problem in Spooling

Spooling enables the transfer of one process while another process is executing. It aims to keep the CPU from sitting idle, and thus to use the CPU more efficiently.

The processes being transferred to main memory can be of different sizes. When a very large process is transferred, its transfer time may exceed the combined remaining execution time of the processes already in RAM. The CPU then goes idle, which is exactly the problem spooling was invented to prevent.

This is termed the synchronization problem: because process sizes vary, the transfer and execution streams cannot be guaranteed to stay synchronized.
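
A minimal sketch (Python; the helper is hypothetical and not from the slides) of the condition just described: the CPU stalls whenever an incoming transfer takes longer than the execution work left in RAM.

    # Hypothetical check illustrating the synchronization problem described above.
    def cpu_idle_time(transfer_ms: float, remaining_exec_ms: list[float]) -> float:
        """Return how long the CPU sits idle while one process is spooled in."""
        work_left = sum(remaining_exec_ms)          # execution time still available in RAM
        return max(0.0, transfer_ms - work_left)    # idle only if the transfer outlasts it

    # Example: a 6 MB process at 2 MB/ms needs 3 ms, but RAM only holds 2 ms of work.
    print(cpu_idle_time(3.0, [1.0, 1.0]))           # -> 1.0 ms of idle CPU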

Page 14: Lecture 5

Related Problems: Redundancy Problem

Usually the combined size of all processes is much bigger than the RAM size, and for this reason processes are swapped in and out continuously.

This raises a question: what is the use of transferring the entire process when only part of its code is executed in a given time slot?

This is termed the redundancy problem.

Page 15: Lecture 5

Related Problems: Fragmentation

Fragmentation arises when free memory is broken into little pieces as processes are loaded into and removed from memory.

Fragmentation is of two types:
• External fragmentation
• Internal fragmentation

In the present context we are mainly concerned with external fragmentation, which is explored in greater detail in the following slides.

Page 16: Lecture 5

Generation of Holes in a System: An Example

Figure (memory maps with boundaries at 400K, 1000K, 2000K, 2300K and 2560K):
(a) OS occupies 0-400K, P1 400K-1000K, P2 1000K-2000K, P3 2000K-2300K; a 260K hole remains at 2300K-2560K.
(b) P2 terminates, leaving a 1000K hole between 1000K and 2000K.
(c) P4 is allocated at 1000K-1700K, leaving holes of 300K (1700K-2000K) and 260K (2300K-2560K).
P5, of size 500K, cannot be allocated in part (c).

Page 17: Lecture 5

Generation of Holes in a System: An Example

In the previous figure, P1, P2 and P3 are initially in RAM, and the remaining 260K is not enough for P4 (700K). (part a)

When P2 terminates, it is spooled out, leaving behind a hole of size 1000K. We now have two holes, of sizes 1000K and 260K respectively. (part b)

At this point there is a hole big enough to spool in P4, which leaves us with two holes of sizes 300K and 260K. (part c)

Thus, holes are generated because the size of the spooled-out process is not the same as the size of the process waiting to be spooled in.
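
A minimal sketch (Python; the hole-list representation is an assumption, not from the slides) of how the set of holes might evolve through parts (a)-(c): freeing a terminated process adds a hole, and allocating a smaller process into a hole shrinks it.

    # Hypothetical hole-list bookkeeping for the example above.
    # Each hole is (start_kb, size_kb); RAM spans 0K-2560K with the OS at 0K-400K.
    holes = [(2300, 260)]                 # part (a): only the 260K hole is free

    def free_region(start, size):
        """A process terminates: its region becomes a new hole (no coalescing shown)."""
        holes.append((start, size))
        holes.sort()

    def allocate(size):
        """First-fit style allocation from the hole list; returns the start address."""
        for i, (start, hole_size) in enumerate(holes):
            if hole_size >= size:
                if hole_size == size:
                    holes.pop(i)                                 # hole exactly consumed
                else:
                    holes[i] = (start + size, hole_size - size)  # shrink the hole
                return start
        return None                                              # no single hole fits

    free_region(1000, 1000)   # part (b): P2 (1000K-2000K) terminates
    print(allocate(700))      # part (c): P4 fits at 1000K -> holes of 300K and 260K remain
    print(allocate(500))      # P5 (500K) cannot be placed -> None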

Page 18: Lecture 5

Related Problems: Fragmentation – External Fragmentation

External fragmentation exists when enough total memory space exists to satisfy a request, but the space is not contiguous; storage is fragmented into a large number of small holes.

Referring to the figure of the example, repeated on the next slide, two such cases can be observed.

Page 19: Lecture 5

Related Problems: Fragmentation – External Fragmentation

Figure (repeated from the hole-generation example):
(a) OS 0-400K, P1 400K-1000K, P2 1000K-2000K, P3 2000K-2300K, 260K hole at 2300K-2560K.
(b) P2 terminates, leaving a 1000K hole at 1000K-2000K.
(c) P4 allocated at 1000K-1700K, leaving holes of 300K and 260K.
P5, of size 500K, cannot be allocated due to external fragmentation.

Page 20: Lecture 5

Related Problems: Fragmentation – External Fragmentation

From the figure on the last slide, we see that:
• In part (a), there is a total external fragmentation of 260K, a space too small to satisfy the request of either of the two remaining processes, P4 and P5.
• In part (c), there is a total external fragmentation of 560K. This space would be large enough to run process P5, except that the free memory is not contiguous: it is fragmented into two pieces, neither of which is large enough by itself to satisfy P5's request.
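
A small sketch (Python, not from the slides; the variable names are illustrative) of the part (c) situation: total free memory covers the request, yet no single hole does.

    # Hypothetical illustration of external fragmentation using part (c) of the example.
    holes_kb = [300, 260]          # the two holes left after allocating P4
    request_kb = 500               # P5

    enough_in_total = sum(holes_kb) >= request_kb             # True: 560K >= 500K
    contiguous_fit = any(h >= request_kb for h in holes_kb)   # False: no single hole fits

    print(enough_in_total, contiguous_fit)   # True False -> external fragmentation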

Page 21: Lecture 5

Related ProblemsFragmentation – External Fragmentation

This fragmentation problem can be severe. In the worst case, there could be a block of free (wasted) memory between every two processes. If all this memory were in one big free block, a few more processes could be run. Depending on the total amount of memory storage and the average process size, external fragmentation may be either a minor or major problem.

Page 22: Lecture 5

Related Problems: Fragmentation – External Fragmentation

One solution to the problem of external fragmentation is compaction. The goal is to shuffle the memory contents so that all free memory sits together in one large block.

The simplest compaction algorithm moves all processes toward one end of memory, pushing all holes in the other direction and producing one large hole of available memory. This scheme can be quite expensive.

The figure on the following slide shows different ways to compact memory. Selecting an optimal compaction strategy is quite difficult.
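
A minimal sketch (Python; the data layout is an assumption, not from the slides) of the simplest strategy just described: slide every process toward the low end of memory and leave a single hole at the top.

    # Hypothetical compaction of a process list; each entry is (name, start_kb, size_kb).
    def compact(processes, os_size_kb, memory_kb):
        """Move all processes toward the low end, just after the OS, leaving one hole."""
        next_free = os_size_kb
        moved = []
        for name, _start, size in sorted(processes, key=lambda p: p[1]):
            moved.append((name, next_free, size))   # relocate the process to next_free
            next_free += size
        hole = (next_free, memory_kb - next_free)   # single hole at the high end
        return moved, hole

    procs = [("P1", 300, 200), ("P2", 500, 100), ("P3", 1000, 200), ("P4", 1500, 400)]
    print(compact(procs, os_size_kb=300, memory_kb=2100))
    # -> processes packed at 300K-1200K and one 900K hole at 1200K-2100K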

Page 23: Lecture 5

Related Problems: Fragmentation – External Fragmentation

Figure: Different Ways To Compact Memory (addresses in K; in every configuration the OS occupies 0-300K, P1 300K-500K and P2 500K-600K)
• Original allocation: 400K hole at 600K-1000K, P3 at 1000K-1200K, 300K hole at 1200K-1500K, P4 at 1500K-1900K, 200K hole at 1900K-2100K.
• Moved 600K: P3 and P4 relocated to 600K-800K and 800K-1200K, leaving one 900K hole at 1200K-2100K.
• Moved 400K: P4 relocated into the hole at 600K-1000K, P3 left at 1000K-1200K, leaving one 900K hole at 1200K-2100K.
• Moved 200K: P3 relocated to 1900K-2100K, leaving one 900K hole at 600K-1500K.

Page 24: Lecture 5

Related Problems: Fragmentation – External Fragmentation

As mentioned earlier, compaction is an expensive scheme. The following example gives a more concrete idea of the cost.

Given:
• RAM size = 128 MB
• Access time for 1 byte of RAM = 10 ns

Each byte must be accessed twice during compaction (one read and one write). Thus:

Compaction time = 2 x (10 x 10^-9 s) x (128 x 10^6 bytes) = 2560 x 10^-3 s = 2560 ms (roughly 2.5 s)

Supposing we are using RR scheduling with a time quantum of 2 ms, the compaction time is equivalent to 1280 time slots.
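
The same arithmetic as a tiny sketch (Python; the parameter names are assumptions, not from the slides):

    # Hypothetical compaction-cost calculation matching the numbers above.
    ram_bytes = 128 * 10**6          # 128 MB, using the slide's decimal approximation
    access_time_s = 10e-9            # 10 ns per byte
    quantum_s = 2e-3                 # 2 ms round-robin quantum

    compaction_s = 2 * access_time_s * ram_bytes   # each byte read once and written once
    print(compaction_s)                            # 2.56 s
    print(compaction_s / quantum_s)                # 1280.0 time slots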

Page 25: Lecture 5

Related Problems: Fragmentation – External Fragmentation

Compaction is usually governed by the following two thresholds (a sketch combining both checks appears below):

• Memory hole-size threshold: if the sizes of all the holes are at most a predefined hole size, the main memory undergoes compaction. This predefined size is termed the hole-size threshold. E.g., if we have two holes of sizes x and y and the hole-size threshold is 4 KB, then compaction is done provided x <= 4 KB and y <= 4 KB.

• Total hole percentage: the percentage of total hole size over memory size. Compaction is undertaken only if it exceeds the designated threshold. E.g., taking the same two holes of sizes x and y, a total hole percentage threshold of 6%, and a RAM size of 32 MB, compaction is done only if (x + y) >= 6% of 32 MB.
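
A minimal sketch (Python, not from the slides) of the two-threshold compaction decision; one reading of the slide is that both conditions must hold, so the sketch combines them with an AND, and the default threshold values are only the examples used above.

    # Hypothetical compaction trigger using both thresholds described above.
    def should_compact(hole_sizes_kb, memory_kb,
                       hole_threshold_kb=4, total_hole_pct_threshold=6.0):
        """Compact only if every hole is small AND the holes add up to a large fraction."""
        if not hole_sizes_kb:
            return False
        all_holes_small = all(h <= hole_threshold_kb for h in hole_sizes_kb)
        total_hole_pct = 100.0 * sum(hole_sizes_kb) / memory_kb
        return all_holes_small and total_hole_pct >= total_hole_pct_threshold

    # 32 MB RAM with many 4 KB holes adding up to more than 6% of memory.
    print(should_compact([4] * 600, memory_kb=32 * 1024))   # True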

Page 26: Lecture 5

Related Problems: Fragmentation – External Fragmentation

Another possible solution to the external fragmentation problem is to permit the physical address space of a process to be noncontiguous, allowing a process to be allocated physical memory wherever it is available. One way of implementing this solution is a paging scheme.

Paging divides physical memory into many small, equal-sized frames. Logical memory is broken into blocks of the same size, called pages. When a process is to be executed, its pages are loaded into any available memory frames. With a paging scheme, external fragmentation can be eliminated entirely.

Paging is discussed in detail in the next lecture.

Page 27: Lecture 5

Related Problems: Fragmentation – Internal Fragmentation

Consider a hole of 18,464 bytes, as shown in the figure. Suppose the next process requests 18,462 bytes. If we allocate exactly the requested block, we are left with a hole of 2 bytes. The overhead of keeping track of this hole will be substantially larger than the hole itself. The general approach is therefore to allocate such very small holes as part of the larger request.

Figure: operating system, P7, a hole of 18,464 bytes (the next request is for 18,462 bytes), P43; the 2 leftover bytes become internal fragmentation.

Page 28: Lecture 5

Related Problems: Fragmentation – Internal Fragmentation

As illustrated on the previous slide, the allocated memory may be slightly larger than the requested memory. The difference between these two numbers is internal fragmentation: memory that is internal to a partition but is not being used.

In other words, unused memory within allocated memory is called internal fragmentation.
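
A tiny sketch (Python; the policy and the minimum-leftover constant are assumptions, not from the slides) of the allocation decision just described: if splitting a hole would leave a sliver too small to be worth tracking, hand the whole hole to the requester and count the sliver as internal fragmentation.

    # Hypothetical allocation policy illustrating internal fragmentation.
    MIN_LEFTOVER_BYTES = 16     # assumed: slivers smaller than this are not tracked

    def allocate_from_hole(hole_bytes, request_bytes):
        """Return (granted_bytes, internal_fragmentation_bytes)."""
        leftover = hole_bytes - request_bytes
        if 0 <= leftover < MIN_LEFTOVER_BYTES:
            return hole_bytes, leftover      # give the whole hole; the sliver is wasted
        return request_bytes, 0              # split the hole; no internal fragmentation

    print(allocate_from_hole(18_464, 18_462))   # (18464, 2) -> 2 bytes wasted internally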

Page 29: Lecture 5

Memory Placement Algorithms

As seen earlier, holes are created while swapping processes in and out of RAM. In general, there is at any time a set of holes of various sizes scattered throughout memory.

When a process arrives and needs memory, we search this set for a hole that is best suited to the process.

The following slide describes three algorithms used to select a free hole.

Page 30: Lecture 5

Memory Placement Algorithms

The three placement algorithms are:
• First-fit: allocate the first hole that is big enough.
• Best-fit: allocate the smallest hole that is big enough.
• Worst-fit: allocate the largest hole.

Simulations have shown that both first-fit and best-fit are better than worst-fit in terms of both time and storage utilization. Neither first-fit nor best-fit is clearly better in terms of storage utilization, but first-fit is usually faster. Sketches of the three strategies appear below.
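
Minimal sketches (Python; the hole-list representation is an assumption, not from the slides) of the three placement strategies. Each returns the index of the chosen hole, or None if the request cannot be satisfied.

    # Hypothetical hole selection; holes is a list of free-block sizes (e.g. in KB).
    def first_fit(holes, request):
        for i, size in enumerate(holes):
            if size >= request:
                return i                       # first hole big enough
        return None

    def best_fit(holes, request):
        candidates = [(size, i) for i, size in enumerate(holes) if size >= request]
        return min(candidates)[1] if candidates else None   # smallest adequate hole

    def worst_fit(holes, request):
        candidates = [(size, i) for i, size in enumerate(holes) if size >= request]
        return max(candidates)[1] if candidates else None   # largest hole

    holes = [300, 260, 900]                    # hole sizes in KB
    print(first_fit(holes, 500), best_fit(holes, 260), worst_fit(holes, 260))
    # -> 2 1 2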

Page 31: Lecture 5

Continuous Memory Allocation Scheme

The continuous memory allocation scheme loads processes into memory in contiguous regions, in sequential order.

When a process is removed from main memory, a new process is loaded if there is a hole big enough to hold it.

This scheme is easy to implement; however, it suffers from external fragmentation. Compaction consequently becomes an inevitable part of the scheme.

Page 32: Lecture 5

Continuous Memory Allocation Scheme: Parameters Involved

• Memory size
• RAM access time
• Disc access time
• Compaction thresholds
  - Memory hole-size threshold
  - Total hole percentage
• Memory placement algorithm
• Round robin time slot

Page 33: Lecture 5

Continuous Memory Allocation Scheme: Effect of Memory Size

As anticipated, the greater the amount of memory available, the higher the system performance.

Page 34: Lecture 5

Continuous Memory Allocation Scheme: Effect of RAM and Disc Access Times

RAM access time and disc access time together define the transfer rate in a system.

A higher transfer rate means less time is needed to move processes between main memory and secondary memory, increasing the efficiency of the operating system.

Since compaction involves accessing the entire RAM twice, a lower RAM access time also translates to lower compaction times.

Page 35: Lecture 5

Continuous Memory Allocation Scheme: Effect of Compaction Thresholds

Optimal values of the hole-size threshold depend largely on the sizes of the processes, since it is these processes that have to fit into the holes.

Threshold settings that lead to frequent compaction can degrade performance rapidly, since compaction is quite expensive in terms of time.

Threshold values also play a key role in determining the state of fragmentation present.

Their effect on system performance is not straightforward and has seldom been the focus of studies in this field.

Page 36: Lecture 5

Continuous Memory Allocation Scheme: Effect of Memory Placement Algorithms

Simulations have shown that both first-fit and best-fit are better than worst-fit in terms of both time and storage utilization.

Neither first-fit nor best-fit is clearly better in terms of storage utilization, but first-fit is generally faster.

Page 37: Lecture 5

Continuous Memory Allocation Scheme: Effect of Round Robin Time Slot

As depicted in the figures on the next slide, the best choice for the time slot corresponds to the transfer time of a single process. For example, if most processes required 2 ms to be transferred, a time slot of 2 ms would be ideal: while one process completes execution, another can be transferred.

However, the transfer times of processes in a real system seldom cluster around a single value, because a system runs many different types of processes. The variance depicted in the figure is too large in a real system and makes the time slot a difficult parameter to choose.

Page 38: Lecture 5

Continuous Memory Allocation Scheme: Effect of Round Robin Time Slot

Figure: Two plots of number of processes versus process size. In the ideal process-size graph, the sizes cluster around a single value, and the time slot corresponding to that size's transfer time is marked. In the realistic process-size graph, the sizes are spread widely.

Page 39: Lecture 5

Continuous Memory Allocation Scheme: Performance Measures

• Average waiting time
• Average turnaround time
• CPU utilization
• CPU throughput
• Memory fragmentation percentage over time: a new performance measure that quantifies compaction cost. It is calculated as the percentage of time spent in compaction versus the total time (see the sketch below).
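
One possible reading of that last measure, as a small sketch (Python; the bookkeeping and names are assumptions, not from the slides):

    # Hypothetical computation of the compaction-cost measure described above.
    def compaction_cost_percentage(compaction_intervals_ms, total_time_ms):
        """Percentage of the run spent performing compaction."""
        return 100.0 * sum(compaction_intervals_ms) / total_time_ms

    print(compaction_cost_percentage([2560, 2560], total_time_ms=100_000))  # 5.12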

Page 40: Lecture 5

Continuous Memory Allocation: Implementation

As part of Assignment 3, you'll implement a memory manager within an operating system satisfying the given requirements. (For complete details refer to Assignment 3.)

A brief explanation of the assignment follows in the next slides.

Page 41: Lecture 5

Continuous Memory Allocation: Implementation Details

Some specifications of the memory manager you'll implement:
• A continuous memory allocation scheme is used.
• The PCBs are executed based on a round robin mechanism.
• The main memory size is 32 MB.
• Job sizes vary between 20 KB and 2 MB (uniform random distribution, multiple of 20 KB).
• The disc capacity is 500 MB, initially 50% full with jobs.

Page 42: Lecture 5

Continuous Memory Allocation: Implementation Details

• Use the first-fit, best-fit and worst-fit techniques (the choice should be a variable).
• Do compaction when fragmentation is more than 6% and all holes are 50 KB or less (assume memory access time = 14 x 10^-9 seconds).
• Use a varying time slot (a variable parameter, multiple of 1 ms).
• Disc access time = 1 ms + (job size in bytes / 500000) ms.
• Job execution time ranges between 2 ms and 10 ms (multiple of 1 ms).

A small sketch collecting these parameters appears below.
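
A small sketch (Python; the names are hypothetical and this is not a required part of the assignment) gathering the parameters above and the disc-access-time formula:

    # Hypothetical parameter block mirroring the assignment's specifications.
    RAM_SIZE_MB = 32
    DISC_SIZE_MB = 500
    JOB_SIZE_RANGE_KB = (20, 2048)        # multiples of 20 KB, uniform random
    FRAGMENTATION_THRESHOLD_PCT = 6
    HOLE_SIZE_THRESHOLD_KB = 50
    MEMORY_ACCESS_TIME_S = 14e-9
    FIT_POLICY = "first"                  # "first", "best" or "worst"

    def disc_access_time_ms(job_size_bytes: int) -> float:
        """Disc access time = 1 ms + (job size in bytes / 500000) ms, as specified."""
        return 1.0 + job_size_bytes / 500_000

    print(disc_access_time_ms(2 * 1024 * 1024))   # a 2 MB job -> about 5.19 ms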

Page 43: Lecture 5

Continuous Memory Allocation: Implementation Details

Once you're done with the implementation, think about the problem from an algorithmic-design point of view. The implementation involves many parameters, such as:
• Memory size
• Disc access time
• Time slot for RR
• Compaction thresholds
• RAM access time
• Fitting algorithm

Page 44: Lecture 5

Continuous Memory Allocation: Implementation Details

The eventual goal is to optimize the several performance measures listed earlier.

Perform several test runs and write a summary indicating how sensitive some of the performance measures are to some of the above parameters.

Page 45: Lecture 5

Continuous Memory Allocation: Sample Screenshots of Simulation

Setting variable parameters

Page 46: Lecture 5

Continuous Memory Allocation: Sample Screenshots of Simulation

Initial Hard Disc Configuration

Page 47: Lecture 5

Continuous Memory Allocation: Sample Screenshots of Simulation

Initial RAM Configuration

Page 48: Lecture 5

Continuous Memory Allocation: Sample Screenshots of Simulation

Memory Manager In Execution

Page 49: Lecture 5

Continuous Memory Allocation: Sample Screenshots of Simulation

Compaction Scenario

Page 50: Lecture 5

Continuous Memory Allocation: Sample Screenshots of Simulation

Final Performance Measures For The Run

Page 51: Lecture 5

Continuous Memory Allocation: Sample Tabulated Data from Simulation

TABLE: Round Robin Time Quantum vs. Performance Measures

Time Slot | Avg Waiting Time | Avg Turnaround Time | CPU Utilization | Throughput | Memory Fragmentation %
2         | 3                | 4                   | 5%              | 5          | 29%
3         | 4                | 4                   | 2%              | 8          | 74%
4         | 5                | 6                   | 3%              | 12         | 74%
5         | 12               | 12                  | 1%              | 17         | 90%

Page 52: Lecture 5

Continuous Memory Allocation: Sample Tabulated Data from Simulation

TABLE: Performance measures vs. RR time slot for first fit (F), best fit (B) and worst fit (W)

RR Time Slot | Avg Turnaround Time F/B/W | Avg Waiting Time F/B/W | CPU Utilization F/B/W | Throughput F/B/W | Fragmentation % F/B/W
2            | 4 / 3 / 3                 | 3 / 2 / 2              | 1% / 1% / 1%          | 5 / 5 / 5        | 82 / 74 / 74
3            | 4 / 4 / 4                 | 4 / 4 / 4              | 2% / 2% / 2%          | 8 / 8 / 8        | 74 / 74 / 74
4            | 6 / 6 / 6                 | 5 / 6 / 6              | 3% / 2% / 2%          | 12 / 11 / 11     | 74 / 74 / 74
5            | 12 / 6 / 6                | 12 / 5 / 5             | 1% / 2% / 2%          | 17 / 14 / 14     | 90 / 79 / 79

Page 53: Lecture 5

Continuous Memory Allocation: Sample Graph (using data from simulation)

Figure: Effect of Round Robin Time Quantum on Performance Measures. The chart plots average waiting time, average turnaround time, CPU utilization, throughput and memory fragmentation percentage against time slots of 2, 3, 4 and 5.

Page 54: Lecture 5

Continuous Memory Allocation: Sample Graph (comparing memory placement algorithms)

Figure: Average turnaround time vs. round robin time slot (2, 3, 4, 5) for the three memory placement algorithms (first-fit, best-fit, worst-fit).

Page 55: Lecture 5

Continuous Memory Allocation: Sample Graph (comparing memory placement algorithms)

Figure: Average waiting time vs. round robin time slot (2, 3, 4, 5) for the three memory placement algorithms (first-fit, best-fit, worst-fit).

Page 56: Lecture 5

Continuous Memory Allocation: Sample Graph (comparing memory placement algorithms)

Figure: CPU utilization vs. round robin time slot (2, 3, 4, 5) for the three memory placement algorithms (first-fit, best-fit, worst-fit).

Page 57: Lecture 5

Continuous Memory Allocation: Sample Graph (comparing memory placement algorithms)

Figure: Throughput vs. round robin time slot (2, 3, 4, 5) for the three memory placement algorithms (first-fit, best-fit, worst-fit).

Page 58: Lecture 5

Continuous Memory Allocation: Sample Graph (comparing memory placement algorithms)

Figure: Fragmentation percentage vs. round robin time slot (2, 3, 4, 5) for the three memory placement algorithms (first-fit, best-fit, worst-fit).

Page 59: Lecture 5

Continuous Memory Allocation: Fragmentation Percentage Over Time

Figure: Fragmentation percentage plotted over successive time windows (1 through 17) for time slots of 2, 3, 4 and 5.

Page 60: Lecture 5

Continuous Memory Allocation: Conclusions from the Sample Simulation

From the sample simulation:
• An optimal value of the round robin quantum emerged.
• None of the memory placement algorithms could be termed optimal.
• Studying the fragmentation percentage over time indicated the probable time windows in which compaction was undertaken.

Page 61: Lecture 5

Lecture Summary

• Introduction to Memory Management
  - What is memory management
  - Related problems of redundancy, fragmentation and synchronization
• Memory Placement Algorithms
• Continuous Memory Allocation Scheme
  - Parameters involved
  - Parameter-performance relationships
  - Some sample results

Page 62: Lecture 5

Preview of Next Lecture

The following topics will be covered in the next lecture:
• Introduction to Paging
  - Paging Hardware & Page Tables
  - Paging Model of Memory
  - Page Size
• Paging versus Continuous Allocation Scheme
• Multilevel Paging
• Page Replacement & Page Anticipation Algorithms
• Parameters Involved
• Parameter-Performance Relationships
• Sample Results

