Page 1: Memory Management

Memory Management

G.Anuradha

Page 2: Memory Management

Outline

Background
Logical versus Physical Address Space
Swapping
Contiguous Allocation
Paging
Segmentation
Segmentation with Paging

Page 3: Memory Management

Background

Program must be brought into memory and placed within a process for it to be executed.

Input Queue - collection of processes on the disk that are waiting to be brought into memory for execution.

User programs go through several steps before being executed.

Page 4: Memory Management

Virtualizing Resources

Physical reality: processes/threads share the same hardware
Need to multiplex the CPU (CPU scheduling)
Need to multiplex the use of memory (today's topic)

Why worry about memory multiplexing?
The complete working state of a process and/or the kernel is defined by its data in memory (and registers)
Consequently, we cannot just let different processes use the same memory
We probably don't want different processes to even have access to each other's memory (protection)

Page 5: Memory Management

Important Aspects of Memory Multiplexing

Controlled overlap:
Processes should not collide in physical memory
Conversely, we would like the ability to share memory when desired (for communication)

Protection:
Prevent access to the private memory of other processes
Different pages of memory can be given special behavior (read-only, invisible to user programs, etc.)
Kernel data protected from user programs

Translation:
Ability to translate accesses from one address space (virtual) to a different one (physical)
When translation exists, the process uses virtual addresses and physical memory uses physical addresses

Page 6: Memory Management

Names and Binding

Symbolic names, logical names, physical names

Symbolic names: known in a context or path
file names, program names, printer/device names, user names

Logical names: used to label a specific entity
inodes, job numbers, major/minor device numbers, process id (pid), uid, gid, ...

Physical names: address of an entity
inode address on disk or in memory, entry point or variable address, PCB address

Page 8: Memory Management

Binding of instructions and data to memory

Address binding of instructions and data to memory addresses can happen at three different stages:

Compile time: if the memory location is known a priori, absolute code can be generated; the code must be recompiled if the starting location changes.

Load time: relocatable code must be generated if the memory location is not known at compile time.

Execution time: binding is delayed until run time if the process can be moved during its execution from one memory segment to another; this needs hardware support for address maps (e.g., base and limit registers).

Page 9: Memory Management

Binding time tradeoffs

Early binding (compiler)
produces efficient code
allows checking to be done early
allows estimates of running time and space

Delayed binding (linker, loader)
produces efficient code, allows separate compilation
portability and sharing of object code

Late binding (VM, dynamic linking/loading, overlaying, interpreted code)
less efficient, checks done at run time
flexible, allows dynamic reconfiguration

Page 10: Memory Management

Multi-step Processing of a Program for Execution

Preparation of a program for execution involves components at:
Compile time (i.e., "gcc")
Link/load time (the UNIX "ld" does linking)
Execution time (e.g., dynamic libraries)

Addresses can be bound to final values anywhere in this path
Depends on hardware support
Also depends on the operating system

Dynamic libraries
Linking is postponed until execution
A small piece of code, the stub, is used to locate the appropriate memory-resident library routine
The stub replaces itself with the address of the routine and executes the routine

Page 16: Memory Management

Dynamic Loading

A routine is not loaded until it is called.
Better memory-space utilization; an unused routine is never loaded.
Useful when large amounts of code are needed to handle infrequently occurring cases.
No special support from the operating system is required; implemented through program design.
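
As an illustration (not taken from the slides), on a POSIX system a program can load a library routine on demand with dlopen/dlsym, so the library is not mapped into the process until this code path actually runs. A minimal sketch, assuming libm.so.6 and its cos symbol are available:

/* build with: gcc demo.c -ldl */
#include <stdio.h>
#include <dlfcn.h>

int main(void)
{
    /* Load the math library only when this rarely-used path is reached. */
    void *handle = dlopen("libm.so.6", RTLD_LAZY);
    if (!handle) {
        fprintf(stderr, "dlopen failed: %s\n", dlerror());
        return 1;
    }

    /* Look up the routine's address inside the freshly loaded library. */
    double (*cosine)(double) = (double (*)(double))dlsym(handle, "cos");
    if (!cosine) {
        fprintf(stderr, "dlsym failed: %s\n", dlerror());
        dlclose(handle);
        return 1;
    }

    printf("cos(0.0) = %f\n", cosine(0.0));
    dlclose(handle);
    return 0;
}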

Page 17: Memory Management

Dynamic Linking

Linking is postponed until execution time.
A small piece of code, the stub, is used to locate the appropriate memory-resident library routine.
The stub replaces itself with the address of the routine and executes the routine.
Operating system support is needed to check whether the routine is already in the process's memory address space.

Page 18: Memory Management

Overlays

Keep in memory only those instructions and data that are needed at any given time.
Needed when a process is larger than the amount of memory allocated to it.
Implemented by the user; no special support from the operating system is required, but the programming design of the overlay structure is complex.
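
A toy, data-only sketch of the overlay idea (the file names pass1.tbl and pass2.tbl are hypothetical): two passes of a program each need a large table, but only the table for the pass currently running is kept resident, in a single shared region.

#include <stdio.h>
#include <stdlib.h>

#define OVERLAY_BYTES (64 * 1024)

/* One region of memory shared by both passes; only one pass's data
 * is resident at any given time. */
static unsigned char overlay_region[OVERLAY_BYTES];

/* Load one pass's table into the overlay region, evicting the previous one. */
static size_t load_overlay(const char *path)
{
    FILE *f = fopen(path, "rb");
    if (!f) { perror(path); exit(1); }
    size_t n = fread(overlay_region, 1, OVERLAY_BYTES, f);
    fclose(f);
    return n;
}

int main(void)
{
    load_overlay("pass1.tbl");   /* only pass 1's data is in memory now */
    /* ... run pass 1 using overlay_region ... */

    load_overlay("pass2.tbl");   /* pass 2 reuses the same memory */
    /* ... run pass 2 using overlay_region ... */
    return 0;
}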

Page 19: Memory Management

Overlaying

Page 20: Memory Management

Memory Partitioning

An early method of managing memory
Pre-virtual memory; not used much now
But it will clarify the later discussion of virtual memory if we look first at partitioning, since virtual memory has evolved from these partitioning methods

Page 21: Memory Management

Types of Partitioning

Fixed Partitioning
Dynamic Partitioning
Simple Paging
Simple Segmentation
Virtual Memory Paging
Virtual Memory Segmentation

Page 22: Memory Management

Fixed Partitioning - Equal-Size Partitions

Any process whose size is less than or equal to the partition size can be loaded into an available partition.
The operating system can swap a process out of a partition if none of the processes in memory are in a ready or running state.

Page 23: Memory Management

Fixed Partitioning Problems

A program may not fit in a partition; the programmer must then design the program with overlays.
Main memory use is inefficient: any program, no matter how small, occupies an entire partition. This results in internal fragmentation.
Fragmentation generally happens when memory blocks are allocated and freed randomly. This results in the splitting of partitioned memory (on disk or in main memory) into smaller, non-contiguous fragments.

Page 24: Memory Management

Fixed Partitioning - Unequal-Size Partitions

Lessens both problems, but doesn't solve them completely.
In the figure, programs up to 16M can be accommodated without overlays.
Smaller programs can be placed in smaller partitions, reducing internal fragmentation.

Page 25: Memory Management

Placement Algorithm

Equal-size partitions: placement is trivial (no options).

Unequal-size partitions: each process can be assigned to the smallest partition within which it will fit, with a queue for each partition. Processes are assigned in such a way as to minimize the wasted memory within a partition.
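
A minimal sketch of this "smallest partition that fits" policy for unequal-size fixed partitions (the partition and process sizes below are made up for illustration):

#include <stdio.h>

#define NPART 5

static const int part_size[NPART] = { 2, 4, 6, 8, 16 };  /* partition sizes in MB */
static int part_used[NPART];                             /* 0 = free */

/* Return the index of the smallest free partition that can hold `size`,
 * or -1 if no free partition is large enough. */
static int place_fixed(int size)
{
    int best = -1;
    for (int i = 0; i < NPART; i++)
        if (!part_used[i] && part_size[i] >= size &&
            (best < 0 || part_size[i] < part_size[best]))
            best = i;
    if (best >= 0)
        part_used[best] = 1;
    return best;
}

int main(void)
{
    int demo[] = { 3, 7, 5 };   /* incoming process sizes in MB */
    for (int i = 0; i < 3; i++) {
        int p = place_fixed(demo[i]);
        /* internal fragmentation for this process = part_size[p] - demo[i] */
        printf("process of %dMB -> partition %d\n", demo[i], p);
    }
    return 0;
}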

Page 26: Memory Management

Fixed Partitioning

Page 27: Memory Management

Remaining Problems with Fixed Partitions

The number of active processes is limited by the system, i.e., limited by the pre-determined number of partitions.
A large number of very small processes will not use the space efficiently, in either the fixed or variable-length partition method.
Obsolete: used in an early IBM mainframe OS, OS/MFT.

Page 28: Memory Management

Dynamic Partitioning

Partitions are of variable length and number.
A process is allocated exactly as much memory as required.

Page 29: Memory Management

Dynamic Partitioning Example

External fragmentation: memory external to all processes becomes fragmented.
This can be resolved using compaction: the OS moves processes so that they are contiguous, but this is time consuming and wastes CPU time.

(Figure 7.4: snapshots of memory as the OS (8M) and processes P1 (20M), P2 (14M), P3 (18M), and P4 (8M) are loaded and swapped out and in, leaving small external holes such as 4M and 6M fragments.)

Page 30: Memory Management

Dynamic Partitioning

The operating system must decide which free block to allocate to a process.

Best-fit algorithm
Chooses the block that is closest in size to the request
Worst performer overall
Since the smallest adequate block is found for the process, the leftover fragment is as small as possible
Memory compaction must therefore be done more often

Page 31: Memory Management

Dynamic Partitioning

First-fit algorithm
Scans memory from the beginning and chooses the first available block that is large enough
Fastest
May leave many processes loaded in the front end of memory that must be scanned over when trying to find a free block

Page 32: Memory Management

Dynamic Partitioning

Next-fit algorithm
Scans memory from the location of the last placement
More often allocates a block of memory at the end of memory, where the largest block is usually found
The largest block of memory is thus broken up into smaller blocks
Compaction is required to obtain a large block at the end of memory

Page 33: Memory Management

Dynamic Partitioning

Worst-fit algorithm
Allocates the largest hole
Produces the largest leftover hole, which may be more useful than the smaller leftover hole from a best-fit approach
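
A minimal sketch of these placement policies over an array of free holes (a real allocator would keep a linked free list and merge adjacent holes when memory is released). The hole sizes and requests mirror the 100K/500K/200K/300K/600K example worked out on a later slide; swap in a different policy function to compare outcomes.

#include <stdio.h>

#define NHOLES 5

static int hole[NHOLES] = { 100, 500, 200, 300, 600 };  /* hole sizes in K */
static int next_ptr = 0;                                /* roving pointer for next fit */

static int first_fit(int req)   /* first hole that is large enough */
{
    for (int i = 0; i < NHOLES; i++)
        if (hole[i] >= req) return i;
    return -1;
}

static int best_fit(int req)    /* smallest hole that still fits */
{
    int best = -1;
    for (int i = 0; i < NHOLES; i++)
        if (hole[i] >= req && (best < 0 || hole[i] < hole[best])) best = i;
    return best;
}

static int worst_fit(int req)   /* largest hole, leaves the biggest remainder */
{
    int worst = -1;
    for (int i = 0; i < NHOLES; i++)
        if (hole[i] >= req && (worst < 0 || hole[i] > hole[worst])) worst = i;
    return worst;
}

static int next_fit(int req)    /* resume scanning where the last search stopped */
{
    for (int n = 0; n < NHOLES; n++) {
        int i = (next_ptr + n) % NHOLES;
        if (hole[i] >= req) { next_ptr = i; return i; }
    }
    return -1;
}

int main(void)
{
    int requests[] = { 212, 417, 112, 426 };
    for (int r = 0; r < 4; r++) {
        int i = best_fit(requests[r]);   /* try first_fit, worst_fit or next_fit here */
        if (i < 0) { printf("%dK must wait\n", requests[r]); continue; }
        printf("%dK placed in a %dK hole\n", requests[r], hole[i]);
        hole[i] -= requests[r];          /* the unused part of the hole stays free */
    }
    return 0;
}

Run as written, this should reproduce the best-fit placements worked out on the later example slide (212K into the 300K hole, 417K into 500K, 112K into 200K, 426K into 600K).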

Page 34: Memory Management


Memory Allocation Policies

Example: parking space management. A scooter, a car, and a truck are looking for parking space, and they arrive in that order. Parking spaces of each size are available. A truck space can accommodate a car and a scooter, or even two scooters; similarly, two scooters can be parked in a car space.

Page 35: Memory Management


Memory Allocation Policies

The figure alongside shows the partitioning of the parking area for the truck, the car, and the scooter.

Now, when a scooter, a car, and a truck arrive in that order, parking space is allocated according to the placement policy in use.

Page 36: Memory Management

(Figures: parking allocation under Worst Fit, Best Fit, and Next Fit.)

Page 37: Memory Management

Memory Allocation Policies

Now take another theoretical example. Given the partitions of 100K, 500K, 200K, 300K, and 600K as shown, the different algorithms will place the processes 212K, 417K, 112K, and 426K respectively.

The request for 426K will be rejected under the next-fit and worst-fit algorithms because, after the earlier placements, no single hole large enough for 426K remains.

Page 38: Memory Management

Next Fit (212k,417k,112k)

Next Fit (212K, 417K, 112K)

The same exercise can be carried out for the other two policies.
The request for 426K will be rejected.

Page 39: Memory Management

Next Fit, Best Fit, Worst Fit (figures)

Legend: 212K - green, 417K - blue, 112K - pink, 426K - yellow, external fragmentation - gray, unused partitions - white

Page 40: Memory Management

The user selects the algorithm in the order worst fit, best fit, and next fit, with an input request of 300K.

(Figures: memory maps showing where the 300K request is placed under Worst Fit, Best Fit, and Next Fit.)

Page 41: Memory Management

Dynamic memory partitioning with user interactivity

(Screens: empty memory, then five user-entered processes allotted in memory.)

Page 42: Memory Management

Best Fit

(Screen: five user-entered processes are allotted in memory; the user selects best fit for a new 450K process, and the resulting external fragmentation is shown.)

Page 44: Memory Management

Worst Fit

(Screen: five user-entered processes are allotted in memory; the user selects worst fit for a new 450K process, leaving a 350K external fragment.)

Page 45: Memory Management

Next Fit

(Screen: five user-entered processes are allotted in memory; the user selects next fit for a new 450K process, and the resulting external fragmentation is shown.)

Page 46: Memory Management


Memory Allocation Policies

Which placement algorithm is best with respect to fragmentation?

The worst-fit algorithm, because the leftover hole it creates is the largest and therefore the most likely to remain usable, resulting in less wasted fragmentation.

Which placement algorithm is worst with respect to time complexity?

Best-fit, because it scans the entire free memory space for every request, which takes more time.

Page 47: Memory Management

Best-fit, first-fit, and worst-fit memory allocation methods for fixed partitioning

List of jobs, with size and turnaround time:

Job 1: 100k, 3
Job 2: 10k, 1
Job 3: 35k, 2
Job 4: 15k, 1
Job 5: 23k, 2
Job 6: 6k, 1
Job 7: 25k, 1
Job 8: 55k, 2
Job 9: 88k, 3
Job 10: 100k, 3

Page 48: Memory Management

Memory Block Sizes

Block 1: 50k
Block 2: 200k
Block 3: 70k
Block 4: 115k
Block 5: 15k

Best-fit memory allocation makes the best use of memory space but is slower in making allocations.

Jobs 1 to 5 are submitted and processed first.
After the first cycle, job 2 and job 4, located in block 5 and block 3 respectively and each having a turnaround of one, are replaced by job 6 and job 7, while job 1, job 3, and job 5 remain in their designated blocks.
In the third cycle, job 1 remains in block 4, while job 8 and job 9 replace job 7 and job 5 respectively.

Page 49: Memory Management

First-Fit: First-fit memory allocation is faster in making allocations but can lead to memory waste. It scans memory from the beginning and chooses the first available block that is large enough.

Page 50: Memory Management

Worst-Fit: Worst-fit memory allocation is the opposite of best-fit. It allocates the largest available free block to the new job, and it is usually not the best choice for an actual system.

Page 51: Memory Management

Example

Given memory partitions of 100K, 500K, 200K, 300K, and 600K (in order), how would each of the First-fit, Best-fit, and Worst-fit algorithms place processes of 212K, 417K, 112K, and 426K (in order)? Which algorithm makes the most efficient use of memory?

Page 52: Memory Management

First-fit:
212K is put in the 500K partition
417K is put in the 600K partition
112K is put in the 288K partition (new partition 288K = 500K - 212K)
426K must wait

Best-fit:
212K is put in the 300K partition
417K is put in the 500K partition
112K is put in the 200K partition
426K is put in the 600K partition

Worst-fit:
212K is put in the 600K partition
417K is put in the 500K partition
112K is put in the 388K partition (600K - 212K)
426K must wait

In this example, best-fit turns out to be the best.

Page 53: Memory Management

Comparison Between Algorithms

Which algorithm is best depends on the exact sequence of process swappings that occur and the sizes of those processes.

First-fit: generally the best and fastest.
Next-fit: slightly worse results than first-fit; it tends to use up the free block at the end of memory.
Best-fit: the worst performer.

Page 54: Memory Management

Buddy System - a Compromise Between Fixed and Dynamic Partitioning

The entire space available is treated as a single block of size 2^U.
If a request of size s satisfies 2^(U-1) < s <= 2^U, the entire block is allocated.
Otherwise, the block is split into two equal buddies.
The process continues until the smallest block greater than or equal to s is generated.
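
A minimal sketch of the rounding and splitting step only (a full buddy allocator would also keep per-size free lists and coalesce freed buddies). The pool size 2^U and the request size are made-up illustration values.

#include <stdio.h>

#define U 10    /* the whole pool is a single block of 2^U = 1024 units */

/* Smallest k with 2^k >= s: the block order that will satisfy the request. */
static int order_for(int s)
{
    int k = 0;
    while ((1 << k) < s)
        k++;
    return k;
}

int main(void)
{
    int s = 100;                 /* request size in units */
    int k = order_for(s);
    printf("request of %d units gets a %d-unit block\n", s, 1 << k);

    /* Show the splitting: each split keeps one half and leaves its buddy free. */
    for (int j = U; j > k; j--)
        printf("split a %d-unit block -> keep %d, its %d-unit buddy stays free\n",
               1 << j, 1 << (j - 1), 1 << (j - 1));
    return 0;
}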

Page 55: Memory Management

Example of Buddy System

Page 56: Memory Management

Tree Representation of Buddy System

Page 57: Memory Management

Replacement algorithm

When all of the processes in main memory are in a blocked state and there is insufficient memory even after compaction, the OS swaps one of the processes out of main memory to make room for a new process.

Page 58: Memory Management

Swapping

A process can be swapped temporarily out of memory to a backing store and then brought back into memory for continued execution.

Backing store: a fast disk large enough to accommodate copies of all memory images for all users; it must provide direct access to these memory images.

Roll out, roll in: a swapping variant used for priority-based scheduling algorithms; a lower-priority process is swapped out so a higher-priority process can be loaded and executed.

The major part of swap time is transfer time; total transfer time is directly proportional to the amount of memory swapped.

Modified versions of swapping are found on many systems (e.g., UNIX, Linux, and Windows).

The system maintains a ready queue of ready-to-run processes which have memory images on disk.

Page 59: Memory Management

Schematic View of Swapping

Page 60: Memory Management

Swapping contd…

Context-switch time under swapping is fairly high. For example, for a 100MB user process and a transfer rate of 50MB/sec, the transfer takes 2 sec (2000 ms) each way; if the latency is 8 ms, one transfer takes 2008 ms, so swapping the process out and back in takes about 4016 ms.

Never swap a process with pending I/O.

Page 61: Memory Management

Relocation

When a program is loaded into memory, the actual (absolute) memory locations are determined.

A process may occupy different partitions, and hence different absolute memory locations, during execution, because of swapping and compaction.

Page 62: Memory Management

Addresses

Logical: a reference to a memory location independent of the current assignment of data to memory.

Relative: an address expressed as a location relative to some known point.

Physical or absolute: the absolute address, i.e., the actual location in main memory.

Page 63: Memory Management

Relocation

Page 64: Memory Management

Registers Used during Execution

Base register: the starting address for the process.
Bounds register: the ending location of the process.
These values are set when the process is loaded or when the process is swapped in.

Page 65: Memory Management

Registers Used during Execution

The value of the base register is added to a relative address to produce an absolute address.
The resulting address is compared with the value in the bounds register.
If the address is not within bounds, an interrupt is generated to the operating system.
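
A minimal sketch of that check (the register values below are made up; in hardware the addition and comparison happen on every memory reference, and a violation traps to the operating system rather than calling exit):

#include <stdio.h>
#include <stdlib.h>

struct region {
    unsigned long base;     /* physical address where the process starts */
    unsigned long bounds;   /* limit of the process in physical memory */
};

/* Translate a relative (logical) address to a physical one, enforcing bounds. */
static unsigned long translate(struct region r, unsigned long rel)
{
    unsigned long phys = r.base + rel;
    if (phys >= r.bounds) {
        /* Hardware would raise an interrupt to the OS here. */
        fprintf(stderr, "addressing error at relative address 0x%lx\n", rel);
        exit(1);
    }
    return phys;
}

int main(void)
{
    /* Set when the process is loaded or swapped in. */
    struct region r = { .base = 0x40000, .bounds = 0x60000 };

    printf("relative 0x100   -> physical 0x%lx\n", translate(r, 0x100));
    printf("relative 0x30000 -> ");
    translate(r, 0x30000);   /* out of bounds: 0x70000 >= 0x60000 */
    return 0;
}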

