
Chapter 4

In this Chapter

Address Binding And Single Absolute Partition

Single Relocatable Partition

Multi Programming

Multiple Partitions

Simple Paging

Simple Segmentation

Segmentation With Paging

Managing Page And Segment Tables

Associative Memory

Inverted Page Table

Swapping

Overlaying

Raza’s Simplified Understanding Operating Systems

MEMORY MANAGEMENT


Introduction

While executing one or more programs, the OS must exist in memory to supervise the execution. The OS is loaded at boot time and remains in memory until the system is shut down. A part of memory is reserved for the OS. The rest of memory is divided into small areas, with one process running in each area. These areas are called partitions. OS memory management is responsible for running processes in parallel and protecting each process’s memory from the other processes. It also protects OS memory from user programs.

Memory management cannot happen without hardware support. The OS must work with hardware to make memory management practical. That is why memory management is both a hardware and a software design issue, and memory management hardware is kept away from any user access.

Address Binding and Single Absolute Partition

Logical and Physical Addresses

The addresses used in a process are called logical addresses. They are references to parts of the program or its data. For example, a logical address can be a:

● Variable

● Function/Procedure name

● File name

● Library name

● Other

When a process is run, it becomes resident in RAM. RAM is called the physical memory, and an address in it is called a physical address. Every logical address actually refers to a specific physical address.

The process of translating logical addresses into physical addresses is called address binding. Binding can occur at either compile time or load/run time.


Compile Time Binding

This type of binding occurs when the source code is translated into object code. The program may not run immediately. This binding assigns a physical memory address to every logical address. For this reason, the process must be loaded at exactly those physical addresses in order to execute. Such code is called absolute code. What will happen if those addresses are not available in physical memory?

Load Time Binding

Load time binding is performed when the process is loaded into memory. Such code is called relocatable code, and it can be loaded anywhere in memory.

The overhead involved is that we must provide some mechanism to know which byte in the process refers to which byte in memory, i.e. we need to know how to translate a logical address into a physical address.

Fig 4.1: Address binding and single absolute partition


Adding Protection to the Code

In the early days of computing, no protection was provided by the OS to processes. The memory was simply divided into two parts, one for the OS and the other for a user process. Only one process could run at a time. It was the responsibility of the running user process not to interfere with the OS partition. This limit was not enforced by any means, so processes could ignore it. Often, a user process was able to access OS memory and crash the OS. No hardware support was provided to prevent a process from such actions. DOS was built in this way.

The base register is a hardware register which can provide basic protection to processes. This register contains the start address of the process run by the user. It is the smallest address accessible to the user process. So, when the process generates a physical memory address, that address is first compared with the base register. If the generated address is less than the base register’s contents, it is ignored and a trap is signalled. Such a trap is called a memory fault trap.
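A minimal sketch of this check (the register value and the trap routine below are illustrative, not from the text); the hardware performs the comparison on every memory access:

    BASE_REGISTER = 0x4000  # hypothetical start address of the user partition

    def memory_fault_trap(address):
        # illustrative trap handler: refuse the access
        raise RuntimeError("memory fault trap at address %#x" % address)

    def checked_access(physical_address):
        # every address generated by the process is compared with the base register
        if physical_address < BASE_REGISTER:
            memory_fault_trap(physical_address)
        return physical_address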

Single Relocatable Partition

The single relocatable partition scheme introduces the relocation register. This register also contains the start address of the process, but its contents are added to the address generated by the process; they are not compared. Obviously, the result will never be smaller than the contents of the relocation register, as logical addresses are never negative.

Now the process has a logical address space. The logical address space is the collection of all addresses accessible to the process. The program is compiled assuming that it will be loaded at address 0. Memory management hardware performs the translation from logical to physical addresses.
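A minimal sketch of this relocation, assuming the program was compiled against logical address 0 and an illustrative register value:

    RELOCATION_REGISTER = 0x4000   # hypothetical load address of the single partition

    def to_physical(logical_address):
        # the hardware adds the relocation register to every logical address
        return RELOCATION_REGISTER + logical_address

    assert to_physical(0) == 0x4000   # logical 0 maps to the start of the partition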

Fig 4.2: Trap addressing error


Multi Programming

Single absolute or single relocatable partition schemes are limited: they can run only one process at a time. Such OSes are not of much use in today’s interactive computing environment.

Running multiple processes at a time, concurrently, is called multiprogramming. For it we need a memory management scheme that divides the memory into many partitions.

Multiple Partitions

These schemes allow more than one process to reside and run simultaneously. A memory partition is allocated for every process. For such a scheme, we need two registers (their use is sketched after this list):

● Relocation register

● Limit/size register: this register indicates the end of the process, or it contains the size of the process.
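A minimal sketch of translation and protection with these two registers; the values are illustrative only:

    def translate(logical_address, relocation, limit):
        # the limit register bounds the process: reject addresses past its end
        if logical_address >= limit:
            raise RuntimeError("memory fault trap: address outside the partition")
        # the relocation register shifts the address into the allocated partition
        return relocation + logical_address

    assert translate(100, relocation=0x8000, limit=0x1000) == 0x8000 + 100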

Multiple Fixed Partitions

Here fixed means that the size of partitions cannot be changed dynamically.

The partition size can be equal for all partitions or it may differ from partition to

partition.

If the partition sizes differ, then the OS needs to store the size for every process. If every partition has the same size, then the OS needs to store it only once.

Fig 4.3: Single Relocatable Partition

A special data structure called the partition table stores each process and its allocated partition. This table stores the partition number allocated to the process, or the first address of each process.

The partition size is preferably kept a power of 2. When the partition size is 2^p, address translation is simple: the hardware concatenates p zero bits to the end of the partition number to get the start address of the partition. The partition number is obtained from the partition table.
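A minimal sketch of this calculation, assuming equal partitions of size 2^p with p = 12 (an illustrative value):

    p = 12  # partition size 2**12 = 4096 bytes (illustrative)

    def partition_start(partition_number):
        # concatenating p zero bits is the same as shifting left by p
        return partition_number << p

    assert partition_start(3) == 3 * 4096   # partition 3 starts at byte 12288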

The limit register stores the size of the partition, and it is set at system boot time. At a CPU context switch, the new process is loaded onto the CPU and the relocation register is also reset.

Sometimes the size of a process is less than the size of its allocated partition, so some space is wasted. The space wasted within a partition is called internal fragmentation.

To reduce internal fragmentation, partitions of different sizes are provided. The OS must then also load the limit register with the new value at each context switch. The partition size can be stored in the partition table, or it can be calculated from the partition number.

The OS also selects a partition in which to load a process. The OS cannot load a process into a partition smaller than the process. But what should be done when two large partitions are available and the process could be loaded into either of them? Or how can a partition of the right size be found for the process? The OS uses partition selection algorithms to select a suitable partition.

Multiple Variable Partitions

In this scheme, the OS does not divide the memory before process allocation. Rather, the division is done dynamically as processes arrive. Memory is allocated according to the execution needs of the process. Processes can be loaded anywhere enough memory is available.

Internal fragmentation does not occur. But the OS manages much more data about processes and their partitions, including:

1. The exact start address of a process

2. The exact end address of a process


3. Data about free memory areas

When processes run, they eventually terminate and release their partitions as unused memory areas. After some time, the memory contains many unused and used areas spread across it in an alternating, seemingly random pattern. This phenomenon is known as checkerboarding.

The unused space not within any partition is called external fragmentation.

Unused areas are also called memory holes.

Checkerboarding results in scattered holes. This leaves less contiguous unused memory for processes. Hence a process may fail to load because every individual hole is too small, even though the total unused memory is larger than the process needs.

However, a process called compaction can be applied to organize all holes into a single large hole. It collects all the small holes in memory and places them at a single contiguous location. This requires physically moving used areas to other places, so compaction is quite costly: it needs extra processing and memory for its own completion.

Fig 4.4: Multiple Variable Partitions

Recording Allocated and Non-Allocated Memory

Bitmap

The OS uses a bitmap to keep a record of which areas of memory are in use and which are not. A bitmap is a series of 0s and 1s. The OS divides memory into small units called allocation units. Each allocation unit is associated with a bit in the bitmap, and the value of that bit indicates whether the unit is in use or not.

If we increase the size of the allocation unit, the bitmap becomes smaller. But in this case, if the process size is not an integral multiple of the allocation unit size, memory wastage increases.
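A minimal sketch of a bitmap allocator over fixed allocation units; the unit size, number of units, and function names are assumptions made for illustration:

    ALLOCATION_UNIT = 4096            # bytes per allocation unit (illustrative)
    bitmap = [0] * 256                # one bit per unit: 0 = free, 1 = in use

    def allocate(size_bytes):
        units = -(-size_bytes // ALLOCATION_UNIT)    # round up to whole units
        run = 0
        for i, bit in enumerate(bitmap):
            run = run + 1 if bit == 0 else 0         # count consecutive free units
            if run == units:
                start = i - units + 1
                for j in range(start, start + units):
                    bitmap[j] = 1                    # mark the run as in use
                return start * ALLOCATION_UNIT       # start address of the block
        return None                                  # no contiguous run large enough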

Linked List

This method is used to keep track of free holes in memory. In this scheme, each

hole is modified to contain information about its own size and a pointer to the

next hole. The OS itself needs a pointer to the first hole in memory.

The list can be ordered in any one of the following ways:

1. Memory order

2. Size order

3. Unordered

Partition Selection Algorithms

If more than one hole in memory can accommodate a single waiting process, then the OS needs an algorithm to decide which hole to use.

These algorithms include (a sketch of first fit and best fit follows the list):

1. First fit: The search for a suitable hole begins from the start of memory. The waiting process is loaded into the first available hole large enough to contain it. After loading the process into the hole, some space may be left over; this leftover space is considered for later allocations.

Disadvantage: every time a search is made, the holes at the beginning of memory are examined first.

2. Next Fit: It starts searching for the needed hole after the location of the previously allocated hole, so the search does not begin from the start of memory.

Advantage: the holes at the beginning are not searched repeatedly.

3. Best Fit: It tries to find a hole exactly equal to the process size. If none is found, the smallest hole still large enough to contain the process is used.

4. Worst Fit: It finds the largest hole and loads the process into it.

Advantage: after loading a process with best fit, the leftover space is often too small to accommodate a new process. Selecting the largest hole, as worst fit does, improves this situation because the leftover space may still be large enough to accommodate another process.
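A minimal sketch of first fit and best fit over a list of (start address, size) holes; the hole list and request sizes are illustrative only:

    holes = [(0, 100), (300, 40), (500, 250)]   # (start address, size) of each free hole

    def first_fit(size):
        # return the start of the first hole large enough for the request
        for start, hole_size in holes:
            if hole_size >= size:
                return start
        return None

    def best_fit(size):
        # return the start of the smallest hole that is still large enough
        fits = [(hole_size, start) for start, hole_size in holes if hole_size >= size]
        return min(fits)[1] if fits else None

    assert first_fit(30) == 0      # the hole at 0 is the first one that fits
    assert best_fit(30) == 300     # the 40-byte hole is the tightest fit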

Buddy Systems

Memory is allocated in powers of 2. In the beginning, the complete memory is one single allocation unit. When a process is loaded into memory, the memory is divided into two equal parts, called buddies. Then one buddy is again divided into two equal buddies. The division continues until we get a buddy whose size is exactly equal to the size of the process, or until dividing further would give a size smaller than the process.

When two equal-sized consecutive buddies that were formed from a single buddy are both freed, they join back together. For example, if the size of the joining buddies is 2K each, then the size of the new buddy will be 4K.
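A minimal sketch of the splitting rule only (not a complete allocator), assuming a total memory size that is a power of 2:

    def buddy_block_size(process_size, total_size):
        # keep halving the block while the half is still large enough for the process
        size = total_size
        while size // 2 >= process_size:
            size //= 2
        return size

    # with 1024 bytes of memory, a 100-byte process receives a 128-byte buddy:
    # 1024 -> 512 -> 256 -> 128, since dividing again would give 64 < 100
    assert buddy_block_size(100, 1024) == 128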

Maintaining Record of Buddies

If the memory size is 2^N, then we can have allocation units of sizes from 2^1 to 2^N. The buddy system needs N lists for free blocks, one list for each size. The use of separate lists makes processing of both free and used allocation units efficient.

Buddy systems try to offer a compromise between fixed and variable partitions.

However, they suffer from both internal and external fragmentation.

Simple Paging

Physical memory is divided into small units called page frames. Logical memory is also divided into small units called pages. The sizes of pages and page frames are equal and fixed; the memory management hardware dictates them.

When a process is loaded, its pages are placed into free page frames and run. The page frames can be non-contiguous.


Maintaining Record

For every process, a page table is maintained. Its primary purpose is to record which page frame is allocated to which page. This table is used when a physical address is generated from a logical address. For every page in the process, there is one entry/row in the table. So, if the page table has 100 entries, then there are 100 pages in the process. The first entry describes the first page, the second entry describes the second page, and so on.

Mechanics

The page size is kept a power of 2. This reduces complexity at address translation time. A logical address contains two kinds of bits: the leading bits specify the page number, and the remaining trailing bits specify the offset within the page. The offset, or displacement, is the location within an individual page, counted from the beginning of the page as 0. So, if L is the total number of bits in the logical address, and 2^P is the page size, then the first L - P bits give the page number and the last P bits give the offset.

Translation

The translation of a logical address into a physical address is simple and straightforward. The page number is extracted from the logical address and looked up in the page table to find its page frame. Then the offset is appended to the page frame number. All of this translation remains transparent to the process.
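A minimal sketch of this translation, assuming a page size of 2^12 bytes and an illustrative page table:

    P = 12                                   # page size 2**12 = 4096 bytes (illustrative)
    page_table = {0: 5, 1: 9, 2: 2}          # page number -> page frame number

    def translate(logical_address):
        page_number = logical_address >> P            # first L - P bits
        offset = logical_address & ((1 << P) - 1)     # last P bits
        frame = page_table[page_number]               # look up the page frame
        return (frame << P) | offset                  # append the offset to the frame

    # page 1 is in frame 9, so logical 4096 + 100 maps to 9 * 4096 + 100
    assert translate(4096 + 100) == 9 * 4096 + 100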

Fig 4.5: Simple Paging


Multi Levels in Page Tables

As the number of pages grows, the size of the page table also increases, which is difficult to manage. So the page table is broken down into small parts, each representing a level. Now the page number in the logical address is not a simple page number; it contains several sections. The first part of the page number is used as an index into the top-level table. The entry found there points to the next lower level. The second part of the page number is used to search the second-level page table. This process is repeated until an entry in the bottom-level table is found. That entry gives the page frame number of the loaded page.
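A minimal two-level sketch, assuming 10 index bits per level and a 12-bit offset (an illustrative 32-bit layout), with dictionaries standing in for the tables:

    OFFSET_BITS, LEVEL_BITS = 12, 10

    second_level = {3: 42}            # second part of the page number -> page frame
    top_level = {1: second_level}     # first part of the page number -> lower table

    def translate(logical_address):
        offset = logical_address & ((1 << OFFSET_BITS) - 1)
        index2 = (logical_address >> OFFSET_BITS) & ((1 << LEVEL_BITS) - 1)
        index1 = logical_address >> (OFFSET_BITS + LEVEL_BITS)
        frame = top_level[index1][index2]     # walk the levels from top to bottom
        return (frame << OFFSET_BITS) | offset

    assert translate((1 << 22) | (3 << 12) | 7) == 42 * 4096 + 7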

Tracking the Traps

Firstly, a size register can be used to determine which logical addresses would result in an out-of-bounds address trap.

Secondly, the page table itself can do this, and it is the most commonly used option. If a process has n pages, then only the first n entries of its page table are meaningful; all other entries do not belong to it. So, the first n entries in the page table are marked valid and all others are marked invalid. This is done by placing a valid/invalid bit field in the table.

Some systems also offer protection bits as part of the page table, but that scheme is better suited to systems using segments.

Fig 4.6: Multi Levels in Page Tables


Simple Segmentation

In this scheme the logical memory, i.e. the process, is divided into smaller parts called segments. Their size is kept a power of 2, but it is not fixed; it can vary from segment to segment. The OS does not perform this division; the user or the compiler does. The hardware, however, sets the maximum size a segment can have.

Keeping Record

The size of segments is variable, so the OS cannot predict when and where to divide physical memory to contain a segment. It must maintain lists of the memory holes available for an incoming process. The issues encountered here are similar to the issues faced with multiple variable partitions.

A structure called the segment table is used to keep track of the segments of a process. For each segment, this table contains the first address of the segment and its size.

Translation

An offset/displacement is a location within a segment, counted from the start of the segment as location 0. If the maximum segment size is 2^m and the length of a generated logical address is L bits, then the first L - m bits of the logical address give the segment number and the last m bits give the offset within the segment.

The segment number and offset are taken from the logical address. The segment table entry is found by matching the segment number against the segment table entries. If the offset is larger than the size of the segment, a trap named ‘invalid address fault’ is raised and the program is terminated. If the offset is valid, the translation is done by summing the segment’s start address (taken from the segment table) and the offset, which gives the physical address.

This process can be sped up if the size check and the physical address generation are done in parallel.
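A minimal sketch, assuming a maximum segment size of 2^16 and an illustrative segment table of (start address, size) entries:

    M = 16                                         # maximum segment size 2**16 (illustrative)
    segment_table = {0: (0x10000, 3000),           # segment -> (start address, size)
                     1: (0x50000, 20000)}

    def translate(logical_address):
        segment = logical_address >> M             # first L - m bits
        offset = logical_address & ((1 << M) - 1)  # last m bits
        start, size = segment_table[segment]
        if offset >= size:
            raise RuntimeError("invalid address fault")   # offset beyond the segment end
        return start + offset                      # segment start + offset

    assert translate((1 << 16) | 100) == 0x50000 + 100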

Special Features of Segments

The user has control over segments. Thus, the user can add many convenient properties to them:


Read Only Segments

A read only segment contains data which cannot be changed during processing. To

make a segment read only, an extra field called read only bit is included in the segment

table.

When a write is requested on a segment, its read only bit is checked. If it is set, meaning the segment is read only, the write operation is denied and a fault is generated.

Sharing the Segments

Read only segments can be shared among many processes to reduce memory usage.

This is especially helpful when two processes are running from the same program

code.

Another use of shared segments is to load a subroutine library, with read only access to it given to the processes. But this situation requires much care, as the same segment addresses will be used by many processes. Three types of addressing are used in such situations:

1. Relative addressing

2. Indirect addressing

3. Direct addressing

Relative Addressing

This type of addressing gets the offset from the program counter and generates

the required address.

Fig 4.7: Simple Segmentation


Indirect Addressing

Indirect addressing uses a register which specifies the correct segment to use.

Direct Addressing

It uses segment and offset of the required address.

Segmentation with Paging

Segmentation and paging both have their advantages. Segmentation provides security and sharing; paging provides efficiency. If we combine the two, we get both sets of advantages. To do this, a segment is divided into equal-sized pages, and a page table is maintained to keep a record of the pages in the segment.

Normally, in a segmented system, a logical address contains a segment number and an offset. But in segmentation with paging, the segment offset has two parts: the first is the page number and the second is the page offset. The segment table gives the address of the page table for the segment. To get the page table entry, the page number bits of the logical address are added to the page table address. Then the page offset is appended to the page frame number found in the page table to get the physical address.

Note that this time the process is divided into segments, every segment is divided into pages, and thus every segment needs its own page table.
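A minimal sketch, assuming a 12-bit page offset and an 8-bit page number inside the segment offset (illustrative widths), with each segment owning its own small page table:

    PAGE_OFFSET_BITS, PAGE_NUMBER_BITS = 12, 8

    segment_table = {0: {0: 17, 1: 4},       # segment -> its page table (page -> frame)
                     1: {0: 9}}

    def translate(logical_address):
        page_offset = logical_address & ((1 << PAGE_OFFSET_BITS) - 1)
        page_number = (logical_address >> PAGE_OFFSET_BITS) & ((1 << PAGE_NUMBER_BITS) - 1)
        segment = logical_address >> (PAGE_OFFSET_BITS + PAGE_NUMBER_BITS)
        frame = segment_table[segment][page_number]    # per-segment page table lookup
        return (frame << PAGE_OFFSET_BITS) | page_offset

    # segment 0, page 1, offset 5 lives in frame 4
    assert translate((0 << 20) | (1 << 12) | 5) == (4 << 12) | 5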

Fig 4.8: Segmentation with Paging

Managing Page and Segment Tables

Page tables are maintained in paged systems, and segment tables are maintained in segmented systems. These tables are consulted by the OS when a logical address needs to be translated into a physical address. Every process has its own page and segment tables. A special memory management register contains a reference to the current process’s table. At a context switch, this register is also updated to refer to the table of the newly loaded process.

Associative Memory

We also call it the translation lookaside buffer (TLB). This memory is especially useful when the page table is very large, i.e. the number of pages is high. It increases search performance.

Associative memory holds a subset of the page table, containing only the most recently used pages and their related information. It is designed so that every element in it can be searched in parallel with the other elements. So if 1 microsecond is needed to search one element, then searching 10, 20 or 25 elements also takes 1 microsecond. It is therefore much faster than the page table.

Every element in it is given an identifying number called an index. The OS gives the search value to the associative memory and gets back the index.

Searching Process

When a page is looked up, the page table and the associative memory are used together. If the page is in associative memory, that entry is used directly and the search in the page table is abandoned. Since the associative memory is very fast, this case completes quickly.

But if the associative memory does not have the page’s information, the lookup must wait for data from the page table. When the page table provides the information, it is used for the translation. The associative memory is then updated: the new information is stored in it, replacing some old entry, which improves future searches.
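A minimal sketch of this combined lookup, with a dictionary standing in for the associative memory and a list for the page table (both illustrative):

    page_table = [7, 3, 11, 2]           # page number -> frame number (the slow path)
    associative_memory = {0: 7}          # recently used pages only (the fast path)

    def lookup_frame(page_number):
        if page_number in associative_memory:
            return associative_memory[page_number]   # hit: no page table access needed
        frame = page_table[page_number]              # miss: wait for the page table
        associative_memory[page_number] = frame      # store it to improve future searches
        return frame

    assert lookup_frame(2) == 11          # first access misses and fills the fast path
    assert 2 in associative_memory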

To measure the performance of associative memory, the hit ratio is used. The hit ratio is the fraction of all accesses whose translation is found in associative memory; for example, if 90 out of 100 translations are found there, the hit ratio is 0.9. The system gives good performance if the hit ratio is high.

Precautions

At a context switch, the old entries in the associative memory cannot be used for the new process. One way to handle this is for the OS to mark them all invalid.

A second way is to give each associative memory entry a process id field; when an access is made, the process id must match the current process for the entry to be used.

Fig 4.9: Associative Memory

Inverted Page Table

If a paging system uses a large virtual address space together with associative memory, an inverted page table minimizes the size of the page table. It works in the reverse order: there is one entry per page frame, and each entry records which page is allocated to that frame. When a page is not found in associative memory, the inverted page table is consulted. If hashing is used, this procedure speeds up.
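A minimal sketch of an inverted table with one entry per frame; the hash-and-probe search below is an illustrative simplification of a hashed lookup:

    NUM_FRAMES = 8
    inverted_table = [None] * NUM_FRAMES        # frame -> (process id, page number)
    inverted_table[5] = (1, 42)                 # frame 5 holds page 42 of process 1

    def find_frame(pid, page_number):
        # hash the (pid, page) pair to choose a starting frame, then probe
        start = hash((pid, page_number)) % NUM_FRAMES
        for i in range(NUM_FRAMES):
            frame = (start + i) % NUM_FRAMES
            if inverted_table[frame] == (pid, page_number):
                return frame
        raise RuntimeError("page not resident")  # would trigger a page fault

    assert find_frame(1, 42) == 5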

Swapping

In swapping, a process is taken out of memory and stored on disk.

This is done when RAM runs short and we need space to load new processes or data. The swapped-out process is placed at a specific location on disk. A process can be selected for swapping from the ready or blocked queues. When space becomes available in memory, the process is brought back. Swapping is done by the medium-term scheduler, called the swapper. Swapping increases the number of processes using the CPU, i.e. the degree of multiprogramming. In other words, the CPU can handle more processes in a given interval, and its idle time is reduced.

Overlaying

In this scheme, the programmer defines two or more segments of a program which need not be in memory simultaneously in order to execute. The OS loads one overlay segment or another and executes it. So the physical memory needed is much less than for the program without overlaying. Segments can execute alternately.

The main disadvantage is that the programmer is heavily involved. It is neither easy nor reliable for a programmer to divide a process in this way; the programmer’s knowledge and experience count for a lot. A simpler approach is not to involve the programmer in dividing the process, and instead let the OS decide what to do when memory is smaller than the process. Such a scheme is virtual memory.

Fig 4.10: Overlaying

