Memory management

Date posted: 22-May-2015
Page 1: Memory management

MEMORY MANAGEMENT

1. Keep track of what parts of memory are in use.

2. Allocate memory to processes when needed.

3. Deallocate when processes are done.

4. Swapping, or paging, between main memory and disk, when disk is too small to hold all current processes.

Page 2: Memory management

Memory hierarchy:

• small amount of fast, expensive memory: cache

• some medium-speed, medium-price main memory

• gigabytes of slow, cheap disk storage

The MMU (Memory Management Unit) is the hardware that maps virtual addresses to physical addresses; the part of the operating system that manages the memory hierarchy is the memory manager.

Page 3: Memory management

Basic Memory Management: Monoprogramming without Swapping or Paging

Three simple ways of organizing memory: an operating system with one user process

Page 4: Memory management

Multiprogramming with Fixed Partitions

• Fixed memory partitions
– separate input queues for each partition
– single input queue

Page 5: Memory management

Relocation and Protection

• Cannot be sure where a program will be loaded in memory
– address locations of variables and code routines cannot be absolute

– must keep a program out of other processes’ partitions

• Use base and limit values
– address locations are added to the base value to map to a physical address

– address locations larger than the limit value are an error
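Base-and-limit relocation can be sketched as a simple check-then-add. The base, limit, and addresses below are made-up values for illustration:

```python
# Sketch of base-and-limit relocation with hypothetical values.
BASE = 0x4000   # physical address where the partition starts
LIMIT = 0x1000  # size of the partition

def translate(virtual_addr):
    """Map a program-relative address to a physical address."""
    if virtual_addr >= LIMIT:
        # access outside the partition: protection fault
        raise MemoryError("address beyond limit")
    return BASE + virtual_addr

print(hex(translate(0x0FF0)))  # legal access inside the partition: 0x4ff0
```

In real hardware this check and addition happen on every memory reference, which is why base and limit live in dedicated registers.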

Page 6: Memory management

Swapping

Memory allocation changes as
– processes come into memory
– processes leave memory

Shaded regions are unused memory

Page 7: Memory management

• Allocating space for a growing data segment
• Allocating space for a growing stack and data segment

Page 8: Memory management

Memory Management with Bit Maps

• Part of memory with 5 processes, 3 holes
– tick marks show allocation units
– shaded regions are free

• Corresponding bit map
• Same information as a list
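Bitmap allocation can be sketched as a scan for a run of zero bits, one bit per allocation unit. The memory size and request sizes below are invented for illustration:

```python
# Bitmap memory management sketch: one bit per allocation unit
# (0 = free, 1 = in use). Sizes here are hypothetical.
UNITS = 32
bitmap = [0] * UNITS

def allocate(n):
    """Find n consecutive free units (first fit over the bitmap)."""
    run = 0
    for i, bit in enumerate(bitmap):
        run = run + 1 if bit == 0 else 0
        if run == n:
            start = i - n + 1
            for j in range(start, start + n):
                bitmap[j] = 1          # mark the units as in use
            return start
    return None                        # no hole large enough

a = allocate(4)   # first process gets units 0-3
b = allocate(8)   # second process gets units 4-11
```

The scan for a run of free bits is exactly why bitmap allocation is slow: finding a hole of k units takes time proportional to the bitmap length.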

Page 9: Memory management

Memory Management with Linked Lists

Four neighbor combinations for the terminating process X

Page 10: Memory management

Algorithms for allocating memory when linked list management is used.

1. FIRST FIT - allocates the first hole found that is large enough - fast (as little searching as possible).

2. NEXT FIT - almost the same as First Fit except that it keeps track of where it last allocated space and starts from there instead of from the beginning - slightly better performance.

3. BEST FIT - searches the entire list for the hole closest to the size needed by the process - slow - and does not improve resource utilization, because it tends to leave many very small (and therefore useless) holes.

4. WORST FIT - the opposite of Best Fit - chooses the largest available hole and breaks off a hole that is large enough to be useful (i.e. to hold another process) - in practice has not been shown to work better than the others.
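The three search strategies above differ only in which hole they pick from the free list. A minimal sketch, using a made-up list of (start, size) holes:

```python
# First/best/worst fit over a free-hole list of (start, size) pairs.
# The hole list and request size are hypothetical.
holes = [(0, 5), (14, 12), (30, 3), (40, 20)]

def first_fit(holes, need):
    # first hole large enough, scanning from the beginning
    return next((h for h in holes if h[1] >= need), None)

def best_fit(holes, need):
    # smallest hole that still fits: least leftover space
    fits = [h for h in holes if h[1] >= need]
    return min(fits, key=lambda h: h[1], default=None)

def worst_fit(holes, need):
    # largest hole: the leftover piece stays usable
    fits = [h for h in holes if h[1] >= need]
    return max(fits, key=lambda h: h[1], default=None)

print(first_fit(holes, 10))  # (14, 12)
print(best_fit(holes, 10))   # (14, 12): also the tightest fit here
print(worst_fit(holes, 10))  # (40, 20)
```

Next Fit would be First Fit with the scan resuming from the previous allocation point rather than from the head of the list.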

Page 11: Memory management

FRAGMENTATION

All the preceding algorithms suffer from:

External Fragmentation

As processes are loaded into and removed from memory, the free memory is broken into little pieces: enough total space may exist to satisfy a request, but it is not contiguous.

Solutions:

•Break memory into fixed-size blocks and allocate in units of block size. Since the allocation will usually be slightly larger than the process needs, some Internal Fragmentation still results.

•Compaction: move all processes to one end of memory and holes to the other end. Expensive and can only be done when relocation is done at execution time, not at load time.

Page 12: Memory management

PAGING: another solution to external fragmentation

Paging is a memory management scheme that permits the physical address space to be noncontiguous.

•Used by most operating systems today in one of its various forms.

•Traditionally handled by hardware, but recent designs implement paging by closely integrating the hardware and operating system.

•Every address generated by the CPU is divided into two parts: the page number and the offset.

•Addressing in a virtual address space of size 2^m, with pages of size 2^n, uses the high-order m−n bits for the page number and the n low-order bits for the offset.

•A Page Table is used where the page number is the index and the table contains the base address of each page in physical memory.
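The split into page number and offset, and the page-table lookup, can be sketched with bit operations. The address width, page size, and page-table contents below are assumptions for illustration:

```python
# Splitting a virtual address into page number and offset, assuming a
# 16-bit address space (m = 16) and 4 KB pages (n = 12).
N = 12
PAGE_SIZE = 1 << N            # 4096 bytes
page_table = {0: 2, 1: 7}     # hypothetical page -> frame mapping

def to_physical(vaddr):
    page = vaddr >> N                 # high-order m - n bits
    offset = vaddr & (PAGE_SIZE - 1)  # low-order n bits
    frame = page_table[page]          # page table indexed by page number
    return (frame << N) | offset

print(hex(to_physical(0x1ABC)))  # page 1 -> frame 7, offset 0xABC: 0x7abc
```

Because the page size is a power of two, the split costs only a shift and a mask, which is what makes it practical for hardware to do on every reference.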

Page 13: Memory management

Virtual Memory

The position and function of the MMU

Page 14: Memory management

PAGING

The relation between virtual addresses and physical memory addresses is given by the page table

Page 15: Memory management

Two-level Page Tables

A 32-bit address with 2 page table fields
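A two-level walk indexes a top-level table, then a second-level table, then adds the offset. The field widths below assume the common 10 + 10 + 12 split of a 32-bit address with 4 KB pages; the table contents are invented:

```python
# Two-level page table walk on a 32-bit address, assuming 10-bit
# top-level index, 10-bit second-level index, 12-bit offset.
top = {1: {2: 0x5}}   # sparse nested dicts stand in for the two tables

def walk(vaddr):
    pt1 = (vaddr >> 22) & 0x3FF   # index into the top-level table
    pt2 = (vaddr >> 12) & 0x3FF   # index into the second-level table
    off = vaddr & 0xFFF           # byte offset within the page
    frame = top[pt1][pt2]
    return (frame << 12) | off

vaddr = (1 << 22) | (2 << 12) | 0x34
print(hex(walk(vaddr)))  # frame 0x5, offset 0x34 -> 0x5034
```

The point of the second level is sparseness: second-level tables for unused regions of the address space, like the dict entries missing above, simply never need to exist.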

Page 16: Memory management

Page Replacement Algorithms

When a page fault occurs, the operating system must choose a page to remove from memory to make room for the page that has to be brought in.

•On the second run of a program, if the operating system kept track of all page references, the “Optimal Page Replacement Algorithm” could be used:

replace the page that will not be used for the longest amount of time. This method is impossible on the first run and is not used in practice; it is used in theory to evaluate other algorithms.
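Given a known reference string, the optimal policy can be simulated directly: on a fault, evict the resident page whose next use is farthest away. The reference string and frame count below are made-up:

```python
# Optimal (Belady) page replacement over a known reference string.
# The reference string and frame count are hypothetical examples.
refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3]
FRAMES = 3
mem, faults = [], 0

for i, page in enumerate(refs):
    if page in mem:
        continue                      # hit: nothing to do
    faults += 1
    if len(mem) < FRAMES:
        mem.append(page)              # free frame available
        continue
    def next_use(q, i=i):             # distance to q's next reference
        try:
            return refs.index(q, i + 1)
        except ValueError:
            return len(refs)          # never used again: evict first
    mem.remove(max(mem, key=next_use))
    mem.append(page)

print(faults)  # 6 faults for this string with 3 frames
```

Real algorithms are judged by how close their fault count comes to this lower bound on the same reference string.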

Page 17: Memory management

•Not Recently Used Algorithm (NRU) is a practical algorithm that makes use of the bits ‘Referenced’ and ‘Modified’. These bits are updated on every memory reference and must be set by the hardware. On every clock interrupt the operating system can clear the R bit. This distinguishes pages that have been referenced recently from those that have not been referenced during the current clock interval. The combinations are:

(0) not referenced and not modified

(1) not referenced, modified

(2) referenced, not modified

(3) referenced, modified

NRU randomly chooses a page from the lowest nonempty class to remove.
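The class number is just R and M read as a two-bit value. A minimal sketch, with the per-page R/M bits invented for illustration:

```python
# NRU sketch: classify pages by (R, M) bits and pick a victim at
# random from the lowest nonempty class. Page states are hypothetical.
import random

pages = {  # page -> (referenced, modified)
    "A": (0, 0), "B": (0, 1), "C": (1, 0), "D": (1, 1),
}

def nru_victim(pages):
    classof = lambda rm: rm[0] * 2 + rm[1]   # class 0..3
    lowest = min(classof(rm) for rm in pages.values())
    candidates = [p for p, rm in pages.items() if classof(rm) == lowest]
    return random.choice(candidates)         # random pick within the class

print(nru_victim(pages))  # "A": the only class-0 page here
```

Note that R outweighs M in the ordering: an unreferenced-but-modified page (class 1) is evicted before a referenced-but-clean one (class 2).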

Page Replacement Algorithms (cont)

Page 18: Memory management

Page Replacement Algorithms (cont)

•First In First Out Algorithm: when a new page must be brought in, replace the page that has been in memory the longest. Seldom used: even though a page has been in memory a long time, it may still be needed frequently.

•Second Chance Algorithm: this is a modification of FIFO. The Referenced bit of the page that has been in memory longest is checked before that page is automatically replaced. If the R bit has been set to 1, that page must have been referenced during the most recent clock interval. That page is placed at the rear of the list and its R bit is reset to zero. A variation of this algorithm, the ‘clock’ algorithm, keeps a pointer to the oldest page using a circular list. This saves the time the Second Chance Algorithm spends moving pages in the list.
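The clock variant can be sketched as a circular scan that clears R bits until it finds a page with R = 0. The page list and R bits below are made-up:

```python
# Clock (second chance) sketch: a set R bit earns the page a reprieve,
# a clear R bit marks it as the victim. Page states are hypothetical.
pages = [["A", 1], ["B", 0], ["C", 1]]  # [name, R bit], oldest first
hand = 0

def clock_victim():
    global hand
    while True:
        name, r = pages[hand]
        if r:                           # referenced: clear bit, move on
            pages[hand][1] = 0
            hand = (hand + 1) % len(pages)
        else:                           # not referenced: evict this page
            hand = (hand + 1) % len(pages)
            return name

print(clock_victim())  # "B": A's R bit is cleared, B is unreferenced
```

In a real system the victim's frame would be reloaded with the new page rather than the entry being left in place, but the scan itself is the essence of the algorithm.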

Page 19: Memory management

Page Replacement Algorithms (cont)

•Least Recently Used Algorithm (LRU) - keep track of each memory reference made to each page by some sort of counter or table. Choose a page that has been unused for a long time to be replaced. This requires a great deal of overhead and/or special hardware and is not used in practice. It is simulated by similar algorithms:

•Not Frequently Used - keeps a counter for each page and at each clock interrupt, if the R bit for that page is 1, the counter is incremented. The page with the smallest counter is chosen for replacement. What is the problem with this?

A page with a high counter may have been referenced a lot in one phase of the process, but is no longer used. This page will be overlooked, while another page with a lower counter but still being used is replaced.

Page 20: Memory management

Page Replacement Algorithms (cont)

Aging - a modification of NFU that simulates LRU very well. The counters are shifted right 1 bit before the R bit is added in, and the R bit is added to the leftmost rather than the rightmost bit. When a page fault occurs, the page with the lowest counter is still the page chosen to be removed. However, a page that has not been referenced for a while will now be chosen: it would have many leading zeros, making its counter value smaller than that of a page that was recently referenced.
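The shift-then-add update can be sketched directly. The counter width, pages, and R-bit history below are assumptions for illustration:

```python
# Aging sketch: at each clock interrupt, shift every counter right one
# bit and add the R bit at the leftmost position. 8-bit counters assumed.
BITS = 8
counters = {"A": 0, "B": 0}

def tick(r_bits):
    """r_bits: page -> R bit observed during this clock interval."""
    for page in counters:
        counters[page] = (counters[page] >> 1) | (r_bits[page] << (BITS - 1))

tick({"A": 1, "B": 0})   # interval 1: A referenced, B not
tick({"A": 0, "B": 1})   # interval 2: B referenced, A not
# A: 0b01000000 (64), B: 0b10000000 (128)
print(min(counters, key=counters.get))  # "A" is the eviction candidate
```

Because the newest R bit lands in the leftmost position, a single recent reference outweighs any number of older ones, which is what makes aging a good LRU approximation.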

Page 21: Memory management

‘Demand Paging’: When a process is started, NONE of its pages are brought into memory. When the CPU tries to fetch the first instruction, a page fault occurs, and faults continue until sufficient pages have been brought into memory for the process to run. During any phase of execution a process usually references only a small fraction of its pages. This property is called ‘locality of reference’.

Demand paging should be transparent to the user, but if the user is aware of the principle, system performance can be improved.

Page Replacement Algorithms (cont)

Page 22: Memory management

How is a page fault actually handled?

1. Trap to the operating system (also called a page fault interrupt).

2. Save the user registers and process state; i.e. process goes into waiting state.

3. Determine that the interrupt was a page fault.

4. Check that the page reference was legal and, if so, determine the location of the page on the disk.

5. Issue a read from the disk to a free frame and wait in a queue for this device until the read request is serviced. After the device seek completes, the disk controller begins the transfer of the page to the frame.

6. While waiting, allocate the CPU to some other user.

7. Interrupt from the disk occurs when the I/O is complete. Must determine that the interrupt was from the disk.

8. Correct the page table /other tables to show that the desired page is now in memory.

9. Take process out of waiting queue and put in ready queue to wait for the CPU again.

10. Restore the user registers, process state and new page table, then resume the interrupted instruction.

Page 23: Memory management

SEGMENTATION

Whereas paging uses one contiguous sequence of virtual addresses from 0 to the maximum needed by the process, segmentation is an alternative scheme that uses multiple separate address spaces for the various segments of a program.

A segment is a logical entity of which the programmer is aware. Examples include a procedure, an array, a stack, etc.

Segmentation allows each segment to have different lengths and to change during execution.

Page 24: Memory management

Without Segmentation a Problem May Develop

Page 25: Memory management

Segmentation

• Allows each table to grow or shrink independently

• To specify an address in segmented memory, the program must supply a two-part address: (n,w) where n is the segment number and w is the address within the segment, starting at 0 in each segment.

• Changing the size of 1 procedure does not require changing the starting address of any other procedure - a great time saver.
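The (n, w) addressing above amounts to a segment-table lookup plus a limit check. The segment table contents below are hypothetical:

```python
# Segmented address translation sketch: (n, w) -> base + w, with a
# per-segment limit check. Table contents are made-up examples.
segments = {  # segment number n -> (base, limit)
    0: (0x1000, 0x400),   # e.g. a code segment
    1: (0x8000, 0x200),   # e.g. a stack segment
}

def translate(n, w):
    base, limit = segments[n]
    if w >= limit:
        # offset past the end of the segment: protection fault
        raise MemoryError("offset beyond segment limit")
    return base + w

print(hex(translate(1, 0x10)))  # 0x8010
```

Unlike paging, the limit differs per segment, which is what lets each segment grow or shrink independently.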

Page 26: Memory management

Segmentation Permits Sharing Procedures or Data between Several Processes

•A common example is the shared library, such as a large graphical library linked into nearly every program on today’s modern workstations.

•With segmentation, the library can be put in a segment and shared by multiple processes, avoiding the need to have the entire library in every process’s address space.

•Since each segment contains a specific logical entity, the user can protect each appropriately (without concern for where boundaries are in the paging system): a procedure segment can be set execute but not read or write; an array can be specified read/write but not execute; etc. This is a great help in debugging.

Page 27: Memory management

Comparison of paging and segmentation

Page 28: Memory management

Pure Segmentation

(a) Memory initially containing 5 segments of various sizes.

(b)-(d) Memory after various replacements: external fragmentation (checkerboarding) develops.

(e) Removal of external fragmentation by compaction eliminates the wasted memory in holes.

Page 29: Memory management

Segmentation with Paging: MULTICS (1)

• Descriptor segment points to page tables
• Segment descriptor – numbers are field lengths

Page 30: Memory management

Segmentation with Paging: MULTICS (2)

A 34-bit MULTICS virtual address

Page 31: Memory management

Segmentation with Paging: MULTICS (3)

Conversion of a 2-part MULTICS address into a main memory address

Page 32: Memory management

Segmentation with Paging: Pentium (1)

A Pentium selector

Page 33: Memory management

Segmentation with Paging: Pentium (3)

Conversion of a (selector, offset) pair to a linear address

Page 34: Memory management

Segmentation with Paging: Pentium (4)

Mapping of a linear address onto a physical address

Page 35: Memory management

Segmentation with Paging: Pentium (5)

Protection on the Pentium
