
Created by E Barwell Spring 2008 rev Autumn 2009, rev Aug 2012

3.3.1 The function of operating systems

TYPES OF OPERATING SYSTEM
BATCH
REAL TIME
SINGLE USER
MULTI-USER
SINGLE TASKING
MULTI-TASKING
MULTI-PROGRAMMING
DISTRIBUTED SYSTEMS
INTERRUPTS
CONDITIONS THAT MAY GENERATE INTERRUPTS
PRIORITIES
INTERRUPT MASKS
MEMORY MANAGEMENT
PAGING
SEGMENTATION
VIRTUAL MEMORY
DEVICE I/O
DATA TRANSFER
SIGNALLING
POLLING
INTERRUPTS
BUFFERING I/O
I/O CACHE
LOGICAL AND PHYSICAL DEVICES
MEMORY MAPPED I/O
MEMORY MANAGEMENT UNIT (MMU)
ACCOUNTING AND SECURITY
PROCEDURES ON MULTI-USER NETWORKS
ENCRYPTION
ACTIVITY LOGS
FILE MANAGEMENT
FILE MANIPULATION
ARCHIVING
DATA COMPRESSION
FILE SYSTEMS, FILE ALLOCATION TABLE – FAT
FRAGMENTATION
DELETION OF FILES
BOOT FILES
SCHEDULING
PROCESS
PROCESS CONTROL BLOCK
SCHEDULING


Types of Operating System

The mode of an operating system determines how it handles programs, processes and users. Traditionally an O/S would operate in one mode only, but the distinction has gradually blurred as CPU power has increased and the cost of storage (primary and secondary) has fallen.

Batch

Data processing is performed with little or no interaction with user.

Jobs are submitted and run until complete, next job can then be processed.

User needs to specify job and any parameters.

Uses: payroll, bank statements, or any large-scale, high-volume repetitive data-processing job; also 3D scene rendering, video/audio encoding and data analysis (finite element analysis, simulation, etc.)

Real time

Inputs are processed and appropriate actions are taken immediately with appropriate outputs.

Often found in safety-critical systems, systems where quick feedback is essential, and systems where an immediate response is required for the user experience.

Uses: Fly-by-Wire plane systems, Power plants, Games

Single User

Allows one user to access the O/S and hence the facilities, software and devices of the host.

This may be the ability to run more than one program at once (multi-tasking) or just a single program at

once (single-tasking)

Uses: Desktop/Laptop machine

Multi-user

The ability to allow multiple users to access the O/S and hence the facilities, software and devices on the host

computer.

Often users can run multiple programs/applications by multi-tasking or multi-programming.

Uses: Mini-computer, Mainframe, large scale network with central application servers.

Single tasking

The ability to run only one program/application at a time.

This is how most computers used to work, and it is still found in embedded systems.

Uses: Domestic appliances – video/DVD player/ washing machine etc…


Multi-tasking

The ability to switch between a number of programs/processes giving the illusion that they are running

simultaneously, while in fact they are given a slice of time to run.

O/S needs to schedule the running of the processes.

Uses: Desktop/Laptop machines also central processing servers on networks.

Multi-programming

Is the ability to physically run more than one program or process at the same time.

It requires more than one physical core (Processor).

Often found on servers (mini and mainframes), allowing more processing to be carried out in a shorter

space of time.

This will often be coupled with multi-user ability.

Uses: for large scale data processing application, simulation, corporate database

Distributed systems

A system where the processing is farmed out to several machines connected across some form of LAN or

WAN (local or wide area network).

Data is split up into jobs (by some central controller) to be processed individually by machines on the

network.

Once a job is completed its results are sent back to the central controller to be collated.

The O/S needs to be able to handle these remote connections and manage the data being sent out and results

returned. It can sometimes involve hundreds of thousands of workstations/servers.

Uses: Very large scale data processing projects that would generally take thousands of computer years to

process on a single system.

Uses: 3D graphics rendering for films and television (render farms), Protein folding simulation

(folding@home), SETI (Search for Extraterrestrial Intelligence) program analysing radio signals from space,

simulations.


Interrupts

Interrupts are signals generated by devices or software to seek the attention of the O/S. Once received, the O/S will pass control to a program called an interrupt handler. This program (often part of a device handler) knows how to deal with the task that raised the interrupt.

E.g.

User presses CTRL + ALT + DEL

The O/S will stop running the current process (see scheduling)

Home screen for Windows will be displayed

Once finished the O/S will restart whatever process was interrupted (hence the name)

Conditions that may generate interrupts

Interrupts can be generated from several sources:

I/O Devices that need attention

User Interrupt (CTRL+ALT+DEL on a PC)

O/S interrupts (time slice finished)

Software requesting attention (a device or O/S function)

Hardware faults

O/S faults (kernel faults will be catastrophic – blue screen of death)

Run-time errors – generated by software; programmers can raise their own exceptions.

Interrupts generated by software are often referred to as exceptions.


Priorities

If all interrupts were given the same priority then they would keep interrupting each other. A time-independent I/O request (such as a file save operation) could, for example, interrupt the scheduling of a process (which would be bad).

To deal with this, interrupts are allocated a priority (essentially just a number).

This allows an interrupt to interrupt another if it has a higher priority.

Example

Interrupt level 3 occurs, O/S starts to process

Interrupt level 2 occurs, O/S ignores the interrupt

Interrupt level 5 occurs, O/S pauses processing of the level 3 interrupt and processes the level 5 one

When the level 5 one finishes, processing of the level 3 interrupt continues
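The pre-emption rule above can be sketched in a few lines of Python (an illustrative simulation, not how any particular O/S implements it): interrupts being processed sit on a stack, and a new interrupt only pre-empts the current one if its priority is strictly higher.

```python
def run_interrupts(events):
    """Simulate priority pre-emption for a sequence of interrupt levels.

    A new interrupt pre-empts the one being processed only if its
    priority is strictly higher; otherwise it is ignored, as in the
    worked example above."""
    stack = []  # interrupts being processed; pre-empted ones sit lower down
    log = []
    for prio in events:
        if stack and prio <= stack[-1]:
            log.append(f"ignore {prio}")
        else:
            log.append(f"start {prio}")
            stack.append(prio)
    while stack:  # finish the current interrupt, then resume the paused ones
        log.append(f"finish {stack.pop()}")
    return log

run_interrupts([3, 2, 5])
# ["start 3", "ignore 2", "start 5", "finish 5", "finish 3"]
```

A real O/S would normally queue a lower-priority interrupt rather than discard it; the sketch follows the simplified behaviour described in the example.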

Interrupt masks

Sometimes when processing an interrupt a programmer may not want to be interrupted by other interrupts.

When writing interrupt handlers often the first line of code disables interrupts below a certain priority,

allowing them to continue when the interrupt handler has finished. This is achieved by setting and clearing

the Interrupt register of the CPU.

How this works exactly depends on the CPU involved, but it involves the setting and clearing of bits, which is

why setting the allowable interrupt level is known as masking. The O/S uses this when deciding what

interrupts to ignore and which to process.

Generally all interrupts can be blocked except critical O/S interrupts (such as the scheduler); these are known as NMIs or Non-Maskable Interrupts (as there is no way of masking their processing).
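What "setting and clearing of bits" means can be sketched as follows. The 8-level mask register and its layout are made up for illustration; real CPUs differ in width and convention.

```python
# Hypothetical 8-level interrupt mask register: bit n set = level n enabled.

def mask_below(mask, level):
    """Disable (mask off) all interrupt levels strictly below `level`."""
    return mask & ~((1 << level) - 1)

def is_enabled(mask, level):
    """Check whether interrupts of the given level are currently allowed."""
    return bool(mask & (1 << level))

mask = 0b11111111           # all eight levels enabled
mask = mask_below(mask, 5)  # a level-5 handler masks levels 0-4
# mask is now 0b11100000: levels 5-7 still enabled, 0-4 masked
```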


Memory management

Memory is a physical device, with memory locations organised in a linear fashion. Each memory location is accessed using a memory address. Memory addresses start at 0 and continue up to the maximum amount of RAM available.

Memory management in a multi-tasking environment is an important issue for OS designers. The OS must be

responsible for controlling which parts of main memory processes can access. (Main memory is the RAM of a

computer)

This is only possible with special registers and functions built into modern CPUs; a software-only solution to

memory management is not possible.

The OS must ensure that processes within the system do not:

Access the program code of another process

Access the data belonging to another process

Overwrite the data or code of another process

Directly access any part of the OS

Paging

Paging is one way of tackling the memory protection problem.

Main memory is split into a number of physical pages, each occupying a fixed number of bytes (typical page

sizes from 256 bytes up to 64KB).

The OS numbers these pages from 0 onwards (0 being the page occupying the lowest part of main memory).

The OS maintains a list of all the free pages in main memory

When a process is loaded by the OS, pages from the free list are allocated

The OS maintains a list of the pages occupied by code and data of processes within the system

A list of pages used by each individual process is maintained in each process control block

If a process attempts to access a page which it does not own then the OS can take action against the process

(Killing or disabling the process, letting the operator know what has happened through some form of dialog)

This is made possible by relative addressing, which allows the OS to load program code and data anywhere in main memory. A program's code and data do not have to reside in consecutive pages; the OS invisibly controls access to code and data within separate pages. The application does not need to be aware of where it is physically stored; the memory management unit makes all the address translations transparently.
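The address translation described above can be sketched as a simple model in which a process's page table is just a list mapping its logical pages to physical pages (real MMUs do this in hardware):

```python
PAGE_SIZE = 256  # bytes; one of the typical page sizes mentioned above

def translate(page_table, logical_addr):
    """Translate a process's logical address into a physical address.

    page_table[i] gives the physical page that holds logical page i
    of this process; the offset within the page is unchanged."""
    page, offset = divmod(logical_addr, PAGE_SIZE)
    if page >= len(page_table):
        # The process touched a page it does not own: the OS takes action.
        raise MemoryError("access outside the pages owned by this process")
    return page_table[page] * PAGE_SIZE + offset

# A process owning logical pages 0-2, scattered over physical pages 7, 2, 5:
translate([7, 2, 5], 300)  # logical page 1, offset 44 -> 2*256 + 44 = 556
```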


Segmentation

Memory is split into logical divisions known as segments.

These segments are of variable size (unlike pages which are fixed) to accommodate whatever is being stored.

Often Applications have multiple segments allocated to them for different types of data, these may include

the following segments:

Text segment: Executable code (machine code of the application)

Data segment: the already initialised data for the application

BSS segment: uninitialised data (space for dynamic data structures)

Because they are of variable size, managing segments is more complex than managing a paged memory system. Each of the segments of an application will have its own memory protection and management.


Virtual memory

Virtual memory is an extension of paging. Often called paged virtual memory, it allows the OS to run programs and manipulate data files that are larger than main memory can accommodate. The CPU can only access code and data held in main memory; virtual memory must work around this limitation.

Uses fast secondary storage (a disk drive – it must be fast for this to work effectively) to hold additional (virtual) pages of memory.

Secondary storage space is cheaper than main memory.

The O/S maintains a list of logical pages (of which it can create as many as it likes).

The logical pages can exist in RAM (occupying physical pages)

Pages stored in secondary storage are referred to as being in the swap file, they are listed in a form of disk

directory (which is used to quickly look up the location of pages)

The OS maintains a record about each page containing information such as

Is this page in main memory or in the swap file

The last time the page was accessed

Number of times this page has been accessed

Whether this page is accessed straight after another page

The location of a page is transparent to the process that owns it (it doesn’t need to know where pages are

stored)

If a process tries to access one of its pages stored in the swap file then the OS must load it into main memory

first.

In order to accommodate this page in main memory, a page currently in main memory must be moved into the swap file. The page record (above) is used to determine which page to move.

The OS can decide which pages to move into the swap file in many different ways

Longest time without access

Page not used for the longest time is moved into the swap file

Heuristic approach

The OS monitors how pages are being accessed by processes

It predicts when certain pages will be needed and ensures they are in main memory when needed

When the OS thinks they are not needed they are automatically moved into the swap file (making space for

other pages)

This ‘intelligent’ approach can make virtual memory very effective
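The "longest time without access" policy is what is usually called LRU (least recently used) replacement. A minimal sketch, assuming the page record tracks only access order:

```python
from collections import OrderedDict

class LRUPager:
    """Toy model of 'page not used for the longest time goes to the swap file'."""

    def __init__(self, frames):
        self.frames = frames           # physical pages available in RAM
        self.resident = OrderedDict()  # pages in RAM, least recently used first
        self.swapped_out = []          # pages moved into the swap file

    def access(self, page):
        if page in self.resident:
            self.resident.move_to_end(page)  # just accessed: now most recent
            return
        # Page fault: make room if RAM is full, then load the page.
        if len(self.resident) >= self.frames:
            victim, _ = self.resident.popitem(last=False)
            self.swapped_out.append(victim)
        self.resident[page] = True
```

With two physical pages, accessing pages 1, 2, 1, 3 swaps out page 2, because page 1 was used more recently.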


Device I/O

The communication between devices and the CPU is one of the main jobs of the operating system. A device is any piece of equipment that can communicate with the CPU. Devices that are external to the main computer unit are called peripheral devices.

Communicating with devices involves Data transfer and signalling.

Data Transfer

Sending data to a device

Receiving data from a device

Data Transfer between memory and devices is achieved in two main ways:

CPU responsible for the data transfer (Happens on simple computer systems)

Input/Output Channel responsible for the data transfer. (Modern computers)

Data block info (address, number of bytes) passed by OS to I/O channel

I/O channel acts independently of the CPU

OS can get on with the job of running process/tasks using the CPU

When the transfer is complete the I/O channel signals the OS to let it know


Signalling

Communications between the Operating system and devices (physical or logical devices)

Sending messages to a device telling it to do something

Receiving messages from a device about its status

Signalling is achieved by two methods, appropriate in different situations:

Polling

Used when status of a device is wanted at a regular interval (e.g. mouse, keyboard, gamepad)

Only suitable if the time interval is large (milliseconds).

The polling rate depends on the device; e.g. a mouse is often polled very quickly (around 200Hz) to give smooth movement, as it is used for primary interaction with the operating system.

OS responsible for asking devices what their status is

Device sends immediate response of its status to the OS

Too much polling can halt an OS, leaving it no time to run programs

Interrupts

Are used when the status of a device can change at random intervals

Removes burden from OS - Device responsible for generating interrupt.

OS stops what it’s doing and runs an interrupt handler

Interrupt handler must execute quickly so OS can use the CPU

Interrupts have priorities: Low for fast devices – as they will be able to get on quickly once they have got

their response

High for slow devices (e.g. printers) – as they need a response quickly so they can continue doing their

jobs


Buffering I/O

A buffer is a temporary area of RAM which is used to hold data waiting to be processed. If buffers were not used then data might be lost from the device if the O/S was not ready to receive it.

Buffers act as a link between devices which operate at different speeds, e.g. the CPU and the keyboard.

Buffers can be read (emptied) and written (filled) but only one action can be carried out at a time.

When the buffer is full the O/S can be signalled (as it was probably off doing something else) so that it can process the data in the buffer. The device will then have to wait until the buffer has been processed before it can continue.

Double buffers

Are a way of speeding up reading and writing to buffers by allowing simultaneous reads and writes. It enables

the operating system to keep the slower device working as often as possible.

The process is as follows (the read/write can obviously be performed the other way round):

Device fills Buffer One

O/S can now process Buffer One

Device can now fill Buffer Two, while Buffer One is busy (being emptied)

Used on graphics cards so 3d scenes can be rendered while the previous one is displayed (sometimes triple

buffers are used to keep the slower device even busier)

[Diagram: the device fills Buffer One while the O/S waits; then the device fills Buffer Two while the O/S empties Buffer One, and the roles keep swapping.]

Single Buffer – can only read or write (not at same time)

Double Buffer – can read and write (different buffers)

Keeps slower device working most of the time
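The swapping of roles can be simulated like this (a toy model, not from the notes: the chunks stand in for data arriving from the device):

```python
def double_buffered_transfer(chunks):
    """While the device fills one buffer, the O/S empties the other;
    the two buffers swap roles after every chunk."""
    buffers = [[], []]
    processed = []
    fill, empty = 0, 1
    for chunk in chunks:
        buffers[fill] = list(chunk)           # device fills its buffer...
        if buffers[empty]:
            processed.extend(buffers[empty])  # ...while the O/S empties the other
            buffers[empty] = []
        fill, empty = empty, fill             # swap roles for the next chunk
    for b in buffers:                         # drain whatever remains at the end
        processed.extend(b)
    return processed

double_buffered_transfer(["ab", "cd", "ef"])
# ['a', 'b', 'c', 'd', 'e', 'f'] - data arrives in order, nothing lost
```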


I/O Cache

Some devices, e.g. disk drives, carry out a high number of data transfers. This often involves some mechanical process, which is inherently slow.

One way of reducing the amount of physical reads and writes is to introduce a cache system. This works in a

similar way as CPU cache, which is used to reduce the number of reads and writes to RAM (which is slower

than Cache RAM).

A dedicated buffer in either RAM or physically connected to the device is used to hold the most recent or

often used data requested for the device.

When reading from the device, if the data is in the cache then this can be quickly retrieved without the

need to access the device.

When writing to the device, if the data is in the cache then this is updated in the cache but must also be

updated on the device.

Writing is the trade-off with cache; reading is where we get the true speed up.

The actual mechanisms used depend on the device and its corresponding driver; there are several ways to maximise the performance of the cache, and it is down to the manufacturer to sort this out.
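The read-cache / write-through behaviour described above can be sketched with a dict standing in for the physical device:

```python
class DiskCache:
    """Toy I/O cache: reads are served from the cache when possible;
    writes update both the cache and the device (write-through)."""

    def __init__(self, disk):
        self.disk = disk         # dict of block number -> data (the 'device')
        self.cache = {}
        self.physical_reads = 0  # counts slow mechanical accesses

    def read(self, block):
        if block not in self.cache:
            self.physical_reads += 1
            self.cache[block] = self.disk[block]
        return self.cache[block]

    def write(self, block, data):
        self.cache[block] = data  # update the cache...
        self.disk[block] = data   # ...and the device too, hence the trade-off
```

Reading the same block twice costs only one physical read; that is where the speed-up on reads comes from.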

Logical and Physical devices

A device handler (device driver) knows how to communicate with a particular device. The physical hardware is known as the physical device.

OS sends instructions to the device handler, which acts as an interpreter between the OS and a particular

device.

The physical devices that a device handler communicates with appear to the O/S as a logical device.

For every physical device there should be a corresponding logical device.

The device handler must be able to perform all the requests that the O/S may pass to it. The O/S does not

know how the device handler performs these requests. This allows a Device handler to work with devices that

do and don’t actually exist.

This is what Daemon Tools takes advantage of in order to mount CD/DVD images (which are just files) and make them appear as logical drives, which act just like a real CD.

Logical devices can also be used to create RAM disks, which effectively use a large buffer in RAM to pretend to

be a hard disk. This will appear in My Computer as another Hard Disk, the device driver must be able to

perform (or at least handle) any request that a normal hard disk may be asked to perform (including format!).

The examples given here all relate to disk devices, but you could implement any type of logical device.


Memory Mapped I/O

A method of controlling and communicating with devices that share the same address and data buses as the CPU. It is a simple way of connecting more than one device to the address and data buses, made possible by the MMU.

Memory addresses used to refer to registers on device, often by using one memory address to specify the

register to select and another to specify either the data to send to the register or the data read from a register

on the device.

Memory Management Unit (MMU)

Responsible for mapping memory addresses to RAM and other devices. It takes addresses specified by the MAR and determines whether the location resides in RAM or whether the address maps to another device connected to the address and data buses.

Set blocks of Memory addresses are used to map to locations other than RAM.

These memory addresses usually reside at the very end of the memory addressing range.

MEMORY ADDRESS RANGE MAPPING ASSIGNMENT

0x00000000 0x0000 in BIOS (for booting purposes)

0x00000001 to 0x00000FFF Hardware vectors

0x00001000 to 0x0FFFFFFF RAM

0xF0000000 to 0xF0FFFFFF Graphics RAM on graphics card

0xFF000000 to 0xFF0FFFFF Sound Card RAM

0xFFF00000 to 0xFFF0FFFF I/O buffers

0xFFFF0000 to 0xFFFFFFFF 0x0000 to 0xFFFF of Graphics BIOS ROM

(For example, address 0xFFFFFC02 maps a register inside the keyboard processor of an Atari ST.)

Memory mapped I/O can be used to do the following operations:

Reading & writing registers within a control processor

Reading & writing data I/O ports

Reading memory locations inside ROM chips

Executing code inside ROM chips (passing the address to the PC)

Allowing access to more RAM/ROM through Bank switching (done on Atari VCS, NES, SNES, MegaDrive

etc…)
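Conceptually, the MMU's decision is a range lookup. A sketch using a cut-down version of the example address map above (the map is illustrative, as in the table):

```python
# Cut-down version of the example address map above (illustrative only).
REGIONS = [
    (0x00001000, 0x0FFFFFFF, "RAM"),
    (0xF0000000, 0xF0FFFFFF, "Graphics RAM"),
    (0xFF000000, 0xFF0FFFFF, "Sound card RAM"),
    (0xFFF00000, 0xFFF0FFFF, "I/O buffers"),
]

def decode(addr):
    """Decide which device (if any) a memory address maps to."""
    for lo, hi, name in REGIONS:
        if lo <= addr <= hi:
            return name
    return "unmapped"

decode(0x00002000)  # "RAM"
decode(0xF0000010)  # "Graphics RAM"
```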


Accounting and security

An important task in a multi-user O/S is to keep track of who can do what and who did what. In order to do this all users need accounts that define the permissions allowed for that user. The actual permissions possible depend on the particular O/S, but general sets include those for:

Administrators who have access to all O/S features and facilities

Users who have limited access to features and facilities

Security systems work well and O/S security systems are very complex; they are let down by users not following procedures.

Procedures on multi-user networks

In order to minimise threats caused by user carelessness, policies need to be put in place by administrators on the network:

Encryption and copying policies

Password policies

Acceptable use policies

Program installation blocking for users

Download blocking policies (exe, zip, scripts etc…)

Virus scanning of attachments

E-mail spam filtering

USB device policies

Plus many others.

Encryption

A method of coding data so it cannot be read by an unauthorised third party; the process is reversible.

A private key is used to control an algorithm to encrypt the data.

This data can be decrypted by supplying the key.

Many methods of encryption exist; keys generally need to be at least 128 bits long so that they cannot be brute-forced (broken by simple methods) or broken by examining similar encrypted data.

Some things are encrypted using one-way encryption, so they can never be re-read. This is mainly used for passwords, PINs, etc.

The PIN is encrypted and held by the bank; when you enter your PIN it is encrypted and compared with the encrypted version stored at the bank.
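The PIN check can be sketched with a one-way hash function. SHA-256 is used here purely for illustration; real banking systems use dedicated, deliberately slow password-hashing schemes, and the salt handling here is simplified.

```python
import hashlib

def store_pin(pin, salt):
    """One-way: only the hash is ever stored, never the PIN itself."""
    return hashlib.sha256((salt + pin).encode()).hexdigest()

def check_pin(entered, salt, stored_hash):
    """Hash what the user typed and compare with the stored hash."""
    return store_pin(entered, salt) == stored_hash

stored = store_pin("1234", "some-salt")
check_pin("1234", "some-salt", stored)  # True
check_pin("0000", "some-salt", stored)  # False
```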


Public key encryption

This is the use of a secret encoding system to prevent hackers from reading data transmitted down a

communications line. The sender uses software to encrypt the data (translate it into the secret code) and

software at the receiving end decrypts it (translates it back to normal text).

Encryption is often used for highly sensitive financial, legal, military and security-related information. The EFT

(‘electronic funds transfer’) system, by means of which banks transfer large amounts of money in electronic

form, is protected by the most advanced encryption techniques.

The ENCRYPTION KEY is the method of translating the message into the code. The DECRYPTION key reverses

this process to return you to the original message.

One problem with encryption is this: how do you tell someone what your decryption key is without it being

intercepted? (You can’t send it in encrypted form, as the recipient would already need to know the decryption

key in order to understand it!)

This problem was solved in the 1970s with the invention of PUBLIC KEY ENCRYPTION. The essential features

of this system are:

1. Each individual or organisation has a decryption key, which is used to decipher any messages sent to

them. This decryption key is private -- known only to them.

2. However, the corresponding encryption key for each individual or organisation will be generally known.

3. The complexity of the encryption key is such that it is not possible in practice to work out the decryption

key from it.

Activity logs

Most operating systems can log in detail many actions performed by users, such as:

What programs have been run by a user

What files have been browsed, read, written

How much time spent running programs

What activities have been done with those programs (particularly used with web browsers)


File management

File manipulation

In order to manage file access, you have to have user accounts. Once set up, you can give users permissions to access and manipulate files in the standard ways.

File and Folder

Create

Read

Delete

Copy

Archiving

Hiding

You can also allow users to view, share and access certain folders and block others. This is particularly important on web servers; guest users need browse access to the folders containing web pages.

You may do some of these tasks on your home network to allow a printer to be shared by all the computers

on your network or to allow you to copy/read files from your central computer.

Archiving

Archiving will generally compress files that have not been used for a certain period of time, which can free up disk space. This is not the same as making a backup. Archived files will still be accessible, but they will need to be decompressed as you open them.

Data Compression

This is a technique for enabling the data in a file to be represented using a smaller number of bits, thus

reducing storage requirements as well as data transmission times. One method involves taking frequently

occurring combinations of characters (e.g. ‘th’ and ‘ng’ in English) and replacing them with single character

codes. Another technique, applicable to graphics files, involves representing groups of pixels that are all the

same colour by a pair of numbers, one giving the colour code and the other giving the number of pixels in the

group (as opposed to repeating the colour code for each individual pixel).
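The pixel technique just described is run-length encoding. A minimal sketch:

```python
def rle_encode(pixels):
    """Replace each run of identical colour codes with a (colour, count)
    pair, instead of repeating the code for every pixel."""
    runs = []
    for colour in pixels:
        if runs and runs[-1][0] == colour:
            runs[-1][1] += 1          # extend the current run
        else:
            runs.append([colour, 1])  # start a new run
    return [(c, n) for c, n in runs]

rle_encode([7, 7, 7, 7, 2, 2, 9])  # [(7, 4), (2, 2), (9, 1)]
```

Here seven pixels become three pairs; the method only pays off when runs are long, which is why it suits graphics with large areas of the same colour.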


File systems, File Allocation Table – FAT

The O/S needs to physically store files on the secondary storage device. In order to access these files efficiently it needs to know their physical locations. It does this using a File Allocation Table or FAT. The file allocation system needs to implement security features so files are only accessible by authorised users.

The surface of a disk is split into clusters, each of which stores a fixed number of bytes. Typical cluster sizes range from 512 bytes up to 32KB. The actual size of the clusters depends on two things:

Size of the storage device

File system in use (NTFS, FAT, FAT32 etc...)

The File Allocation table is essentially a linked list which identifies the physical clusters used by all the files on

the disk. Each cluster needs a unique address for the FAT. Different FAT systems use a different size integer to

hold the cluster ID.
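The linked-list structure can be sketched as follows (the end-of-chain marker used here is illustrative, not the exact FAT16 value):

```python
END = 0xFFFF  # illustrative end-of-chain marker, not the exact FAT16 code

def file_clusters(fat, start):
    """Follow a file's chain through the FAT: each entry holds the number
    of the file's next cluster, until the end-of-chain marker."""
    chain = []
    cluster = start
    while cluster != END:
        chain.append(cluster)
        cluster = fat[cluster]
    return chain

# A file starting at cluster 2 that continues in clusters 5 and 3:
fat = {2: 5, 5: 3, 3: END}
file_clusters(fat, 2)  # [2, 5, 3]
```

The directory entry for a file only needs to store the first cluster number; the FAT supplies the rest of the chain.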

FAT - FAT(16)

Standard FAT (FAT16) uses a 16 bit unsigned number to identify the clusters, this means you can address

65536 different clusters on the disk.

For a 1GB disk this means that 1GB/65536 = 16384 or 16KB cluster size

FAT16 has a maximum cluster size of 32KB, which means it cannot access disks above 2GB (65536 × 32KB).

FAT32

Uses a 32-bit unsigned number to identify clusters (though only 28 bits are used), which allows a theoretical drive limit of 8TB (terabytes). Microsoft limits this internally to 32GB, so for disk sizes larger than this you need to use NTFS, which uses a different technology based on tree structures organised around individual clusters.

Clusters and Files

Whatever file system we use, we try to keep the cluster size as small as possible so that we do not waste physical drive space.

Example:

If we have a cluster size of 32KB and a file of size 33KB, we will require 2 clusters to store this file, which means we waste about 31KB of space (clusters cannot be subdivided). So the logical size of the file is 33KB but the physical space it occupies is actually 64KB. If many files like this are stored, a large part of the physical space on the drive is wasted.

In the above example, if we had clusters of size 4KB we would need 9 clusters to store the file, which would mean we only wasted approximately 3KB of space. The trade-off is in choosing an appropriate file system.

It is the job of the File system to make sure we make efficient use of available physical space.
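The arithmetic in the example above generalises to a short calculation:

```python
import math

def allocation(file_kb, cluster_kb):
    """Clusters needed, and KB wasted, when a file must occupy whole clusters."""
    clusters = math.ceil(file_kb / cluster_kb)
    return clusters, clusters * cluster_kb - file_kb

allocation(33, 32)  # (2, 31): two 32KB clusters, 31KB wasted
allocation(33, 4)   # (9, 3):  nine 4KB clusters, only 3KB wasted
```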

The FAT is usually stored in multiple locations on the disk surface. This is known as redundancy and allows

recovery of files in case a portion of the disk surface is damaged.


Fragmentation

In this example FAT we have four files stored: A, B, C and D (a dash marks a free cluster).

Cluster#  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17 18 19 20 21 22
File      A  A  B  B  A  B  A  B  C  C  C  A  D  D  D  -  -  -  -  -  -  -

Files C and D are in consecutive clusters, when they are accessed this will be quick as there will not need to be

much mechanical movement of the disk drive (this is relatively slow)

Files A and B are stored in non-consecutive clusters, they are fragmented. There will be a need to move the

disk drive heads or wait for the surface to spin to the correct position under the heads.

As more files are added the fragmentation will get worse, as fewer and fewer free clusters will be in consecutive locations.

A defragmenter will try to re-organise the clusters allocated so that files appear in consecutive clusters. This

takes ages.

Deletion of files

Files do not get physically deleted; the cluster allocations in the FAT are added to the list of free clusters ready

to be used again. This is how we can recover data from disks even after they have been formatted. The only

way to prevent recovery is to physically write data over unused clusters.
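The idea can be modelled in a few lines of Python (a toy in-memory model; file names, cluster numbers and contents are invented for illustration, not a real FAT):

```python
# toy model: allocation table, free list, and raw cluster contents
fat = {'A': [1, 2, 5], 'B': [3, 4]}        # file -> allocated clusters
free_clusters = [6, 7, 8]
disk = {1: 'da', 2: 'ta', 3: 'he', 4: 're', 5: '!!'}  # cluster contents

def delete(name):
    """Deleting only returns the clusters to the free list."""
    free_clusters.extend(fat.pop(name))

delete('B')
print(sorted(free_clusters))   # → [3, 4, 6, 7, 8]
print(disk[3] + disk[4])       # → 'here': old contents still on "disk"
```

Nothing in `disk` was touched by the deletion, which is why undelete tools can recover the data until those clusters are reused.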

Boot files

Boot files are a way of getting the operating system loaded.

There are various levels of boot files:

The Master boot record

The master boot record is automatically loaded by the BIOS (Basic Input Output System). It specifies basic information about the operating system to load, such as the location of the O/S boot files.

Various storage devices can hold master boot records, CDs for example; this is how Windows gets installed. If set to do so, the BIOS can be asked to locate master boot records on CD/DVD, USB devices and across network cards. This allows flexible O/S loading as well as multi-O/S booting.

The following boot files are specific to the older Windows (pre-XP) platforms, but similar files exist for other O/Ss.

Boot.ini       Specifies multi-O/S boot options (by default this does not force anything to start up). If you have installed multiple O/Ss this file will be modified to bring up a boot choice so you can start the desired O/S.

Win.ini        Contains information about Windows settings at boot up, such as desktop settings

AutoExec.bat   Contains information about applications to run at start-up

Config.sys     Contains driver install information and file buffering settings


Scheduling

One of the core functions of the O/S is the management and running of programs and applications.

Process

A process is a program that requires CPU time in order to fulfil its goals. CPU time is a special case of a resource: processes need to be allocated CPU time just as they need printers and disk drives.

Task and job are other names that in general mean the same thing as process. Programs can also generate further units of execution themselves; these are called threads, and they run alongside the program that generated them.

On computers with a single CPU, only a single process can execute at any given time.

A modern multi-tasking OS can manage many processes, allocating CPU time to each so that they can complete their goals.

Until a process has completed its goals, it is said to be “Alive” or “Live”.

While live, a process can be in one of 3 states: READY, RUNNING or BLOCKED.

The transitions between the states are:

READY → RUNNING     time slice granted
RUNNING → READY     time slice completed
RUNNING → BLOCKED   device I/O request, or sleeping, or O/S suspended the process
BLOCKED → READY     I/O device available, or interrupt (wake up), or O/S unblocked it
READY → BLOCKED     blocked by O/S or by user

Page 21: 3.3.1 the Function of Operating Systems

3.3.1 Function of Operating Systems page 19

Created by E Barwell Spring 2008 rev Autumn 2009, rev Aug 2012

Running/Executing

The process has been allocated CPU time and is now being executed.

Ready

The process could execute if given the chance; it is only waiting for the OS to allocate it CPU time.

Blocked/Waiting

The process is waiting for some event to happen, and cannot execute until it has occurred. For example:

Waiting for an I/O device to become available

Waiting for an I/O device request to be completed – e.g. a file transfer from disk to memory, or data that the process wants to manipulate

The process has gone to sleep (it has completed the work it wants to do for now and will be woken later by an interrupt)

Once the event that blocked a process has occurred, the process moves on to the ready state.
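The legal state changes described above can be captured as a small transition table; this Python sketch (labels paraphrased from these notes, not from any real kernel) rejects illegal moves:

```python
# allowed transitions for a live process, keyed by (from, to)
TRANSITIONS = {
    ('READY', 'RUNNING'): 'time slice granted',
    ('RUNNING', 'READY'): 'time slice completed',
    ('RUNNING', 'BLOCKED'): 'I/O request, sleep, or suspended by O/S',
    ('BLOCKED', 'READY'): 'I/O available, interrupt, or unblocked by O/S',
    ('READY', 'BLOCKED'): 'blocked by O/S or by user',
}

def move(state, new_state):
    """Allow only the legal transitions for a live process."""
    if (state, new_state) not in TRANSITIONS:
        raise ValueError(f'illegal transition: {state} -> {new_state}')
    return new_state

s = 'READY'
for nxt in ('RUNNING', 'BLOCKED', 'READY', 'RUNNING'):
    s = move(s, nxt)
print(s)  # → 'RUNNING'
```

Note that a process can never jump straight from BLOCKED to RUNNING: it must pass through READY first.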



Process Control Block

Each process has its state stored in a record maintained by the OS, known as the Process Control Block. Many things are stored in this; here is a selection of the most common items (different operating systems store different things):

Handle – a reference to the process

Name of a process

Owner of process (OS or user id) – who can control this process

Process state

Allocated CPU time so far

Estimated time to completion (Only really possible with batch processes)

Allocated resources

Memory allocations (see Memory management for more details)

The memory allocations may include:

What pages are occupied by the process code

What pages are occupied by the process data

What pages have been allocated for dynamic data
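A Process Control Block can be sketched as a simple record type; the field names below are illustrative only, since different operating systems store different things:

```python
from dataclasses import dataclass, field

@dataclass
class ProcessControlBlock:
    handle: int                 # reference to the process
    name: str
    owner: str                  # OS or user id - who can control it
    state: str = 'READY'        # READY / RUNNING / BLOCKED
    cpu_time_used_ms: int = 0   # allocated CPU time so far
    resources: list = field(default_factory=list)    # allocated resources
    code_pages: list = field(default_factory=list)   # pages holding code
    data_pages: list = field(default_factory=list)   # pages holding data

pcb = ProcessControlBlock(handle=1, name='editor', owner='user42')
print(pcb.state)  # → 'READY'
```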

Scheduling

The scheduler is the name given to the part of the OS that decides which process is allocated CPU time (i.e. which process is next scheduled to use the CPU).

As the scheduler is part of the OS, it also needs CPU time to execute (remember, the OS is a program too!). Whichever method of scheduling is chosen, the scheduler needs to spend as little time as possible deciding which process goes next (it uses up CPU time while deciding).

How scheduling works

CPU time is broken into slices, known as time slices

Time slicing is achieved by means of a clock, which generates an interrupt at periodic (regular) intervals

This interrupt tells the OS to hand control to the scheduler (the scheduler is the interrupt handler for this particular interrupt)

The scheduler saves the state of the current process into its process control block

The scheduler then decides which process to run next and allows the CPU to execute it
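The steps above can be sketched as a clock-interrupt handler; this toy Python version (FIFO pick, invented data structures) parks the current process and selects the next one:

```python
def timer_interrupt(current, ready_queue, pcbs):
    """Sketch of the scheduler as the clock-interrupt handler."""
    pcbs[current]['state'] = 'READY'   # save the current process's state
    ready_queue.append(current)        # it rejoins the back of the queue
    nxt = ready_queue.pop(0)           # decide which process runs next
    pcbs[nxt]['state'] = 'RUNNING'     # and hand it the CPU
    return nxt

pcbs = {'A': {'state': 'RUNNING'}, 'B': {'state': 'READY'}}
ready = ['B']
running = timer_interrupt('A', ready, pcbs)
print(running, ready)  # → B ['A']
```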


It is important to allocate priorities to all the processes in a system, so that important processes are allocated more CPU time and so execute more often. This can be achieved by:

Allocating longer time slices to high priority processes

Allocating more time slices to high priority processes

Having ready queues for different priorities

High priority processes include

Crucial OS tasks (such as screen updates)

I/O intensive processes (when they get I/O resources they get a chance to execute before the OS takes the resources back from them)

Short running processes (whose activity will be over in a short time)

Low priority processes include

CPU intensive processes (ones that don't require any resources other than CPU time – mathematical or number-crunching processes)

Long running processes (whose duration may be very long)

Batch processes (a special case of the above, as they will take a long time to execute anyway)

Simple scheduling schemes (how the scheduler decides which process to run)

Round Robin

A single queue of ready processes; a process joins the back of the queue when its time slice is up

High priority processes may be allowed to run more than one time slice at a time

Low priority processes are given a single time slice at a time
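Round robin can be modelled with a simple queue; in this Python sketch (made-up process names, one time slice per turn for every process) unfinished processes rejoin the back:

```python
from collections import deque

# each entry: (name, time slices still needed); all equal priority here
ready = deque([('A', 2), ('B', 1), ('C', 3)])
order = []

while ready:
    name, remaining = ready.popleft()        # take the process at the front
    order.append(name)                       # give it one time slice
    if remaining > 1:
        ready.append((name, remaining - 1))  # not finished: rejoin the back

print(order)  # → ['A', 'B', 'C', 'A', 'C', 'C']
```

Giving a high priority process several consecutive slices would just mean appending its name more than once per turn.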

Shortest job first

The scheduler picks the process that will take the smallest amount of time to execute. This requires being able to estimate the run time of a process. It effectively gives priority to short jobs.

Shortest time remaining

The scheduler looks at how long a process has run, and picks the one with the shortest amount of time left to run. This gives priority to jobs that have nearly finished, so they can be completed and forgotten about.
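Both schemes reduce to choosing by a time estimate; this Python sketch (job names and estimates invented) shows shortest job first as a sort and shortest time remaining as a minimum:

```python
# shortest job first: order by estimated total run time
jobs = [('backup', 40), ('print', 3), ('compile', 12)]
sjf_order = [name for name, _ in sorted(jobs, key=lambda j: j[1])]
print(sjf_order)  # → ['print', 'compile', 'backup']

# shortest time remaining: pick the job closest to finishing
remaining = [('backup', 35), ('print', 1), ('compile', 10)]
next_job = min(remaining, key=lambda j: j[1])[0]
print(next_job)  # → 'print'
```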

Priority queues

A queue for each level of priority

Processes waiting in the high priority queue are allocated CPU time first, and so on down to the lowest queue
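Priority queues can be sketched as one list per level; this toy Python version (process names invented) always serves the highest-priority non-empty queue first:

```python
# one ready queue per priority level; level 0 is the highest priority
queues = {0: ['osUpdate'], 1: ['ioJob'], 2: ['numberCrunch']}

def next_process(queues):
    """First process from the highest-priority non-empty queue."""
    for level in sorted(queues):
        if queues[level]:
            return queues[level].pop(0)
    return None  # nothing is ready

print(next_process(queues))  # → 'osUpdate'
print(next_process(queues))  # → 'ioJob' (queue 0 is now empty)
```

A real scheduler would also need a rule for moving long-waiting processes up, otherwise low-priority work could starve.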

