
BC0042 – Operating Systems

Assignment Set – 1

Q1. What are the services provided by Operating Systems? Explain briefly.

Following are the five services provided by operating systems for the convenience of the users.

Program Execution

The purpose of a computer system is to allow the user to execute programs. So the operating system provides an environment where the user can conveniently run programs. The user does not have to worry about memory allocation or multitasking; these things are taken care of by the operating system.

Running a program involves allocating and de-allocating memory, and CPU scheduling in the case of multiple processes. These functions cannot be given to user-level programs, so user-level programs cannot help the user run programs independently, without help from the operating system.

I/O Operations

Each program requires input and produces output. This involves the use of I/O. The operating system hides from the user the details of the underlying hardware for I/O. All the user sees is that the I/O has been performed, without any of the details. So the operating system, by providing I/O, makes it convenient for users to run programs.

For efficiency and protection, users cannot control I/O directly, so this service cannot be provided by user-level programs.

File System Manipulation

The output of a program may need to be written into new files or input taken from some files. The operating system provides this service. The user does not have to worry about secondary storage management. The user gives a command for reading from or writing to a file and sees his/her task accomplished. Thus the operating system makes it easier for user programs to accomplish their tasks.

This service involves secondary storage management. The speed of I/O that depends on secondary storage management is critical to the speed of many programs, so it is best left to the operating system to manage rather than giving individual users control of it. It would not be difficult for user-level programs to provide these services, but for the reasons mentioned above it is best if this service is left with the operating system.
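To make the idea concrete, here is a minimal C sketch of a user program relying on this service through the POSIX open/read/write/close system calls; the file name is illustrative, and the kernel handles all of the underlying disk management.

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    char buf[128];
    int fd = open("example.txt", O_RDONLY);    /* kernel locates the file on disk  */
    if (fd < 0) { perror("open"); return 1; }
    ssize_t n = read(fd, buf, sizeof(buf));    /* kernel performs the device I/O   */
    if (n > 0)
        write(STDOUT_FILENO, buf, (size_t)n);  /* copy what was read to stdout     */
    close(fd);
    return 0;
}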

Communications

There are instances where processes need to communicate with each other to exchange information. It may be between processes running on the same computer or on different computers. By providing this service the operating system relieves the user from the worry of passing messages between processes. In cases where messages need to be passed to processes on other computers through a network, it can be done by user programs. The user program may be customized to the specifications of the hardware through which the message transits, and it provides the service interface to the operating system.
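As a small illustration of OS-provided inter-process communication, the following C sketch passes a message from a child process to its parent through a pipe; the message text is arbitrary.

#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int fds[2];
    char buf[32];
    if (pipe(fds) < 0) { perror("pipe"); return 1; }
    if (fork() == 0) {                       /* child: writes a message         */
        const char *msg = "hello";
        close(fds[0]);
        write(fds[1], msg, strlen(msg) + 1);
        _exit(0);
    }
    close(fds[1]);
    read(fds[0], buf, sizeof(buf));          /* parent: receives it via the kernel */
    printf("parent received: %s\n", buf);
    wait(NULL);
    return 0;
}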

Error Detection

An error in one part of the system may cause malfunctioning of the complete system. To avoid such a situation the operating system constantly monitors the system to detect errors. This relieves the user from the worry of errors propagating to various parts of the system and causing malfunctioning.

This service cannot be handled by user programs because it involves monitoring and, in some cases, altering areas of memory or de-allocating the memory of a faulty process, or perhaps relinquishing the CPU from a process that goes into an infinite loop. These tasks are too critical to be handed over to user programs. A user program, if given these privileges, could interfere with the correct (normal) operation of the operating system.


Q2. What is a Micro-kernel? What are the benefits of a Micro-kernel?

Micro-kernels

We have already seen that as UNIX expanded, the kernel became large and difficult to manage. In the mid-1980s, researchers at Carnegie Mellon University developed an operating system called Mach that modularized the kernel using the microkernel approach. This method structures the operating system by removing all nonessential components from the kernel and implementing them as system-level and user-level programs. The result is a smaller kernel. There is little consensus regarding which services should remain in the kernel and which should be implemented in user space. Typically, however, micro-kernels provide minimal process and memory management, in addition to a communication facility.

Fig. 2.3: Microkernel Architecture (device drivers, file server, client processes and virtual memory run above the microkernel, which sits directly on the hardware)

The main function of the microkernel is to provide a communication facility between the client program and the various services that are also running in user space. Communication is provided by message passing. The client program and a service never interact directly; rather, they communicate indirectly by exchanging messages with the microkernel.
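The following toy C program is only an illustration of the message-passing idea, not a real microkernel API: msg_send(), msg_receive() and the one-slot mailbox are made-up stand-ins for the kernel's IPC primitives.

#include <stdio.h>
#include <string.h>

/* Toy illustration: the "kernel" is just a one-slot mailbox that copies
   messages between a client and a user-space file server.                */
struct message {
    int  operation;                 /* e.g. 1 = OPEN                       */
    char data[64];                  /* request or reply payload            */
};

static struct message mailbox;      /* kernel-owned message buffer         */

static void msg_send(const struct message *m) { mailbox = *m; }
static void msg_receive(struct message *m)    { *m = mailbox; }

int main(void)
{
    struct message req = { 1, "report.txt" }, reply;

    msg_send(&req);                  /* client -> kernel                    */

    msg_receive(&reply);             /* file server picks up the request    */
    printf("file server: OPEN request for %s\n", reply.data);
    strcpy(reply.data, "fd=3");
    msg_send(&reply);                /* server -> kernel                    */

    msg_receive(&reply);             /* client gets the reply               */
    printf("client: reply %s\n", reply.data);
    return 0;
}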

One benefit of the microkernel approach is ease of extending the operating system. All new services are added to user space and consequently do not require modification of the kernel. When the kernel does have to be modified, the changes tend to be fewer, because the microkernel is a smaller kernel. The resulting operating system is easier to port from one hardware design to another. The microkernel also provides more security and reliability, since most services run as user rather than kernel processes; if a service fails, the rest of the operating system remains untouched.

Several contemporary operating systems have used the microkernel approach. Tru64 UNIX (formerly Digital UNIX) provides a UNIX interface to the user, but it is implemented with a Mach kernel. The Mach kernel maps UNIX system calls into messages to the appropriate user-level services.

The following figure shows the UNIX operating system architecture. At the center is the hardware, covered by the kernel. Above that are the UNIX utilities and the command interface, such as the shell (sh).


Q3. Draw the diagram of the UNIX kernel components and explain each component briefly.

UNIX kernel Components

The UNIX kernel has components as depicted in figure 2.5 below. The figure is divided into three levels: user mode, kernel mode, and hardware. The user mode contains user programs which can access the services of the kernel components using the system call interface.

The kernel mode has four major components: system calls, the file subsystem, the process control subsystem, and hardware control. The system calls are the interface between user programs and the file and process control subsystems. The file subsystem is responsible for file and I/O management through device drivers.

The process control subsystem contains the scheduler, inter-process communication and memory management. Finally, hardware control is the interface between these two subsystems and the hardware.

Fig. 2.5: Unix kernel components

Another example is QNX. QNX is a real-time operating system that is also based on the microkernel design. The QNX microkernel provides services for message passing and process scheduling. It also handles low-level network communication and hardware interrupts. All other services in QNX are provided by standard processes that run outside the kernel in user mode.


Unfortunately, microkernels can suffer from performance decreases due to increased system-function overhead. Consider the history of Windows NT. The first release had a layered microkernel organization. However, this version delivered low performance compared with that of Windows 95. Windows NT 4.0 partially redressed the performance problem by moving layers from user space to kernel space and integrating them more closely. By the time Windows XP was designed, its architecture was more monolithic than microkernel.


Q4. Explain the seven-state process model used in operating systems with the necessary diagram.

Seven State Process Model

The following figure 3.2 shows the seven-state process model, which uses the swapping technique described above.

Apart from the transitions we have seen in the five-state model, the following are the new transitions which occur in the seven-state model.

• Blocked to Blocked / Suspend: If there are no ready processes in main memory, at least one blocked process is swapped out to make room for another process that is not blocked.

• Blocked / Suspend to Blocked: If a process terminates, making space in main memory, and there is a high-priority process which is blocked but suspended, then, anticipating that it will become unblocked very soon, that process is brought into main memory.

• Blocked / Suspend to Ready / Suspend: A process is moved from Blocked / Suspend to Ready / Suspend if the event it was waiting on occurs but there is still no space for it in main memory.

• Ready / Suspend to Ready: If there are no ready processes in main memory, the operating system has to bring one into main memory to continue execution. Sometimes this transition takes place even when there are ready processes in main memory, if they have lower priority than one of the processes in the Ready / Suspend state; in that case the higher-priority process is brought into main memory.


• Ready to Ready / Suspend: Normally blocked processes are suspended by the operating system, but sometimes, to free a large block of memory, a ready process may be suspended. In this case the low-priority processes are normally the ones suspended.

• New to Ready / Suspend: When a new process is created, it should be added to the Ready state. But sometimes sufficient memory may not be available to allocate to the newly created process. In this case, the new process is shifted to Ready / Suspend.
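The sketch below models a few of these states and the Ready / Suspend to Ready (activate) transition in C; the state names and the activate() helper are illustrative only, not part of any particular kernel.

#include <stdio.h>

/* Minimal sketch of the seven process states; names are illustrative. */
enum pstate {
    NEW, READY, RUNNING, BLOCKED, EXIT,
    READY_SUSPEND,        /* ready but swapped out to disk   */
    BLOCKED_SUSPEND       /* blocked and swapped out to disk */
};

struct process {
    int         pid;
    enum pstate state;
};

/* Activate: bring a suspended-ready process back into main memory. */
static void activate(struct process *p)
{
    if (p->state == READY_SUSPEND)
        p->state = READY;          /* swap the image in, then mark it ready */
}

int main(void)
{
    struct process p = { 7, READY_SUSPEND };
    activate(&p);
    printf("process %d state = %d (READY)\n", p.pid, p.state);
    return 0;
}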


Q5. Define process and threads and differentiate between them.

What is a Process?

The notion of a process is central to the understanding of operating systems. The term process is used somewhat interchangeably with 'task' or 'job'. There are quite a few definitions presented in the literature, for instance:

• A program in execution.
• An asynchronous activity.
• The entity to which processors are assigned.
• The 'dispatchable' unit.

And many more, but the definition "program in execution" seems to be the most frequently used, and it is the concept we will use in the present study of operating systems.

Now that we have agreed upon the definition of a process, the question is: what is the relation between a process and a program? Are they the same thing with different names, or is it that when a process is not executing it is called a program, and when it is executing it becomes a process?

To be very precise, a process is not the same as a program. A process is more than the program code. A process is an 'active' entity, as opposed to a program, which is considered a 'passive' entity. A program is an algorithm expressed in some programming language. Being passive, a program is only a part of a process. A process, on the other hand, includes:

• The current value of the Program Counter (PC).
• The contents of the processor's registers.
• The values of the variables.
• The process stack, which typically contains temporary data such as subroutine parameters, return addresses, and temporary variables.
• A data section that contains global variables.

A process is the unit of work in a system.

In the process model, all the software on the computer is organized into a number of sequential processes. A process includes the PC, registers, and variables. Conceptually, each process has its own virtual CPU. In reality, the CPU switches back and forth among processes.

Threads

A thread is a single sequential stream of execution within a process. Because threads have some of the properties of processes, they are sometimes called lightweight processes. Within a process, threads allow multiple streams of execution. In many respects, threads are a popular way to improve application performance through parallelism. The CPU switches rapidly back and forth among the threads, giving the illusion that the threads are running in parallel. Like a traditional process (i.e., a process with one thread), a thread can be in any of several states (Running, Blocked, Ready or Terminated). Each thread has its own stack, since each thread will generally call different procedures and thus have a different execution history. In an operating system that has a thread facility, the basic unit of CPU utilization is a thread. A thread consists of a program counter (PC), a register set, and a stack space. Threads are not independent of one another the way processes are; as a result, threads share with the other threads of the same task their code section, data section and OS resources, such as open files and signals.

Processes Vs Threads

As we mentioned earlier, in many respects threads operate in the same way as processes. Some of the similarities and differences are:

Similarities

• Like processes, threads share the CPU and only one thread is running at a time.
• Like processes, threads within a process execute sequentially.
• Like processes, a thread can create children.
• And like processes, if one thread is blocked, another thread can run.

Differences

• Unlike processes, threads are not independent of one another.
• Unlike processes, all threads can access every address in the task.
• Unlike processes, threads are designed to assist one another. (Processes might or might not assist one another, because processes may originate from different users.)

Why Threads?

Following are some reasons why we use threads in designing operating systems.

1. A process with multiple threads makes a great server, for example a printer server.
2. Because threads can share common data, they do not need to use interprocess communication.
3. By their very nature, threads can take advantage of multiprocessors.

Threads are cheap in the sense that

1. They only need a stack and storage for registers; therefore, threads are cheap to create.

2. Threads use very few resources of the operating system in which they are working. That is, threads do not need a new address space, global data, program code or operating system resources.

3. Context switching is fast when working with threads. The reason is that we only have to save and/or restore PC, SP and registers.


Advantages of Threads over Multiple Processes

• Context Switching: Threads are very inexpensive to create and destroy, and they are inexpensive to represent. For example, they require space to store the PC, the SP, and the general-purpose registers, but they do not require separate space for memory-management information, information about open files or I/O devices in use, etc. With so little context, it is much faster to switch between threads; in other words, a context switch using threads is relatively cheap.

• Sharing: Threads allow the sharing of many resources that cannot be shared between processes, for example the code section, data section and operating system resources such as open files.

A proxy server satisfying the requests of a number of computers on a LAN would benefit from a multi-threaded process. In general, any program that has to do more than one task at a time could benefit from multitasking. For example, a program that reads input, processes it, and writes output could have three threads, one for each task.
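A minimal POSIX threads sketch of this idea follows: three threads of one process update the same global counter directly, with a mutex for safety, and no inter-process communication mechanism is needed. (Compile with -pthread; the names are illustrative.)

#include <pthread.h>
#include <stdio.h>

static long counter = 0;                              /* shared by all threads */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);    /* shared data still needs mutual exclusion */
        counter++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void)
{
    pthread_t t[3];
    for (int i = 0; i < 3; i++)
        pthread_create(&t[i], NULL, worker, NULL);
    for (int i = 0; i < 3; i++)
        pthread_join(t[i], NULL);
    printf("counter = %ld\n", counter);   /* 300000: every thread saw the same data */
    return 0;
}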

Disadvantages of Threads over Multiple Processes

• Blocking: The major disadvantage is that if the kernel is single-threaded, a system call by one thread will block the whole process, and the CPU may be idle during the blocking period.

• Security: Since there is extensive sharing among threads, there is a potential security problem. It is quite possible for one thread to overwrite the stack of another thread (or damage shared data), although this is unlikely since threads are meant to cooperate on a single task.

Any sequential process that cannot be divided into parallel tasks will not benefit from threads, as each task would block until the previous one completes. For example, a program that displays the time of day would not benefit from multiple threads.


Q6. What is virtual memory? What is its significance in an operating system?

Virtual memory is a common part of most operating systems on desktop computers. It has become so common because it provides a big benefit for users at a very low cost.

Most computers today have something like 64 or 128 megabytes of RAM (random-access memory) available for use by the CPU (central processing unit). Often, that amount of RAM is not enough to run all of the programs that most users expect to run at once. For example, if you load the Windows operating system, an e-mail program, a Web browser and a word processor into RAM simultaneously, 64 megabytes is not enough to hold it all. If there were no such thing as virtual memory, your computer would have to say, "Sorry, you cannot load any more applications. Please close an application to load a new one." With virtual memory, the computer can look for areas of RAM that have not been used recently and copy them onto the hard disk. This frees up space in RAM to load the new application. Because it does this automatically, you don't even know it is happening, and it makes your computer feel like it has unlimited RAM space even though it has only 32 megabytes installed. Because hard-disk space is so much cheaper than RAM chips, virtual memory also provides a nice economic benefit.

The area of the hard disk that stores the RAM image is called a page file. It holds pages of RAM on the hard disk, and the operating system moves data back and forth between the page file and RAM. (On a Windows machine, page files have a .SWP extension.)

Of course, the read/write speed of a hard drive is much slower than RAM, and the technology of a hard drive is not geared toward accessing small pieces of data at a time. If your system has to rely too heavily on virtual memory, you will notice a significant performance drop. The key is to have enough RAM to handle everything you tend to work on simultaneously. Then, the only time you "feel" the slowness of virtual memory is in the slight pause that occurs when you change tasks. When you have enough RAM for your needs, virtual memory works beautifully. When you don't, the operating system has to constantly swap information back and forth between RAM and the hard disk. This is called thrashing, and it can make your computer feel incredibly slow.


Assignment Set – 2

Q1. What are the jobs of the CPU scheduler? Explain any two scheduling algorithms.

CPU Scheduler

Whenever the CPU becomes idle, it is the job of the CPU Scheduler (a.k.a. the short-term scheduler) to select another process from the ready queue to run next. The storage structure for the ready queue and the algorithm used to select the next process are not necessarily a FIFO queue. There are several alternatives to choose from, as well as numerous adjustable parameters for each algorithm, which is the basic subject of this entire unit.

Preemptive Scheduling

CPU scheduling decisions take place under one of four conditions:

1. When a process switches from the running state to the waiting state, such as for an I/O request or invocation of the wait( ) system call.

2. When a process switches from the running state to the ready state, for example in response to an interrupt.

3. When a process switches from the waiting state to the ready state, say at completion of I/O or a return from wait( ).

4. When a process terminates.

For conditions 1 and 4 there is no choice: a new process must be selected. For conditions 2 and 3 there is a choice: either continue running the current process, or select a different one. If scheduling takes place only under conditions 1 and 4, the system is said to be non-preemptive, or cooperative. Under these conditions, once a process starts running it keeps running until it either voluntarily blocks or finishes. Otherwise the system is said to be preemptive. Windows used non-preemptive scheduling up to Windows 3.x, and started using pre-emptive scheduling with Windows 95. Macs used non-preemptive scheduling prior to OS X, and pre-emptive scheduling since then. Note that pre-emptive scheduling is only possible on hardware that supports a timer interrupt. It should also be noted that pre-emptive scheduling can cause problems when two processes share data, because one process may get interrupted in the middle of updating shared data structures.

Preemption can also be a problem if the kernel is busy implementing a system call (e.g. updating critical kernel data structures) when the preemption occurs. Most modern UNIXes deal with this problem by making the process wait until the system call has either completed or blocked before allowing the preemption. Unfortunately this solution is problematic for real-time systems, as real-time response can no longer be guaranteed. Some critical sections of code protect themselves from concurrency problems by disabling interrupts before entering the critical section and re-enabling interrupts on exiting it. Needless to say, this should only be done in rare situations, and only on very short pieces of code that will finish quickly (usually just a few machine instructions).

Dispatcher

The dispatcher is the module that gives control of the CPU to the process selected by the scheduler. This function involves:

• Switching context.
• Switching to user mode.
• Jumping to the proper location in the newly loaded program.

The dispatcher needs to be as fast as possible, as it is run on every context switch. The time consumed by the dispatcher is known as dispatch latency.

Scheduling Algorithms

The following subsections will explain several common scheduling strategies, looking at only a single CPU burst each for a small number of processes. Obviously real systems have to deal with a lot more simultaneous processes executing their CPU-I/O burst cycles.

First-Come First-Serve Scheduling, FCFS

FCFS is very simple – Just a FIFO queue, like customers waiting in line at the bank or the post office or at a copying machine. Unfortunately, however, FCFS can yield some very long average wait times, particularly if the first process to get there takes a long time. For example, consider the following three processes:

Process   Burst Time
P1        24
P2         3
P3         3

In the first Gantt chart below, process P1 arrives first. The average waiting time for the three processes is (0 + 24 + 27) / 3 = 17.0 ms. In the second Gantt chart below, the same three processes have an average wait time of (0 + 3 + 6) / 3 = 3.0 ms. The total run time for the three bursts is the same, but in the second case two of the three finish much quicker, and the other process is only delayed by a short amount.
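The following short C program reproduces these two averages; the fcfs_avg_wait() helper is just an illustration of the arithmetic, not part of any real scheduler.

#include <stdio.h>

/* FCFS waits for bursts {24, 3, 3}: in arrival order P1,P2,P3 the waits are
   0, 24, 27 (average 17.0 ms); in order P2,P3,P1 they are 0, 3, 6 (3.0 ms). */
static double fcfs_avg_wait(const int burst[], int n)
{
    int wait = 0, total = 0;
    for (int i = 0; i < n; i++) {
        total += wait;            /* process i waits for everything before it */
        wait  += burst[i];
    }
    return (double)total / n;
}

int main(void)
{
    int order1[] = { 24, 3, 3 };
    int order2[] = { 3, 3, 24 };
    printf("P1 first : %.1f ms\n", fcfs_avg_wait(order1, 3));   /* 17.0 */
    printf("P1 last  : %.1f ms\n", fcfs_avg_wait(order2, 3));   /*  3.0 */
    return 0;
}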


FCFS can also block the system in a busy dynamic system in another way, known as the convoy effect. When one CPU intensive process blocks the CPU, a number of I/O intensive processes can get backed up behind it, leaving the I/O devices idle. When the CPU hog finally relinquishes the CPU, then the I/O processes pass through the CPU quickly, leaving the CPU idle while everyone queues up for I/O, and then the cycle repeats itself when the CPU intensive process gets back to the ready queue.

Shortest-Job-First Scheduling, SJF

The idea behind the SJF algorithm is to pick the quickest, smallest job that needs to be done, get it out of the way first, and then pick the next smallest job to do. (Technically this algorithm picks a process based on the next shortest CPU burst, not the overall process time.) For example, the Gantt chart below is based upon the following CPU burst times (and the assumption that all jobs arrive at the same time):

Process   Burst Time
P1         6
P2         8
P3         7
P4         3

In the case above the average wait time is (0 + 3 + 9 + 16) / 4 = 7.0 ms, (as opposed to 10.25 ms for FCFS for the same processes.)

SJF can be proven to give the minimum average waiting time of any scheduling algorithm, but it suffers from one important problem: how do you know how long the next CPU burst is going to be?

• For long-term batch jobs this can be done based upon the limits that users set for their jobs when they submit them, which encourages them to set low limits, but risks their having to re-submit the job if they set the limit too low. However, that does not work for short-term CPU scheduling on an interactive system.
• Another option would be to statistically measure the run-time characteristics of jobs, particularly if the same tasks are run repeatedly and predictably. But once again that really isn't a viable option for short-term CPU scheduling in the real world.
• A more practical approach is to predict the length of the next burst, based on some historical measurement of recent burst times for this process. One simple, fast, and relatively accurate method is the exponential average, which can be defined as follows:

estimate[i + 1] = alpha * burst[i] + (1.0 - alpha) * estimate[i]

• In this scheme the previous estimate contains the history of all previous times, and alpha serves as a weighting factor for the relative importance of recent data versus past history. If alpha is 1.0, then past history is ignored, and we assume the next burst will be the same length as the last burst. If alpha is 0.0, then all measured burst times are ignored, and we just assume a constant burst time. Most commonly alpha is set at 0.5, as illustrated in Figure 5.3:

Fig. 5.3: Prediction of the length of the next CPU burst
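A short C sketch of the exponential-average calculation follows; the burst sequence and the initial estimate of 10 ms are illustrative values, not prescribed ones.

#include <stdio.h>

/* Exponential average as defined above:
   estimate[i+1] = alpha * burst[i] + (1 - alpha) * estimate[i]. */
int main(void)
{
    double alpha = 0.5, estimate = 10.0;            /* initial guess */
    double bursts[] = { 6.0, 4.0, 6.0, 4.0, 13.0, 13.0, 13.0 };
    int n = sizeof(bursts) / sizeof(bursts[0]);

    for (int i = 0; i < n; i++) {
        printf("predicted %.2f, actual %.1f\n", estimate, bursts[i]);
        estimate = alpha * bursts[i] + (1.0 - alpha) * estimate;
    }
    return 0;
}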

SJF can be either preemptive or non-preemptive. Preemption occurs when a new process arrives in the ready queue that has a predicted burst time shorter than the time remaining in the process whose burst is currently on the CPU. Preemptive SJF is sometimes referred to as shortest remaining time first scheduling. For example, the following Gantt chart is based upon the following data:

Process   Arrival Time   Burst Time
P1        0              8
P2        1              4
P3        2              9
P4        3              5

The average wait time in this case is ((5 - 3) + 0 + (10 - 1) + (17 - 2)) / 4 = 26 / 4 = 6.5 ms (P2 runs as soon as it arrives, so its wait is 0), as opposed to 7.75 ms for non-preemptive SJF or 8.75 ms for FCFS.
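The following small C simulation of shortest-remaining-time-first scheduling reproduces the 6.5 ms figure for the four processes above; it is a sketch for checking the arithmetic, not a production scheduler.

#include <stdio.h>

int main(void)
{
    int arrival[]   = { 0, 1, 2, 3 };
    int burst[]     = { 8, 4, 9, 5 };
    int remaining[] = { 8, 4, 9, 5 };
    int n = 4, done = 0, time = 0, wait = 0;

    while (done < n) {
        int next = -1;
        for (int i = 0; i < n; i++)            /* among arrived processes, pick  */
            if (arrival[i] <= time && remaining[i] > 0 &&
                (next < 0 || remaining[i] < remaining[next]))
                next = i;                      /* the shortest remaining time    */
        if (next < 0) { time++; continue; }    /* CPU idle                       */
        remaining[next]--;                     /* run it for one time unit       */
        time++;
        if (remaining[next] == 0) {
            done++;
            wait += time - arrival[next] - burst[next];   /* turnaround - burst  */
        }
    }
    printf("average wait = %.2f ms\n", (double)wait / n);  /* prints 6.50 */
    return 0;
}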


Q2. What do you mean by Deadlock? How can deadlock be prevented?

Introduction

Recall that one definition of an operating system is a resource allocator. There are many resources that can be allocated to only one process at a time, and we have seen several operating system features that allow this, such as mutexes, semaphores or file locks.

Sometimes a process has to reserve more than one resource. For example, a process which copies files from one tape to another generally requires two tape drives. A process which deals with databases may need to lock multiple records in a database.

A deadlock is a situation in which two computer programs sharing the same resource are effectively preventing each other from accessing the resource, resulting in both programs ceasing to function.

The earliest computer operating systems ran only one program at a time. All of the resources of the system were available to this one program. Later, operating systems ran multiple programs at once, interleaving them. Programs were required to specify in advance what resources they needed so that they could avoid conflicts with other programs running at the same time. Eventually some operating systems offered dynamic allocation of resources. Programs could request further allocations of resources after they had begun running. This led to the problem of the deadlock.

Deadlock Prevention

The difference between deadlock avoidance and deadlock prevention is a little subtle. Deadlock avoidance refers to a strategy where whenever a resource is requested, it is only granted if it cannot result in deadlock. Deadlock prevention strategies involve changing the rules so that processes will not make requests that could result in deadlock.

Here is a simple example of such a strategy. Suppose every possible resource is numbered (easy enough in theory, but often hard in practice), and processes must make their requests in order; that is, they cannot request a resource with a number lower than any of the resources that they have been granted so far. Deadlock cannot occur in this situation.

As an example, consider the dining philosophers problem. Suppose each chopstick is numbered, and philosophers always have to pick up the lower-numbered chopstick before the higher-numbered chopstick. Philosopher 5 picks up chopstick 4, philosopher 4 picks up chopstick 3, philosopher 3 picks up chopstick 2, and philosopher 2 picks up chopstick 1. Philosopher 1 is hungry and, without this rule, would pick up chopstick 5, thus causing deadlock. However, if the lower-number rule is in effect, he/she has to pick up chopstick 1 first, and it is already in use, so he/she is blocked. Philosopher 5 picks up chopstick 5, eats, and puts both chopsticks down, allowing philosopher 4 to eat. Eventually everyone gets to eat.
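A minimal C sketch of the same numbered-resource rule using two pthread mutexes follows; the resource array and the worker() function are illustrative names. Because both threads acquire the mutexes in the same global order, a circular wait (and hence deadlock) cannot form.

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t resource[2] = {
    PTHREAD_MUTEX_INITIALIZER, PTHREAD_MUTEX_INITIALIZER
};

static void *worker(void *arg)
{
    int id = *(int *)arg;
    pthread_mutex_lock(&resource[0]);   /* always resource 0 before resource 1 */
    pthread_mutex_lock(&resource[1]);
    printf("thread %d holds both resources\n", id);
    pthread_mutex_unlock(&resource[1]);
    pthread_mutex_unlock(&resource[0]);
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    int id1 = 1, id2 = 2;
    pthread_create(&t1, NULL, worker, &id1);
    pthread_create(&t2, NULL, worker, &id2);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}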

An alternative strategy is to require all processes to request all of their resources at once, and either all are granted or none are granted. Like the above strategy, this is conceptually easy but often hard to implement in practice because it assumes that a process knows what resources it will need in advance.


Q3. Explain the algorithm of Peterson's method for mutual exclusion.

Mutual exclusion by Peterson’s Method:

The algorithm uses two variables, flag, a boolean array and turn, an integer. A true flag value indicates that the process wants to enter the critical section. The variable turn holds the id of the process whose turn it is. Entrance to the critical section is granted for process P0 if P1 does not want to enter its critical section or if P1 has given priority to P0 by setting turn to 0.

flag[0] = false;
flag[1] = false;
turn = 0;

/* Process 0 */
while (true)
{
    flag[0] = true;
    turn = 1;
    while (flag[1] && turn == 1)
        /* no operation */;
    /* critical section */;
    flag[0] = false;
    /* remainder */;
}

/* Process 1 */
while (true)
{
    flag[1] = true;
    turn = 0;
    while (flag[0] && turn == 0)
        /* no operation */;
    /* critical section */;
    flag[1] = false;
    /* remainder */;
}


Q4. Explain how the block size affects I/O operations to read a file.

Figure 1 shows the general I/O structure associated with many medium-scale processors. Note that the I/O controllers and main memory are connected to the main system bus. The cache memory (usually found on-chip with the CPU) has a direct connection to the processor, as well as to the system bus.

Figure 1: A general I/O structure for a medium-scale processor system

Note that the I/O devices shown here are not connected directly to the system bus; they interface with another device called an I/O controller. In simpler systems, the CPU may also serve as the I/O controller, but in systems where throughput and performance are important, I/O operations are generally handled outside the processor.

Until relatively recently, the I/O performance of a system was somewhat of an afterthought for systems designers. The reduced cost of high-performance disks, permitting the proliferation of virtual memory systems, and the dramatic reduction in the cost of high-quality video display devices, have meant that designers must pay much more attention to this aspect to ensure adequate performance in the overall system.

Because of the different speeds and data requirements of I/O devices, different I/O strategies may be useful, depending on the type of I/O device which is connected to the computer. Because the I/O devices are not synchronized with the CPU, some information must be exchanged between the CPU and the device to ensure that the data is received reliably. This interaction between the CPU and an I/O device is usually referred to as "handshaking". For a complete "handshake," four events are important:

• The device providing the data (the talker) must indicate that valid data is now available.
• The device accepting the data (the listener) must indicate that it has accepted the data. This signal informs the talker that it need not maintain this data word on the data bus any longer.
• The talker indicates that the data on the bus is no longer valid, and removes the data from the bus. The talker may then set up new data on the data bus.
• The listener indicates that it is not now accepting any data on the data bus. The listener may use data previously accepted during this time, while it is waiting for more data to become valid on the bus.

Note that the talker and the listener each supply two signals. The talker supplies a signal (say, data valid, or DAV) at step (1), and another signal (say, data not valid) at step (3); both of these can be coded as a single binary value, DAV, which takes the value 1 at step (1) and 0 at step (3). The listener supplies a signal (say, data accepted, or DAC) at step (2), and a signal (say, data not now accepted) at step (4); it, too, can be coded as a single binary variable, DAC. Because only two binary variables are required, the handshaking information can be communicated over two wires, and the form of handshaking described above is called a two-wire handshake. Other forms of handshaking are used in more complex situations; for example, where there may be more than one controller on the bus, or where the communication is among several devices. Figure 2 shows a timing diagram for the signals DAV and DAC which identifies the timing of the four events described previously.

Figure 2: Timing diagram for two-wire handshake

Either the CPU or the I/O device can act as the talker or the listener. In fact, the CPU may act as a talker at one time and a listener at another. For example, when communicating with a terminal screen (an output device) the CPU acts as a talker, but when communicating with a terminal keyboard (an input device) the CPU acts as a listener.
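The following toy, single-threaded C walk-through labels the four handshake events, using ordinary variables in place of the DAV and DAC wires; it is only meant to make the sequence of events explicit, not to model a real bus.

#include <stdio.h>

int main(void)
{
    int DAV = 0, DAC = 0, data_bus = 0, received = 0;

    data_bus = 0x41;                 /* talker places a byte on the bus       */
    DAV = 1;                         /* (1) talker: data valid                */

    if (DAV) { received = data_bus; DAC = 1; }   /* (2) listener accepts data */

    DAV = 0; data_bus = 0;           /* (3) talker: data no longer valid      */

    if (!DAV) DAC = 0;               /* (4) listener: not accepting any data  */

    printf("listener received 0x%02X (DAV=%d DAC=%d)\n", received, DAV, DAC);
    return 0;
}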


Q5. Explain programmed I/O and interrupt I/O. How do they differ?

Interrupt-controlled I/O reduces the severity of the two problems mentioned for program-controlled I/O by allowing the I/O device itself to initiate the device service routine in the processor. This is accomplished by having the I/O device generate an interrupt signal which is tested directly by the hardware of the CPU. When the interrupt input to the CPU is found to be active, the CPU itself initiates a subprogram call to somewhere in the memory of the processor; the particular address to which the processor branches on an interrupt depends on the interrupt facilities available in the processor.

The simplest type of interrupt facility is where the processor executes a subprogram branch to some specific address whenever an interrupt input is detected by the CPU. The return address (the location of the next instruction in the program that was interrupted) is saved by the processor as part of the interrupt process.

If there are several devices which are capable of interrupting the processor, then with this simple interrupt scheme the interrupt handling routine must examine each device to determine which one caused the interrupt. Also, since only one interrupt can be handled at a time, there is usually a hardware "priority encoder" which allows the device with the highest priority to interrupt the processor, if several devices attempt to interrupt the processor simultaneously. In Figure 3, the "handshake out" outputs would be connected to a priority encoder to implement this type of I/O. The other connections remain the same. (Some systems use a "daisy chain" priority system to determine which of the interrupting devices is serviced first. "Daisy chain" priority resolution is discussed later.)

In most modern processors, interrupt return points are saved on a "stack" in memory, in the same way as return addresses for subprogram calls are saved. In fact, an interrupt can often be thought of as a subprogram which is invoked by an external device. If a stack is used to save the return address for interrupts, it is then possible to allow one interrupt to interrupt the handling routine of another interrupt. In modern computer systems, there are often several "priority levels" of interrupts, each of which can be disabled, or "masked." There is usually one type of interrupt input which cannot be disabled (a non-maskable interrupt) which has priority over all other interrupts. This interrupt input is used for warning the processor of potentially catastrophic events such as an imminent power failure, to allow the processor to shut down in an orderly way and to save as much information as possible.

Most modern computers make use of "vectored interrupts." With vectored interrupts, it is the responsibility of the interrupting device to provide the address in main memory of the interrupt servicing routine for that device. This means, of course, that the I/O device itself must have sufficient "intelligence" to provide this address when requested by the CPU, and also to be initially "programmed" with this address information by the processor. Although somewhat more complex than the simple interrupt system described earlier, vectored interrupts provide such a significant advantage in interrupt handling speed and ease of implementation (i.e., a separate routine for each device) that this method is almost universally used on modern computer systems.

Some processors have a number of special inputs for vectored interrupts (each acting much like the simple interrupt described earlier). Others require that the interrupting device itself provide the interrupt address as part of the process of interrupting the processor.
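As a rough illustration of the vectored-interrupt idea, the C sketch below uses a table of function pointers indexed by an interrupt vector; the device names and the dispatch() helper are hypothetical, not any particular processor's mechanism.

#include <stdio.h>

#define NUM_VECTORS 4

static void timer_isr(void)    { printf("timer interrupt serviced\n"); }
static void keyboard_isr(void) { printf("keyboard interrupt serviced\n"); }

/* The "vector table": each entry holds the address of one device's routine. */
static void (*vector_table[NUM_VECTORS])(void) = {
    timer_isr,       /* vector 0 */
    keyboard_isr,    /* vector 1 */
    NULL, NULL
};

/* What the hardware does conceptually when a device presents its vector. */
static void dispatch(int vector)
{
    if (vector >= 0 && vector < NUM_VECTORS && vector_table[vector])
        vector_table[vector]();      /* jump straight to the device's routine */
}

int main(void)
{
    dispatch(1);     /* a keyboard device presented vector 1 */
    dispatch(0);     /* a timer device presented vector 0    */
    return 0;
}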


Q6. Explain briefly the architecture of Windows NT operating system.

Architecture of the Windows NT operating system line

The Windows NT operating system family’s architecture consists of two layers (user mode and kernel mode), with many different modules within both of these layers.

User mode in the Windows NT line is made up of subsystems capable of passing I/O requests to the appropriate kernel mode software drivers by using the I/O manager. Two subsystems make up the user mode layer of Windows 2000: the Environment subsystem (which runs applications written for many different types of operating systems) and the Integral subsystem (which performs operating-system-specific functions on behalf of the environment subsystem). Kernel mode in Windows 2000 has full access to the hardware and system resources of the computer. The kernel mode stops user mode services and applications from accessing critical areas of the operating system that they should not have access to.

The Executive interfaces with all the user mode subsystems. It deals with I/O, object management, security and process management. The hybrid kernel sits between the Hardware Abstraction Layer and the Executive to provide multiprocessor synchronization, thread and interrupt scheduling and dispatching, and trap handling and exception dispatching. The microkernel is also responsible for initializing device drivers at bootup. Kernel mode drivers exist in three levels: highest level drivers, intermediate drivers and low level drivers. The Windows Driver Model (WDM) exists in the intermediate layer and was mainly designed to be binary and source compatible between Windows 98 and Windows 2000. The lowest level drivers are either legacy Windows NT device drivers that control a device directly, or PnP hardware bus drivers.

User mode

The user mode is made up of subsystems which can pass I/O requests to the appropriate kernel mode drivers via the I/O manager (which exists in kernel mode). Two subsystems make up the user mode layer of Windows 2000: the Environment subsystem and the Integral subsystem.

The environment subsystem was designed to run applications written for many different types of operating systems. None of the environment subsystems can directly access hardware, and must request access to memory resources through the Virtual Memory Manager that runs in kernel mode. Also, applications run at a lower priority than kernel mode processes. Currently, there are three main environment subsystems: the Win32 subsystem, an OS/2 subsystem and a POSIX subsystem.


The Win32 environment subsystem can run 32-bit Windows applications. It contains the console as well as text window support, shutdown and hard-error handling for all other environment subsystems. It also supports Virtual DOS Machines (VDMs), which allow MS-DOS and 16-bit Windows 3.x (Win16) applications to be run on Windows. There is a specific MS-DOS VDM which runs in its own address space and which emulates an Intel 80486 running MS-DOS 5. Win16 programs, however, run in a Win16 VDM. Each program, by default, runs in the same process, thus using the same address space, and the Win16 VDM gives each program its own thread to run on. However, Windows 2000 does allow users to run a Win16 program in a separate Win16 VDM, which allows the program to be preemptively multitasked as Windows 2000 will pre-empt the whole VDM process, which only contains one running application. The OS/2 environment subsystem supports 16-bit character-based OS/2 applications and emulates OS/2 1.x, but not 2.x or later OS/2 applications. The POSIX environment subsystem supports applications that are strictly written to either the POSIX.1 standard or the related ISO/IEC standards.

The integral subsystem looks after operating system specific functions on behalf of the environment subsystem. It consists of a security subsystem, a workstation service and a server service. The security subsystem deals with security tokens, grants or denies access to user accounts based on resource permissions, handles logon requests and initiates logon authentication, and determines which system resources need to be audited by Windows 2000. It also looks after Active Directory. The workstation service is an API to the network redirector, which provides the computer access to the network. The server service is an API that allows the computer to provide network services.

Kernel mode

Windows 2000 kernel mode has full access to the hardware and system resources of the computer and runs code in a protected memory area. It controls access to scheduling, thread prioritization, memory management and the interaction with hardware. The kernel mode stops user mode services and applications from accessing critical areas of the operating system that they should not have access to; user mode processes ask the kernel mode to perform such operations on their behalf.

Kernel mode consists of executive services, which are themselves made up of many modules that do specific tasks, kernel drivers, a microkernel and a Hardware Abstraction Layer (HAL).

• Executive

The Executive interfaces with all the user mode subsystems. It deals with I/O, object management, security and process management. It contains various components, including the I/O Manager, the Security Reference Monitor, the Object Manager, the IPC Manager, the Virtual Memory Manager (VMM), a PnP Manager and Power Manager, as well as a Window Manager which works in conjunction with the Windows Graphics Device Interface (GDI). Each of these components exports a kernel-only support routine that allows other components to communicate with one another. Grouped together, the components can be called executive services. No executive component has access to the internal routines of any other executive component.

Each object in Windows 2000 exists in its own namespace (the object namespace can be browsed with SysInternals' WinObj tool).

The object manager is a special executive subsystem that all other executive subsystems must pass through to gain access to Windows 2000 resources – essentially making it a resource management infrastructure service. The object manager is used to reduce the duplication of object resource management functionality in other executive subsystems, which could potentially lead to bugs and make development of Windows 2000 harder. To the object manager, each resource is an object, whether that resource is a physical resource (such as a file system or peripheral) or a logical resource (such as a file). Each object has a structure or object type that the object manager must know about. When another executive subsystem requests the creation of an object, they send that request to the object manager which creates an empty object structure which the requesting executive subsystem then fills in. Object types define the object procedures and any data specific to the object. In this way, the object manager allows Windows 2000 to be an object oriented operating system, as object types can be thought of as classes that define objects.

Each instance of an object that is created stores its name, parameters that are passed to the object creation function, security attributes and a pointer to its object type. The object also contains an object close procedure and a reference count to tell the object manager how many other objects in the system reference that object and thereby determines whether the object can be destroyed when a close request is sent to it. Every object exists in a hierarchical object namespace.

