Post on 31-May-2020
transcript
Process Management
• Process Concept
• Process Scheduling
• Operations on Processes
• Inter-process Communication
Process Concept
• An operating system executes a variety of programs:
– Batch system – jobs
– Time-shared systems – user programs or tasks
• Textbook uses the terms job and process almost interchangeably
• Process – a program in execution; process execution must progress in a sequential fashion
• A process includes:
– program counter
– stack
– data section
Process in Memory
Process State
As a process executes, it changes state:
– new: The process is being created
– running: Instructions are being executed
– waiting: The process is waiting for some event to occur
– ready: The process is waiting to be assigned to a processor
– terminated: The process has finished execution
Process Control Block (PCB)
Each process is represented in the OS by a PCB
• Process state
• Program counter
• CPU registers
• CPU scheduling information
• Memory-management information
• Accounting information
• I/O status information
Also called the task control block.
Process Control Block (PCB)
CPU Switch From Process to Process
Process Scheduling
• Job queue – set of all processes in the system
• Ready queue – set of all processes residing in main memory, ready and waiting to execute
• Device queues – set of processes waiting for an I/O device
• Processes migrate among the various queues
Ready Queue And Various I/O Device Queues
Scheduling Queues
Queueing diagram representing process scheduling
Schedulers
• Long-term scheduler (or job scheduler) – selects which processes should be brought into the ready queue
• Short-term scheduler (or CPU scheduler) – selects which process should be executed next and allocates the CPU
• Process Mix
Addition of Medium Term Scheduling
Schedulers (Cont.)
• Short-term scheduler is invoked very frequently (milliseconds) (must be fast)
• Long-term scheduler is invoked very infrequently (seconds, minutes) (may be slow)
• The long-term scheduler controls the degree of multiprogramming
• Processes can be described as either:
– I/O-bound process – spends more time doing I/O than computations; many short CPU bursts
– CPU-bound process – spends more time doing computations; few very long CPU bursts
Context Switch
• When the CPU switches to another process, the system must save the state of the old process and load the saved state for the new process
• Context-switch time is overhead; the system does no useful work while switching
• Time dependent on hardware support
• State save, state restore
Operations on Processes
Process Creation
• Parent process creates children processes, which, in turn, create other processes, forming a tree of processes
• Resource sharing possibilities:
– Parent and children share all resources
– Children share a subset of the parent's resources
– Parent and child share no resources
• Execution possibilities:
– Parent and children execute concurrently
– Parent waits until children terminate
A tree of processes on a typical Solaris
Process Creation (Cont.)
• When a process creates a new process, two possibilities exist in terms of execution:
– Parent continues to execute concurrently with its children
– Parent waits until some or all of its children have terminated
• There are two possibilities in terms of the address space of the new process:
– Child is a duplicate of the parent
– Child has a new program loaded into it
Process Creation
UNIX examples:
• fork() system call creates a new process
• exec() system call is used after a fork() to replace the process' memory space with a new program
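The fork/exec pattern above can be sketched with Python's os module (POSIX-only). The helper name spawn_echo and the choice of /bin/echo as the new program are illustrative assumptions, not part of the original examples.

```python
import os

def spawn_echo():
    """Fork a child that exec()s /bin/echo; return the child's exit code."""
    pid = os.fork()
    if pid == 0:
        # Child: replace this process image with a new program (exec after fork)
        os.execv("/bin/echo", ["echo", "hello from child"])
    # Parent: wait until the child terminates, then collect its status
    _, status = os.waitpid(pid, 0)
    return os.WEXITSTATUS(status)
```

As in the slides, the parent's wait() corresponds to os.waitpid(), and exec() never returns in the child on success.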
Programs
• Process Creation in POSIX
• Process Creation in Win32
• Process Creation in Java
Process Termination
• Process executes last statement and asks the operating system to delete it (exit)
– Output data from child to parent (via wait)
– Process' resources are deallocated by the operating system
• Parent may terminate execution of children processes (abort)
– Child has exceeded allocated resources
– Task assigned to child is no longer required
– If parent is exiting
• Some operating systems do not allow a child to continue if its parent terminates
– All children terminated – cascading termination
Inter-process Communication
• Independent and cooperating processes
• Several reasons for providing an environment that allows process cooperation:
– Information sharing
– Computation speedup
– Modularity
– Convenience
IPC has two basic models:
• Message passing
• Shared memory
Shared memory systems
Producer-Consumer Problem
• Paradigm for cooperating processes: a producer process produces information that is consumed by a consumer process
– unbounded buffer places no practical limit on the size of the buffer
– bounded buffer assumes that there is a fixed buffer size
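A minimal bounded-buffer sketch, written here in Python rather than the Java version the slides reference; queue.Queue(maxsize=n) stands in for the fixed-size buffer, and all names are illustrative.

```python
import queue
import threading

def run_producer_consumer(n_items, buffer_size):
    """Producer fills a bounded buffer; consumer drains it concurrently."""
    buf = queue.Queue(maxsize=buffer_size)  # fixed-size (bounded) buffer
    consumed = []

    def producer():
        for i in range(n_items):
            buf.put(i)                      # blocks if the buffer is full

    def consumer():
        for _ in range(n_items):
            consumed.append(buf.get())      # blocks if the buffer is empty

    p = threading.Thread(target=producer)
    c = threading.Thread(target=consumer)
    p.start(); c.start()
    p.join(); c.join()
    return consumed
```

With an unbounded buffer the producer would never block; the maxsize argument is what makes this the bounded-buffer variant.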
Simulating Shared Memory in Java
Message Passing
• Message system – processes communicate with each other without resorting to shared variables
• Message-passing facility provides two operations:
– send(message) – message size fixed or variable
– receive(message)
• If P and Q wish to communicate, they need to:
– establish a communication link between them
– exchange messages via send/receive
• Implementation of communication link
– physical (e.g., shared memory, hardware bus)
– logical (e.g., logical properties)
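A small sketch of the send/receive pattern between two processes, assuming Python's multiprocessing.Pipe as the communication link (one illustrative implementation, not the only possible one):

```python
from multiprocessing import Process, Pipe

def child(conn):
    """Child process: receive a message, send back an acknowledgement."""
    msg = conn.recv()               # receive(message)
    conn.send("ack: " + msg)        # send(message)
    conn.close()

def exchange(message):
    """Establish a link, exchange one message with a child process."""
    parent_end, child_end = Pipe()  # establish the communication link
    p = Process(target=child, args=(child_end,))
    p.start()
    parent_end.send(message)
    reply = parent_end.recv()
    p.join()
    return reply
```

No shared variables are involved: all communication flows through the pipe's send/recv operations.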
Direct Communication
• Processes must name each other explicitly:
– send (P, message) – send a message to process P
– receive(Q, message) – receive a message from process Q
• Properties of communication link
– Links are established automatically
– A link is associated with exactly one pair of communicating processes
– Between each pair there exists exactly one link
– The link may be unidirectional, but is usually bidirectional
Indirect Communication
• Messages are sent to and received from mailboxes (also referred to as ports)
– Each mailbox has a unique id
– Processes can communicate only if they share a mailbox
• Properties of communication link
– Link established only if processes share a common mailbox
– A link may be associated with many processes
– Each pair of processes may share several communication links
– Link may be unidirectional or bidirectional
• Operations
– create a new mailbox
– send and receive messages through mailbox
– destroy a mailbox
• Primitives are defined as:
– send(A, message) – send a message to mailbox A
– receive(A, message) – receive a message from mailbox A
• Mailbox sharing
– P1, P2, and P3 share mailbox A
– P1, sends; P2 and P3 receive
– Who gets the message?
• Solutions
– Allow a link to be associated with at most two processes
– Allow only one process at a time to execute a receive operation
– Allow the system to select the receiver arbitrarily; the sender is notified who the receiver was
Synchronization
• Message passing may be either blocking or non-blocking
• Blocking is considered synchronous
– Blocking send: the sender blocks until the message is received
– Blocking receive: the receiver blocks until a message is available
• Non-blocking is considered asynchronous
– Non-blocking send: the sender sends the message and continues
– Non-blocking receive: the receiver receives either a valid message or null
Buffering
• Queue of messages attached to the link; implemented in one of three ways:
1. Zero capacity – 0 messages; sender must wait for receiver (rendezvous)
2. Bounded capacity – finite length of n messages; sender must wait if the link is full
3. Unbounded capacity – infinite length; sender never waits
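The bounded-capacity case can be illustrated with Python's queue module; maxsize here models the link capacity n (an assumption for illustration), and put_nowait() surfaces the point where a real sender would have to wait.

```python
import queue

# Bounded capacity, n = 2: the third send finds the link full, which is
# exactly the point where a blocking sender would wait.
link = queue.Queue(maxsize=2)
link.put_nowait("m1")
link.put_nowait("m2")
try:
    link.put_nowait("m3")       # link full: a real sender would block here
    overflowed = False
except queue.Full:
    overflowed = True
```

An unbounded queue (maxsize=0 in this module) would accept "m3" immediately, matching case 3 above.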
Chapter 4: Threads
• Overview
• Multithreading Models
• Threading Issues
• Pthreads
• Windows XP Threads
• Linux Threads
• Java Threads
OS by JeevanandamJ, CSE @HKBKCE
Single and Multithreaded Processes
Benefits
• Responsiveness
• Resource Sharing
• Economy
• Utilization of MP Architectures
User and Kernel Threads
• User threads – thread management done by a user-level threads library
• Kernel threads – threads directly supported by the kernel
Multithreading Models
Mapping user threads to kernel threads:
• Many-to-One
• One-to-One
• Many-to-Many
Many-to-One
• Many user-level threads mapped to a single kernel thread
• Examples:
– Solaris Green Threads
– GNU Portable Threads
Many-to-One & Many-to-Many Models
Many-to-Many Model
• Allows many user-level threads to be mapped to many kernel threads
• Allows the operating system to create a sufficient number of kernel threads
• Examples:
– Solaris prior to version 9
– Windows NT/2000 with the ThreadFiber package
One-to-One
• Each user-level thread maps to a kernel thread
• Examples
– Windows NT/XP/2000, Linux, Solaris 9 and later
Two-level Model
• Similar to M:M, except that it allows a user thread to be bound to a kernel thread
• Examples
– IRIX,
– HP-UX,
– Tru64 UNIX,
– Solaris 8 & earlier
Java Threads
• Java threads are managed by the JVM
• Java threads may be created by:
– Implementing the Runnable interface
Threading Issues
• Semantics of fork() and exec() system calls
• Thread cancellation
• Signal handling
• Thread pools
• Thread specific data
• Scheduler activations
• Does fork() duplicate only the calling thread or all threads?
• Thread cancellation
– Terminating a thread before it has finished
– Two general approaches:
• Asynchronous cancellation terminates the target thread immediately
• Deferred cancellation allows the target thread to periodically check if it should be cancelled
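Deferred cancellation can be sketched with a flag the target thread polls; threading.Event below is an illustrative stand-in for a cancellation point, not part of the slides.

```python
import threading
import time

cancel = threading.Event()   # the cancellation request flag
progress = []

def worker():
    # The target thread checks the flag at each cancellation point
    while not cancel.is_set():
        progress.append("step")
        time.sleep(0.01)
    progress.append("cleaned up")   # safe, orderly exit once cancelled

t = threading.Thread(target=worker)
t.start()
cancel.set()                 # request deferred cancellation
t.join()
```

Unlike asynchronous cancellation, the thread is never killed mid-operation: it exits only at a point it chose to check the flag, so it can release resources first.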
• Signal Handling
– Signals are used in UNIX systems to notify a process that a particular event has occurred
– A signal handler is used to process signals
1. Signal is generated by a particular event
2. Signal is delivered to a process
3. Signal is handled
• Options:
– Deliver the signal to the thread to which the signal applies
– Deliver the signal to every thread in the process
– Deliver the signal to certain threads in the process
– Assign a specific thread to receive all signals for the process
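A minimal signal-handling sketch using Python's signal module (POSIX; in CPython, handlers always run in the main thread, an instance of the "deliver to a specific thread" option). The handler function and the choice of SIGUSR1 are illustrative.

```python
import os
import signal

received = []

def handler(signum, frame):
    # Step 3: the signal is handled
    received.append(signum)

# Install the handler, then generate and deliver the signal to this process
signal.signal(signal.SIGUSR1, handler)   # register handler
os.kill(os.getpid(), signal.SIGUSR1)     # steps 1-2: generate and deliver
```
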
Thread Pools
• Create a number of threads in a pool where they await work
• Advantages:
– Usually slightly faster to service a request with an existing thread than to create a new thread
– Allows the number of threads in the application(s) to be bound to the size of the pool
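A thread-pool sketch using Python's concurrent.futures; the pool size of 4 and the handle_request function are illustrative assumptions.

```python
from concurrent.futures import ThreadPoolExecutor

def handle_request(n):
    """Stand-in for servicing one request."""
    return n * n

# Four worker threads service eight requests: no per-request thread
# creation, and the thread count is bounded by the pool size.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(handle_request, range(8)))
```

This mirrors both advantages above: the workers already exist when requests arrive, and max_workers bounds the number of threads regardless of load.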
Thread Pools (Cont.)
• Java provides 3 thread pool architectures:
1. Single thread executor – pool of size 1
2. Fixed thread executor – pool of fixed size
3. Cached thread pool – pool of unbounded size
Thread Specific Data
• Allows each thread to have its own copy of data
• Useful when you do not have control over the thread creation process (i.e., when using a thread pool)
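Thread-specific data can be sketched with Python's threading.local(); the object is shared, but each thread sees only its own copy of the attribute. All names are illustrative.

```python
import threading

tls = threading.local()   # one object, per-thread storage
seen = {}

def worker(name):
    tls.value = name          # private to this thread
    seen[name] = tls.value    # each thread reads back its own copy

threads = [threading.Thread(target=worker, args=(f"t{i}",)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```
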
Scheduler Activations
• Both M:M and two-level models require communication to maintain the appropriate number of kernel threads allocated to the application
• Scheduler activations provide upcalls – a communication mechanism from the kernel to the thread library
• This communication allows an application to maintain the correct number of kernel threads
Chapter 5: CPU Scheduling
• Basic Concepts
• Scheduling Criteria
• Scheduling Algorithms
• Multiple-Processor Scheduling
• Thread Scheduling
Basic Concepts
• Maximum CPU utilization is obtained with multiprogramming
– CPU–I/O Burst Cycle
– CPU Scheduler
– Preemptive scheduling
– Dispatcher
CPU–I/O Burst Cycle
– Process execution consists of a cycle of CPU execution and I/O wait
Alternating Sequence of CPU and I/O Bursts
Histogram of CPU-burst Times
CPU Scheduler
• Selects from among the processes in memory that are ready to execute, and allocates the CPU to one of them
Preemptive Scheduling
• CPU scheduling decisions may take place when a process:
1. Switches from running to waiting state
2. Switches from running to ready state
3. Switches from waiting to ready state
4. Terminates
• Scheduling under 1 and 4 is nonpreemptive
• All other scheduling is preemptive
Dispatcher
• It gives control of the CPU to the process selected by the short-term scheduler; this involves:
– switching context
– switching to user mode
– jumping to the proper location in the user program to restart that program
• Dispatch latency – time it takes for the dispatcher to stop one process and start another running
Scheduling Criteria
• CPU utilization – keep the CPU as busy as possible
• Throughput – # of processes that complete their execution per time unit
• Turnaround time – amount of time to execute a particular process
• Waiting time – amount of time a process has been waiting in the ready queue
• Response time – amount of time from when a request was submitted until the first response is produced, not output (for time-sharing environments)
Optimization Criteria
• Max CPU utilization
• Max throughput
• Min turnaround time
• Min waiting time
• Min response time
Scheduling Algorithms
1. First-Come, First-Served Scheduling (FCFS)
2. Shortest Job First Scheduling (SJF)
3. Priority Scheduling
4. Round Robin Scheduling (RR)
5. Multilevel queue Scheduling
6. Multilevel Feedback queue Scheduling
FCFS Scheduling
• Managed by a FIFO queue
Process Burst Time
P1 24
P2 3
P3 3
• Suppose that the processes arrive in the order: P1, P2, P3. The Gantt chart for the schedule is:
• Waiting time for P1 = 0; P2 = 24; P3 = 27
• Average waiting time: (0 + 24 + 27)/3 = 17
P1 (0–24) | P2 (24–27) | P3 (27–30)
Suppose that the processes arrive in the order: P2, P3, P1
• The Gantt chart for the schedule is:
• Waiting time for P1 = 6; P2 = 0; P3 = 3
• Average waiting time: (6 + 0 + 3)/3 = 3
• Much better than previous case
• Convoy effect – short processes wait behind a long process
P2 (0–3) | P3 (3–6) | P1 (6–30)
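The two FCFS orderings above can be checked with a short sketch (the helper name is illustrative): each process waits exactly as long as the total burst time of the processes ahead of it.

```python
def fcfs_waiting_times(bursts):
    """Waiting time of each process under FCFS, all arriving at time 0."""
    waits, elapsed = [], 0
    for b in bursts:
        waits.append(elapsed)   # time spent waiting for everyone ahead
        elapsed += b
    return waits

# Order P1, P2, P3 (bursts 24, 3, 3) vs. order P2, P3, P1
avg1 = sum(fcfs_waiting_times([24, 3, 3])) / 3   # 17.0
avg2 = sum(fcfs_waiting_times([3, 3, 24])) / 3   # 3.0
```

The gap between 17 and 3 is the convoy effect in miniature: putting the long burst first makes every short process wait for it.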
SJF Scheduling
• Associate with each process the length of its next CPU burst. Use these lengths to schedule the process with the shortest time
• Two schemes:
– Nonpreemptive – once the CPU is given to the process, it cannot be preempted until it completes its CPU burst
– Preemptive – if a new process arrives with CPU burst length less than the remaining time of the currently executing process, preempt. This scheme is known as Shortest-Remaining-Time-First (SRTF)
• SJF is optimal – gives minimum average waiting time for a given set of processes
Example of Non-Preemptive SJF
Process  Arrival Time  Burst Time
P1 0.0 7
P2 2.0 4
P3 4.0 1
P4 5.0 4
• SJF (non-preemptive)
• Average waiting time = (0 + 6 + 3 + 7)/4 = 4
P1 (0–7) | P3 (7–8) | P2 (8–12) | P4 (12–16)
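The non-preemptive schedule above can be reproduced with a small simulation (an illustrative sketch, not a production scheduler): at each completion, run the arrived process with the shortest burst.

```python
def sjf_nonpreemptive(procs):
    """procs: list of (name, arrival, burst). Returns {name: waiting_time}."""
    remaining = list(procs)
    time, waits = 0, {}
    while remaining:
        ready = [p for p in remaining if p[1] <= time]
        if not ready:
            time = min(p[1] for p in remaining)  # idle until next arrival
            continue
        name, arrival, burst = min(ready, key=lambda p: p[2])  # shortest burst
        waits[name] = time - arrival   # waited from arrival to dispatch
        time += burst                  # runs to completion (non-preemptive)
        remaining.remove((name, arrival, burst))
    return waits

w = sjf_nonpreemptive([("P1", 0, 7), ("P2", 2, 4), ("P3", 4, 1), ("P4", 5, 4)])
avg = sum(w.values()) / 4
```
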
Example of Preemptive SJF
Process  Arrival Time  Burst Time
P1 0.0 7
P2 2.0 4
P3 4.0 1
P4 5.0 4
• SJF (preemptive)
• Average waiting time = (9 + 1 + 0 +2)/4 = 3
P1 (0–2) | P2 (2–4) | P3 (4–5) | P2 (5–7) | P4 (7–11) | P1 (11–16)
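The SRTF schedule above can be checked by simulating one time unit at a time (illustrative sketch): always run the arrived process with the least remaining time.

```python
def srtf(procs):
    """procs: list of (name, arrival, burst). Returns {name: waiting_time}."""
    remaining = {name: burst for name, _, burst in procs}
    arrival = {name: a for name, a, _ in procs}
    bursts = {name: b for name, _, b in procs}
    finish, time = {}, 0
    while remaining:
        ready = [n for n in remaining if arrival[n] <= time]
        if not ready:
            time += 1
            continue
        n = min(ready, key=lambda x: remaining[x])  # least remaining time
        remaining[n] -= 1                           # run one time unit
        time += 1
        if remaining[n] == 0:
            finish[n] = time
            del remaining[n]
    # waiting time = turnaround time - burst time
    return {n: finish[n] - arrival[n] - bursts[n] for n in finish}

w = srtf([("P1", 0, 7), ("P2", 2, 4), ("P3", 4, 1), ("P4", 5, 4)])
avg = sum(w.values()) / 4
```
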
Determining Length of Next CPU Burst
• Can only estimate the length
• Can be done by using the lengths of previous CPU bursts, using exponential averaging
1. tₙ = actual length of the nth CPU burst
2. τₙ₊₁ = predicted value for the next CPU burst
3. α, 0 ≤ α ≤ 1
4. Define: τₙ₊₁ = α tₙ + (1 − α) τₙ
Prediction of the Length of the Next CPU Burst
Examples of Exponential Averaging
• α = 0
– τₙ₊₁ = τₙ
– Recent history does not count
• α = 1
– τₙ₊₁ = tₙ
– Only the actual last CPU burst counts
• If we expand the formula, we get:
τₙ₊₁ = α tₙ + (1 − α) α tₙ₋₁ + … + (1 − α)ʲ α tₙ₋ⱼ + … + (1 − α)ⁿ⁺¹ τ₀
• Since both α and (1 − α) are less than or equal to 1, each successive term has less weight than its predecessor
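The recurrence can be computed directly; the burst sequence below and the initial guess τ₀ = 10 with α = 1/2 are illustrative values, not taken from this text.

```python
def predict_bursts(actual_bursts, alpha=0.5, tau0=10.0):
    """Apply tau_{n+1} = alpha*t_n + (1-alpha)*tau_n; return all predictions."""
    taus = [tau0]
    for t in actual_bursts:
        taus.append(alpha * t + (1 - alpha) * taus[-1])
    return taus

# Predictions track the actual bursts with exponentially decaying memory
taus = predict_bursts([6, 4, 6, 4, 13, 13, 13], alpha=0.5, tau0=10.0)
```

With α = 1/2 recent and past history are weighted equally, so each prediction is the average of the last burst and the previous estimate.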
Priority Scheduling
• A priority number (integer) is associated with each process. The CPU is allocated to the process with the highest priority (smallest integer ≡ highest priority)
– Preemptive
– Nonpreemptive
• SJF is priority scheduling where priority is the predicted next CPU burst time
• Problem: Starvation – low-priority processes may never execute
• Solution: Aging – as time progresses, increase the priority of the process
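Aging can be sketched as periodically decreasing a waiting process's priority number (smaller number = higher priority); the step size, tick counts, and the floor of 0 are illustrative assumptions.

```python
def age(priority, ticks, step=1):
    """Priority number of a waiting process after `ticks` of aging,
    bounded below by 0 (the highest priority)."""
    return max(0, priority - step * ticks)

# A process stuck at low priority 127 climbs toward the top as it waits,
# so it cannot starve indefinitely.
climbing = [age(127, t) for t in (0, 60, 127, 200)]
```
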
Round Robin (RR)
• Each process gets a small unit of CPU time (time quantum), usually 10–100 milliseconds. After this time has elapsed, the process is preempted and added to the end of the ready queue.
• If there are n processes in the ready queue and the time quantum is q, then each process gets 1/n of the CPU time in chunks of at most q time units at once. No process waits more than (n−1)q time units.
• Performance
– q large ⇒ FIFO
– q small ⇒ q must be large with respect to context-switch time, otherwise overhead is too high
Example of RR with Time Quantum = 20
Process Burst Time
P1 53
P2 17
P3 68
P4 24
• The Gantt chart is:
• Typically, higher average turnaround than SJF, but better response
P1 (0–20) | P2 (20–37) | P3 (37–57) | P4 (57–77) | P1 (77–97) | P3 (97–117) | P4 (117–121) | P1 (121–134) | P3 (134–154) | P3 (154–162)
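The Gantt chart above can be reproduced with a short round-robin simulation (illustrative sketch; all processes are assumed to arrive at time 0, as in the example).

```python
from collections import deque

def round_robin(procs, quantum):
    """procs: list of (name, burst), all arriving at time 0.
    Returns the schedule as (name, start, end) slices."""
    ready = deque(procs)
    time, slices = 0, []
    while ready:
        name, rem = ready.popleft()
        run = min(quantum, rem)          # run for at most one quantum
        slices.append((name, time, time + run))
        time += run
        if rem > run:
            ready.append((name, rem - run))  # preempted: back of the queue
    return slices

slices = round_robin([("P1", 53), ("P2", 17), ("P3", 68), ("P4", 24)], 20)
```
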
Time Quantum and Context Switch Time
Turnaround Time Varies With The Time Quantum
Multilevel Queue
• Ready queue is partitioned into separate queues:
– foreground (interactive)
– background (batch)
• Each queue has its own scheduling algorithm
– foreground – RR
– background – FCFS
• Scheduling must be done between the queues
– Fixed-priority scheduling (i.e., serve all from foreground, then from background). Possibility of starvation.
– Time slice – each queue gets a certain amount of CPU time which it can schedule amongst its processes; e.g., 80% to foreground in RR, 20% to background in FCFS
Multilevel Queue Scheduling
Multilevel Feedback Queue
• A process can move between the various queues; aging can be implemented this way
• Multilevel-feedback-queue scheduler is defined by the following parameters:
– number of queues
– scheduling algorithm for each queue
– method used to determine when to upgrade a process
– method used to determine when to demote a process
– method used to determine which queue a process will enter when that process needs service
Example of Multilevel Feedback Queue
• Three queues:
– Q0 – RR with time quantum 8 milliseconds
– Q1 – RR with time quantum 16 milliseconds
– Q2 – FCFS
• Scheduling
– A new job enters queue Q0, which is served FCFS. When it gains the CPU, the job receives 8 milliseconds. If it does not finish in 8 milliseconds, the job is moved to queue Q1.
– At Q1 the job is again served FCFS and receives 16 additional milliseconds. If it still does not complete, it is preempted and moved to queue Q2.
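A single job's path through the three queues above can be traced with a tiny sketch (the 30 ms burst and the helper name are illustrative; interactions with other jobs are ignored).

```python
def mlfq_trace(burst, quanta=(8, 16)):
    """Return [(queue_name, time_used)] for one job with the given burst,
    demoted one level each time it exhausts a queue's quantum."""
    trace, remaining = [], burst
    for i, q in enumerate(quanta):
        if remaining <= 0:
            break
        used = min(q, remaining)
        trace.append((f"Q{i}", used))
        remaining -= used          # unfinished work moves down a level
    if remaining > 0:
        trace.append(("Q2", remaining))  # FCFS queue runs it to completion
    return trace

# A 30 ms job: 8 ms in Q0, 16 ms in Q1, remaining 6 ms in Q2
trace = mlfq_trace(30)
```
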
Multilevel Feedback Queues
Multiple Processor Scheduling
• CPU scheduling is more complex when multiple CPUs are available
• Homogeneous processors within a multiprocessor
• Load sharing
• Approaches:
– Asymmetric multiprocessing – only one processor accesses the system data structures, alleviating the need for data sharing (master processor concept)
– Symmetric multiprocessing (SMP) – each processor has its own queue and does its own scheduling
• Issues:
– Processor affinity
– Load balancing
– Symmetric multithreading
Processor Affinity
• Migrating a process from one processor to another is an issue: repopulating the cache is costly if it happens
• Types:
– Soft affinity – the process may migrate to another processor
– Hard affinity – no migration; the processor is fixed
Load Balancing
• Only SMP architectures need this
• Approaches:
– Push migration
– Pull migration
• Load balancing counteracts processor affinity
Symmetric Multithreading
• Alternative strategy: provide multiple logical rather than physical processors
• Also known as hyper-threading technology on Intel processors
• In SMT, each logical processor has its own architecture state
• SMT is a feature provided by hardware, not software
• Optimization: the scheduler should first allot processes to separate physical processors rather than to logical processors on the same physical processor
Thread Scheduling
• User threads are managed by the thread library; the kernel is unaware of them.
• To run on a CPU, user-level threads have to be mapped to associated kernel-level threads.
• Mapping may be done using a lightweight process (LWP)
• Contention scope:
– The thread library scheduling user-level threads onto an available LWP is known as process-contention scope (PCS)
– To decide which kernel thread to schedule onto a CPU, the system uses system-contention scope (SCS)
– PCS is done according to priority; preemption is allowed in PCS
Pthread scheduling