June 6, 2002 Serguei A. Mokhov, [email protected]
Salsa Theory Debrief 2
COMP346 - Operating Systems
Tutorial 6
Edition 1.1, June 19, 2002
Topics
• Device Drivers
• Mutual Exclusion
• Some Deadlock
• Processes
• fork() and exec() System Calls
• Processes vs. Threads, Take II
• Message Passing vs. System Calls
Device Drivers
1. Provide a common interface (a layer of abstraction)
2. Execute privileged instructions
3. Part of the OS (Why? :-))
4. Have two halves:
• Upper half – exposes 1. to the rest of the OS
• Lower half:
– Provides the device-specific implementation of 1.
– Interrupt handling
Device Drivers (2)
• Maps device-independent I/O functions to device-dependent ones:
– Upper half: open(), close(), read(), write(), …; works at the CPU's speed
– Lower half: interrupt handling, dev_open(), dev_close(), …; operates at the device's speed
– Buffers sync data between the two halves
Mutual Exclusion
• Possible implementations:
– General semaphores on top of binary ones
– Using interrupts
• Drawbacks:
– ME violation
– Starvation
Mutual Exclusion
• Binary semaphores were implemented as separate entities for efficiency reasons [1].
• General semaphores were implemented using binary ones.
Mutual Exclusion
• What are the problems with the above?
– Consider 2 V() and 2 P() operations executing concurrently: PVPV

P() {
    wait(mutex);
    count = count - 1;
    if (count < 0) {
        signal(mutex);
        wait(delay);
    } else {
        signal(mutex);
    }
}

V() {
    wait(mutex);
    count = count + 1;
    if (count <= 0) {
        signal(delay);
    }
    signal(mutex);
}
Mutual Exclusion
• One P() ran and got interrupted.
• Then one V() ran to completion.
• Then the second P() did as the first one.
• Then the second V() ran to completion.
• Then… ? Imagine 100 P()'s and V()'s…

(P()/V() code as on the previous slide)
Mutual Exclusion
• To avoid the possible problem of the previous solution, we introduce interrupt operations (cli/sti).
• Looks OK, everybody is happy. BUT… what about multiprocessor systems?

P() {
    cli;
    count = count - 1;
    if (count < 0) {
        wait(delay);
    }
    sti;
}

V() {
    cli;
    count = count + 1;
    if (count <= 0) {
        signal(delay);
    }
    sti;
}
Some Scheduling
• The scheduler places processes blocked on a semaphore in a queue.
• Semaphore queues may be handled using:
– FIFO
– Stack (LIFO)
• What are the fairness and starvation properties of each method?
Memory Stuff
• Tutorials 7 and 8 from Paul and Tony
• Locality of Reference Model
– Keep the pages for a given scope of code (e.g. a function), called a locality, with all the instructions and the variables (local or global) used by those instructions, in memory to avoid thrashing.
– There can be several localities.
– Localities change over time and can overlap.
Memory Stuff
• Locality – self-referencing code (loops):
– Prof. Aiman Hanna: "Locality in this context just means that the process is referencing the same piece of code (that is why the page that is used will be used many times). The most obvious way to have code locality are loops, since the executed lines may go again and again. Since these lines belong to the same page(s), these pages will be kept (not considered for replacement). This is why LFU will behave best at that locality points."
Deadlock
• A system:
– Three concurrent processes
– Five resources of the same type
• Each process needs to acquire three resources to terminate.
• Upon termination a process releases all resources back to the system.
• Is there a possibility of a deadlock?
Processes Again
• fork() and exec() – Review COMP229 slides from the past term.
Threads vs. Processes, Take II
• Do threads really "speed up" a process' execution?
– Define "speed up" and the kind of threads (user, kernel, LWP) you're talking about.
– Consider the task of the running program. Does it exploit parallelism?
Threads vs. Processes, Take II
• In which circumstances is one better off with processes rather than threads?
• Answer this:
– What's your task and how parallel is it?
– Robustness? (e.g. the PostgreSQL RDBMS) What if a thread "crashes"?
– Complexity of implementation?
– Synchronization?
– Is a "speed up" really achieved using threads?
Message Passing vs. System Calls
• Where does one outperform the other?
• A good question, because they do different things: message passing is an IPC facility, while system calls have a wider spectrum of use (a subset may be used in IPC, obviously).
• Consider this (assuming IPC):
– Message-passing services run in user space, outside of the kernel.
– System calls execute in the kernel on behalf of the process.
– Messages have limitations on size.
– System calls imply a mode switch and more overhead.
– Messages are not necessarily received immediately.
• Now, looking at the above, tell: which one is more secure? Which one is more flexible? What are the space/time requirements of each?