JOEL CRICHLOW, DISTRIBUTED SYSTEMS: COMPUTING OVER NETWORKS, PHI
ACCESSING DISTRIBUTED RESOURCES
Page 1:

JOEL CRICHLOW
DISTRIBUTED SYSTEMS: COMPUTING OVER NETWORKS, PHI

ACCESSING DISTRIBUTED RESOURCES

Page 2:

AREAS

• Communication
• Concurrency
• Time
• Failure
• Transactions

Page 3:

COMMUNICATION

• Remote Procedure Call (RPC)
• Remote Method Invocation (RMI)
• Message Passing
  • MPI
• Sockets and Streams

Page 4:

REMOTE PROCEDURE CALL (RPC)

RPC with five modules: Client, Client-stub, Server, Server-stub and Communications Package

• Synchronous
• Parameter marshalling

[Diagram: on the Client Machine, the Client calls the Client stub, which hands the call to a Com. package; on the Server Machine, a Com. package delivers the call to the Server stub, which invokes the Server]
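The call sequence above can be sketched with a hand-rolled client stub that marshals its parameters into a byte stream, ships them to the server over TCP and blocks for the reply. This is a minimal illustrative sketch in Java, not Sun RPC; the AddClientStub name, port 5000 and the two-int protocol are assumptions made up for the example.

import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.net.Socket;

// Hypothetical client stub: marshals two ints, sends them to the server,
// then blocks (synchronous call) until the reply is unmarshalled.
public class AddClientStub {
    private final String host;
    private final int port;

    public AddClientStub(String host, int port) {
        this.host = host;
        this.port = port;
    }

    public int add(int a, int b) throws IOException {
        try (Socket s = new Socket(host, port);                    // com. package: open a channel
             DataOutputStream out = new DataOutputStream(s.getOutputStream());
             DataInputStream in = new DataInputStream(s.getInputStream())) {
            out.writeInt(a);                                        // parameter marshalling
            out.writeInt(b);
            out.flush();
            return in.readInt();                                    // wait for and unmarshal the result
        }
    }

    public static void main(String[] args) throws IOException {
        // Assumes a matching server stub is listening on localhost:5000.
        System.out.println(new AddClientStub("localhost", 5000).add(2, 3));
    }
}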

Page 5:

SUN RPC

• It uses UDP and TCP/IP for packet exchange between client and server
• Remote procedures are owned by servers
• These procedures are expressed to the clients in an Interface Definition Language (IDL)
• The IDL in Sun RPC is called XDR (eXternal Data Representation)

Page 6:

REMOTE METHOD INVOCATION (RMI)

• Object-oriented equivalent of RPC
• Invocation of remote objects
• Allows the passing of object references as parameters
• Java RMI
  • From one JVM to another JVM
  • Remote objects must be declared
  • Remote objects registered with RMI registry
  • RMI passes a remote stub as a local representative of the remote object
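The Java RMI points above can be sketched as follows; the TimeService interface, the registry port and the bind name are made up for illustration, and a real deployment would split interface, server and client across JVMs.

import java.rmi.Remote;
import java.rmi.RemoteException;
import java.rmi.registry.LocateRegistry;
import java.rmi.registry.Registry;
import java.rmi.server.UnicastRemoteObject;

// Remote objects must be declared: the remote interface extends Remote
// and every remote method declares RemoteException.
interface TimeService extends Remote {
    long currentMillis() throws RemoteException;
}

public class TimeServer implements TimeService {
    public long currentMillis() { return System.currentTimeMillis(); }

    public static void main(String[] args) throws Exception {
        // Export the object; RMI generates the remote stub passed to clients.
        TimeService stub = (TimeService) UnicastRemoteObject.exportObject(new TimeServer(), 0);
        // Register the remote object with the RMI registry under a name.
        Registry registry = LocateRegistry.createRegistry(1099);
        registry.rebind("TimeService", stub);
        System.out.println("TimeService bound");
    }
}

// A client (typically in another JVM) looks up the stub and invokes it
// as if it were a local object:
//   TimeService t = (TimeService) LocateRegistry.getRegistry("serverhost", 1099).lookup("TimeService");
//   System.out.println(t.currentMillis());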

Page 7:

MESSAGE PASSING

[Diagram: processes a and b exchange messages through Port 1 and Port 2, with the kernel buffering and transferring the message blocks]

In a message passing scheme, all communication is handled as message blocks which the OS kernel undertakes to transfer from a sender process to a receiver process. The kernel may store the message in intermediary buffers before delivering it to the receiver process. The sender process issues a ‘send message’ command in which it identifies the receiver and the message. The receiver usually must issue an explicit ‘receive message’ command in which it identifies a sender and names a data area in its address space for the deposit of the message. Often the messages are sent to and received from established ports, which must be used in the addressing.
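As a minimal local analogy of this scheme (not a real kernel interface), a port can be modelled as a bounded queue owned by the ‘kernel’: send deposits a message block addressed to a port, and receive blocks until a message arrives. The ToyKernel class and message layout are invented for illustration.

import java.util.Map;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ConcurrentHashMap;

// A toy model of kernel-mediated message passing: each port is a bounded
// buffer held by the "kernel"; send and receive name the port explicitly.
public class ToyKernel {
    private final Map<Integer, BlockingQueue<byte[]>> ports = new ConcurrentHashMap<>();

    public void createPort(int portNo) {
        ports.put(portNo, new ArrayBlockingQueue<>(16));    // intermediary buffers
    }

    public void send(int portNo, byte[] message) throws InterruptedException {
        ports.get(portNo).put(message);                     // 'send message' command
    }

    public byte[] receive(int portNo) throws InterruptedException {
        return ports.get(portNo).take();                    // explicit 'receive message'
    }

    public static void main(String[] args) throws Exception {
        ToyKernel kernel = new ToyKernel();
        kernel.createPort(1);
        new Thread(() -> {
            try { kernel.send(1, "hello".getBytes()); } catch (InterruptedException ignored) {}
        }).start();
        System.out.println(new String(kernel.receive(1)));
    }
}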

Page 8:

MESSAGE PASSING

• In order to send a message to another process over the network, the message goes first to the port of the local ‘network server’ to be passed to the remote ‘network server’.

• The remote ‘network server’ then sends it to the port of the receiving process from where it is retrieved when the remote process issues the appropriate ‘receive message’ command.

[Diagram: a process on each machine communicates through its local kernel with a ‘network server’ process; the two network servers relay the message across the network to the receiving process's port]

Page 9:

MPI

• Point-to-point communications
  • One process can send a message containing typed data to another process
  • Communicating processes register with a ‘communicator’ in MPI
• Collective communications
  • Transmits data among all the processes registered with a communicator
  • Includes synchronization, broadcast, gather and scatter operations
• Groups and contexts
  • Provide support for process group and group view
• MPI allows both blocking and non-blocking message passing

Page 10:

SOCKETS AND STREAMS

• Interfacing with Network Protocol Suite
• Issues that the interface must address
  • How to associate a process or program with a communication channel?
  • How to associate a process or program with a connection?
  • How to associate a message with a communication channel?
  • How to associate a message with a connection?
  • What data structure should represent a connection?
  • What data structure should represent a message?

Page 11:

SOCKETS AND STREAMS

Socket as object
  Attributes: Host-address, Port-no., Queue-length
  Methods: Socket Constructor, Bind, Connect, Listen, Accept, Send, Receive, Close
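A minimal sketch of how these attributes and methods map onto stream sockets in Java (the port number and the echoed message are arbitrary): the server binds, listens with a queue length and accepts; the client connects; both sides send and receive over the socket's streams.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;

// Server side: bind to a port, listen with a backlog (queue length), accept.
public class EchoServer {
    public static void main(String[] args) throws Exception {
        try (ServerSocket server = new ServerSocket(6000, 5)) {    // bind + listen, queue-length 5
            while (true) {
                try (Socket conn = server.accept();                // accept a connection
                     BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()));
                     PrintWriter out = new PrintWriter(conn.getOutputStream(), true)) {
                    out.println(in.readLine());                    // receive, then send back
                }
            }
        }
    }
}

// Client side (separate program): connect, send, receive.
//   try (Socket s = new Socket("serverhost", 6000);
//        PrintWriter out = new PrintWriter(s.getOutputStream(), true);
//        BufferedReader in = new BufferedReader(new InputStreamReader(s.getInputStream()))) {
//       out.println("hello");
//       System.out.println(in.readLine());
//   }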

Page 12:

SOCKETS AND STREAMS

A datagram object
  Attributes: message-buffer, message-length, address, port-no
  Methods: getAddress, getData, getLength, getPort, setAddress, setData, setLength, setPort
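These attributes and methods correspond closely to java.net.DatagramPacket. A minimal sketch of sending and receiving one datagram (port 7000 and the payload are made up for the example):

import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;

public class DatagramDemo {
    public static void main(String[] args) throws Exception {
        try (DatagramSocket receiver = new DatagramSocket(7000);
             DatagramSocket sender = new DatagramSocket()) {

            // Sender: the packet carries the message-buffer, length, address and port-no.
            byte[] payload = "ping".getBytes();
            DatagramPacket out = new DatagramPacket(payload, payload.length,
                    InetAddress.getLocalHost(), 7000);
            sender.send(out);

            // Receiver: supply a buffer; getAddress/getPort/getLength/getData are
            // filled in from the datagram that arrives.
            byte[] buffer = new byte[512];
            DatagramPacket in = new DatagramPacket(buffer, buffer.length);
            receiver.receive(in);
            System.out.println(new String(in.getData(), 0, in.getLength())
                    + " from " + in.getAddress() + ":" + in.getPort());
        }
    }
}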

Page 13:

CONCURRENCY

• Critical Sections
• Distributed Deadlock
• Timestamps
• Two-Phase Lock
• Replica Control
  • Pessimistic
  • Optimistic

Page 14:

CRITICAL SECTIONS

• The hardware test-and-set instruction allows a non-interruptible testing and setting of a lock variable

• Dijkstra’s wait and signal on a semaphore access the semaphore in a disable interrupt sequence

• Dekker’s and Peterson’s algorithms use ‘interested’ and ‘turn’ variables to control access to the critical section

• Monitors encapsulate the critical section in an identifiable program structure.
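A minimal sketch of a test-and-set style spinlock built on Java's AtomicBoolean, where compareAndSet plays the role of the non-interruptible test-and-set instruction; the class and method names are invented for illustration.

import java.util.concurrent.atomic.AtomicBoolean;

// Spinlock built on an atomic test-and-set: compareAndSet(false, true)
// tests the lock variable and sets it in one indivisible step.
public class TestAndSetLock {
    private final AtomicBoolean locked = new AtomicBoolean(false);

    public void lock() {
        while (!locked.compareAndSet(false, true)) {
            Thread.onSpinWait();          // busy-wait until the lock is free
        }
    }

    public void unlock() {
        locked.set(false);
    }

    public static void main(String[] args) throws InterruptedException {
        TestAndSetLock lock = new TestAndSetLock();
        int[] counter = {0};
        Runnable worker = () -> {
            for (int i = 0; i < 100_000; i++) {
                lock.lock();
                counter[0]++;             // critical section
                lock.unlock();
            }
        };
        Thread t1 = new Thread(worker), t2 = new Thread(worker);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(counter[0]);   // 200000 with mutual exclusion
    }
}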

Page 15:

CRITICAL SECTIONS

• Brinch Hansen (1978) proposed the use of guarded regions in his DP (Distributed Processes) language
• DP supported:
  • a fixed number of concurrent processes that are started simultaneously and exist forever, with each process accessing its own variables
  • processes can communicate with each other by calling common procedures defined within processes
  • processes are synchronized by means of guarded regions.

Page 16:

DISTRIBUTED DEADLOCK

[Diagram: a distributed wait-for graph spanning Node 0, Node 1 and Node 2, with processes p0–p3 and resources r0–r3; edges of the form ‘p waits for r’ and ‘r is held by p’ form a cycle that crosses node boundaries, i.e. a distributed deadlock]
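Deadlock corresponds to a cycle in the global wait-for graph. As an illustrative sketch (not an algorithm from the book), the graph can be assembled from per-node edges and checked for a cycle with a depth-first search:

import java.util.ArrayList;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Detect a cycle in a wait-for graph: an edge p -> r means "p waits for r",
// and r -> p means "r is held by p".
public class WaitForGraph {
    private final Map<String, List<String>> edges = new HashMap<>();

    public void addEdge(String from, String to) {
        edges.computeIfAbsent(from, k -> new ArrayList<>()).add(to);
    }

    public boolean hasCycle() {
        Set<String> visited = new HashSet<>(), onStack = new HashSet<>();
        for (String v : edges.keySet()) {
            if (dfs(v, visited, onStack)) return true;
        }
        return false;
    }

    private boolean dfs(String v, Set<String> visited, Set<String> onStack) {
        if (onStack.contains(v)) return true;      // back edge: a deadlock cycle
        if (!visited.add(v)) return false;
        onStack.add(v);
        for (String w : edges.getOrDefault(v, List.of())) {
            if (dfs(w, visited, onStack)) return true;
        }
        onStack.remove(v);
        return false;
    }

    public static void main(String[] args) {
        WaitForGraph g = new WaitForGraph();
        g.addEdge("p0", "r1"); g.addEdge("r1", "p1");   // p0 waits for r1, held by p1
        g.addEdge("p1", "r0"); g.addEdge("r0", "p0");   // p1 waits for r0, held by p0
        System.out.println(g.hasCycle());                // true: distributed deadlock
    }
}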

Page 17:

CONCURRENCY

• Timestamps
  • Time stamping is a mechanism for enforcing ordered access to shared resources
• Two-Phase Lock
  • In the first phase a process must acquire locks on all the required resources
  • In the second phase the locks are released
  • Centralized Lock Controller
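Two-phase locking in this sense can be sketched as a growing phase that acquires every required lock (here in a fixed global order, to avoid deadlock) followed by a shrinking phase that releases them; the lock table and resource names are invented for illustration.

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantLock;

// Two-phase locking: acquire all locks first (growing phase), do the work,
// then release them all (shrinking phase).
public class TwoPhaseLocking {
    private static final Map<String, ReentrantLock> LOCKS = new ConcurrentHashMap<>();

    public static void runTransaction(List<String> resources, Runnable work) {
        List<String> ordered = new ArrayList<>(resources);
        Collections.sort(ordered);                       // fixed global order avoids deadlock
        // Phase 1 (growing): acquire every lock before touching any resource.
        for (String r : ordered) {
            LOCKS.computeIfAbsent(r, k -> new ReentrantLock()).lock();
        }
        try {
            work.run();
        } finally {
            // Phase 2 (shrinking): release; no new locks are acquired after this point.
            for (String r : ordered) {
                LOCKS.get(r).unlock();
            }
        }
    }

    public static void main(String[] args) {
        runTransaction(List.of("file-x", "file-y"),
                () -> System.out.println("update file-x and file-y atomically"));
    }
}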

Page 18:

CONCURRENCY

• Replica Control
  • Pessimistic, Optimistic
  • Majority Consensus
  • Voting
  • Primary Node

Page 19:

VOTING

Gifford (1979) proposed a voting scheme where each copy i of the replicated object has a number vi of votes and vi can vary across copies. To read that object a transaction must obtain a read quorum of r votes; and to write, a write quorum of w votes such that

V = Σ vi, i = 1 … n,
r + w > V, and
w > V/2

where n is the number of copies of the object.
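A hedged sketch of the quorum conditions as code; the vote assignments and quorum sizes below are illustrative only.

// Gifford-style weighted voting: check that the chosen read and write quorums
// satisfy r + w > V and w > V/2, where V is the total number of votes.
public class QuorumCheck {
    // r + w > V guarantees every read quorum meets every write quorum;
    // w > V/2 guarantees any two write quorums intersect.
    public static boolean validQuorums(int[] votes, int r, int w) {
        int total = 0;                        // V = v1 + ... + vn
        for (int v : votes) total += v;
        return (r + w > total) && (2 * w > total);
    }

    public static void main(String[] args) {
        int[] votes = {1, 1, 1, 1, 1};        // five copies, one vote each (V = 5)
        System.out.println(validQuorums(votes, 2, 4));  // true: r=2, w=4
        System.out.println(validQuorums(votes, 2, 3));  // false: r + w = 5 is not > V
    }
}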

Page 20:

CONCURRENCY

• Optimistic Schemes
  • Typing of Transactions
  • Version Vectors
  • Cost Bounds

Page 21:

TIME

Logical Clocks
• Happened before
  • Within any sequential process it can be observed that an event a happened before an event b
  • If event a is the sending of a message from one process and event b is the receiving of that message by another process, then a happened before b

Page 22:

LOGICAL CLOCKS

• Assign logical time C(a) to any event a, such that if a happened before b, then C(a) < C(b)
• Each process in the distributed system should maintain a simple counter (a logical clock) which must be incremented (by at least one) on every event
• Logical clocks can run at different rates at separate sites
• Any message a must be timestamped with the current time C(a) of the sending process
• On receiving the message, its timestamp C(a) is compared with the current time t of the receiver process
• Since the send event ‘happened before’ the receive event, t must be greater than C(a). If this is not the case, t must be corrected to read at least C(a) + 1

Page 23:

LOGICAL CLOCKS

• In order to support total ordering no two events can have the same time

• The logical clock should tick at least once between events

• Time should be recorded as a concatenated timestamp generated by attaching to the count a decimal point followed by the process number

• For example, the logical clock at process 1 should show time like 0.1, 1.1, 2.1, …; and the logical clock at process 2 should show time like 0.2, 1.2, 2.2, and so on.
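A minimal sketch of such a logical clock, including the process-number suffix used for total ordering; the event names and values are arbitrary.

// Lamport-style logical clock with a process-number suffix for total ordering.
public class LogicalClock {
    private long counter = 0;
    private final int processId;          // appended as ".processId" to break ties

    public LogicalClock(int processId) { this.processId = processId; }

    // Local event or send event: tick the clock and timestamp the event.
    public synchronized String tick() {
        counter++;
        return counter + "." + processId;
    }

    // Receive event: correct the clock so the receive comes after the send.
    public synchronized String onReceive(long senderCounter) {
        counter = Math.max(counter, senderCounter) + 1;
        return counter + "." + processId;
    }

    public static void main(String[] args) {
        LogicalClock p1 = new LogicalClock(1);
        LogicalClock p2 = new LogicalClock(2);
        String sendStamp = p1.tick();                          // e.g. "1.1" at process 1
        System.out.println("send at " + sendStamp);
        System.out.println("receive at " + p2.onReceive(1));   // at least "2.2" at process 2
    }
}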

Page 24:

TIME

• Physical Clocks
• Universal Coordinated Time (UTC)
• Time Server
• Cristian’s algorithm
• The Berkeley algorithm
• Network Time Protocol (NTP)

Page 25:

CRISTIAN’S ALGORITHM

• A node periodically obtains the time tserver from the time server
• The node will then compute what should be the correct time t, where t = tserver + Δt
• Δt is a compensation based on the message transmission time
• If min, the minimum one-way message transmission delay between node and server, is known, and the measured round-trip delay is round, then Δt is in the range [min, round - min]
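A hedged sketch of the adjustment: the time request itself is abstracted behind a hypothetical requestServerTime() call, and the compensation is taken as half of the measured round-trip delay, the midpoint of the [min, round - min] range.

// Cristian's algorithm, sketched: measure the round trip, then set the local
// estimate to tserver plus a compensation of roughly half the round-trip delay.
public class CristianSketch {

    // Hypothetical placeholder: in a real system this would be a UDP/TCP
    // exchange with the time server.
    static long requestServerTime() {
        return System.currentTimeMillis();    // stand-in for the server's reply
    }

    public static void main(String[] args) {
        long t0 = System.nanoTime();
        long tServerMillis = requestServerTime();
        long t1 = System.nanoTime();

        long roundMillis = (t1 - t0) / 1_000_000;     // measured round-trip delay
        long deltaT = roundMillis / 2;                // compensation Δt ≈ round / 2
        long correctedTime = tServerMillis + deltaT;  // t = tserver + Δt

        System.out.println("round = " + roundMillis + " ms, corrected time = " + correctedTime);
    }
}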

Page 26:

THE BERKELEY ALGORITHM

• The time server polls the other nodes periodically to obtain their clock readings

• An estimate of round-trip delay to each node is considered in order to determine the current times shown by each of the polled nodes

• The time server then uses these times and its own clock’s time to compute an average time-of-day, ave.t

• All the clocks in the system must now be synchronized by using ave.t

• The time server estimates a difference dti (positive or negative) by which each node i must correct its time and sends this difference to the appropriate nodes

• Since a faulty clock can have an adverse effect on the computation of the average, the algorithm averages over a subset of readings that differ from each other only by a specified amount
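A sketch of the server-side computation: discard readings that disagree by more than a tolerance (here measured against the server's own clock, a simplification of the pairwise rule above), average what remains including the server's clock, and derive a signed correction for each node. The names and tolerance value are invented for illustration.

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Berkeley-style averaging: the time server averages the readings that pass the
// fault filter, then tells each node how far (and in which direction) to adjust.
public class BerkeleyAverage {
    public static Map<String, Long> corrections(Map<String, Long> readings,
                                                 long serverTime, long toleranceMillis) {
        // Keep only readings close enough to the server's clock (fault filtering).
        List<Long> accepted = new ArrayList<>();
        accepted.add(serverTime);
        for (long t : readings.values()) {
            if (Math.abs(t - serverTime) <= toleranceMillis) accepted.add(t);
        }
        long sum = 0;
        for (long t : accepted) sum += t;
        long aveT = sum / accepted.size();               // ave.t

        // dti = ave.t minus node i's reading (positive or negative).
        Map<String, Long> dts = new HashMap<>();
        for (Map.Entry<String, Long> e : readings.entrySet()) {
            dts.put(e.getKey(), aveT - e.getValue());
        }
        return dts;
    }

    public static void main(String[] args) {
        Map<String, Long> readings = Map.of("node1", 1000L, "node2", 1010L, "node3", 5000L);
        System.out.println(corrections(readings, 1005L, 50L));  // node3 is excluded from the average
    }
}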

Page 27:

NETWORK TIME PROTOCOL - NTP

• Time servers in NTP form a logical hierarchy of top-level primary servers and lower-level secondary servers
• Servers at one level are synchronized from servers at the level above
• Primary servers listen directly to a UTC service
• Statistical techniques are used to compensate for clock drift and message transmission times
• Redundant servers exist to afford increased reliability
• Authentication schemes are employed to establish trusted time servers

Page 28:

FAILURE

• Lost Messages
• Failed Nodes
  • Stateless Node
  • Atomic Update
  • Available Copies
• Partitioning

Page 29:

LOST MESSAGES

• A message is considered lost by the sender if an outcome associated with the receipt of that message has not materialized

• The loss of messages is normally dealt with by setting time-out intervals and re-sending the message some number of times

• Sequentially numbering (or timestamping) the messages is a useful technique that can be employed to distinguish messages
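An illustrative sketch of the two techniques together: the sender retries after a timeout, and the receiver uses the sequence number to discard duplicates caused by retransmission. The message format, retry limit and the simulated lossy channel are invented for the example.

import java.util.HashSet;
import java.util.Set;

// Sender retries after a timeout; the receiver remembers sequence numbers it
// has already seen so retransmitted duplicates are delivered only once.
public class ReliableSend {

    static class Receiver {
        private final Set<Long> seen = new HashSet<>();

        // Returns true if the message is new, false if it is a duplicate.
        boolean deliver(long seqNo, String payload) {
            if (!seen.add(seqNo)) return false;
            System.out.println("delivered #" + seqNo + ": " + payload);
            return true;
        }
    }

    // Hypothetical unreliable channel: drops the first attempt.
    static boolean sendOverNetwork(int attempt) {
        return attempt > 1;
    }

    public static void main(String[] args) {
        Receiver receiver = new Receiver();
        long seqNo = 42;
        for (int attempt = 1; attempt <= 3; attempt++) {    // bounded number of retries
            if (sendOverNetwork(attempt)) {
                receiver.deliver(seqNo, "update file-x");
                receiver.deliver(seqNo, "update file-x");   // a late duplicate is ignored
                break;
            }
            // time-out expired: re-send with the same sequence number
        }
    }
}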

Page 30:

LOST MESSAGES

• What is the allowable range of sequence numbers?

• For how long can a message get stuck somewhere in the network and then turn up at your ‘doorstep’?

• How many timestamps of received messages will have to be saved in order to be able to spot the duplicate messages?

• What if a node crashes and loses its record of numbers?

Page 31:

FAILED NODES

• Crash failure while participating in a ‘resource access’ operation
• Failed node is a coordinator or leader
• Stateless node
• Atomic update
  • Two-phase commit
• Available copies (AC) protocol
  • Allows read access to any copy and write access to all available copies

Page 32:

FAILURE

Partitioning

[Diagram: nodes a, b, c, d and e; a broken link or a failed node splits the network into separate partitions, e.g. {a, b, c} and {d, e}]

Page 33:

FAILURE

Partitioning
• Distinguished Partition
• Quorum or Vote adjustment
• Dynamic Voting
• Dynamic Linear
• Optimistic schemes

Page 34:

FAILURE

• Partitioning
  • Quorum or vote adjustment
    • Operable nodes change their vote or quorum assignments when they can no longer communicate with the entire network
    • Consensus driven or autonomous reassignment of votes

Page 35:

FAILURE

• Partitioning
  • Dynamic voting
    • There is a version number associated with each copy of the replicated data area or file
    • The version number is initially zero and is incremented by one at each update to the copy
    • An integer variable called the ‘update sites cardinality’, equal to the number of sites that participated in the most recent update to the file, is also associated with each copy of the file
    • Therefore, if 12 is the highest version number in a partition and the update sites cardinality corresponding to the copy with that version number is 5, there must be at least 3 sites in that partition with version number 12 to allow a further update
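The rule in the last bullet, that a partition may update only if it holds a majority of the sites that took part in the latest update, can be sketched as a small check; the data layout below is invented for illustration.

// Dynamic voting: a partition may perform an update only if it contains a
// majority of the sites that participated in the most recent update.
public class DynamicVotingCheck {

    // versionNumbers[i] and updateSitesCardinality[i] describe copy i held in this partition.
    public static boolean canUpdate(int[] versionNumbers, int[] updateSitesCardinality) {
        int latest = 0;
        for (int v : versionNumbers) latest = Math.max(latest, v);

        int copiesAtLatest = 0;
        int cardinality = 0;
        for (int i = 0; i < versionNumbers.length; i++) {
            if (versionNumbers[i] == latest) {
                copiesAtLatest++;
                cardinality = updateSitesCardinality[i];   // recorded with the latest copy
            }
        }
        // A majority of the sites that made the last update must be present.
        return copiesAtLatest > cardinality / 2.0;
    }

    public static void main(String[] args) {
        // Highest version is 12 and its update sites cardinality is 5:
        // 3 copies at version 12 form a majority of 5, so the update may proceed.
        System.out.println(canUpdate(new int[]{12, 12, 12, 11}, new int[]{5, 5, 5, 5}));  // true
        System.out.println(canUpdate(new int[]{12, 12, 11, 11}, new int[]{5, 5, 5, 5}));  // false
    }
}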

Page 36:

FAILURE

• Dynamic linear
  • A third variable, ‘distinguished site’, is added to each site holding a copy of the file
  • All the sites participating in an update must agree on a site as the distinguished site
  • In the event that an even number of sites participated in an update, and a subsequent partition contained half of those sites, dynamic linear allows updates to proceed in the partition with the distinguished site
• Optimistic schemes
  • May allow processing to continue in more than one partition

Page 37:

TRANSACTIONS

• Identification
• Concurrent Transactions
• Atomic Transactions
• Two-phase Commit
• Nested Transactions

Page 38:

TRANSACTIONS

Identification

begin transaction

open file-x

read file-x

write file-x

close file-x

end transaction

Page 39:

CONCURRENT TRANSACTIONS

• Problems
  • Lost Update
  • Uncommitted Dependency
  • Inconsistent analysis
• Serializability theory
• Locks
• Two-Phase Lock

Page 40:

ATOMIC TRANSACTIONS

An atomic transaction either completes successfully or it has no effect

Shadow-page technique

[Diagram: shadow-page technique. The file directory maps a FID to a page map entry; each page map entry records, for a page no., the old block and the new block; an intentions log holds (FID, page no., state of transactions, etc.), and the old page and the new page coexist until the transaction commits]

Page 41:

ATOMIC TRANSACTIONS

An atomic transaction either completes successfully or it has no effect

Log technique

[Diagram: log technique. The file map takes a FID to the old page; the log file map takes (FID, page no.) to a log record in the log file; the log record holds (FID, page no., new page)]

Page 42:

ATOMIC TRANSACTIONS

The ACID Test
• Atomic
• Consistent
• Isolated
• Durable

Page 43:

TRANSACTIONS

Two-Phase Commit
• Coordinator
• First Phase
  • A ‘go either way’ or ‘prepare’ message to all the participating sites
  • A ‘ready’ or ‘OK’ message to the coordinator
• Second Phase
  • A ‘commit’ message to the cooperating sites.
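A minimal single-process sketch of the coordinator's side of the protocol; the Participant interface and the decision logic are invented for illustration, and a real implementation would also log decisions and handle timeouts.

import java.util.List;

// Two-phase commit, coordinator's view: phase 1 collects votes with a
// 'prepare' message; phase 2 sends 'commit' only if every site voted ready,
// otherwise 'abort'.
public class TwoPhaseCommitCoordinator {

    // Hypothetical participant interface; in practice these calls cross the network.
    interface Participant {
        boolean prepare();   // returns true for 'ready' / 'OK'
        void commit();
        void abort();
    }

    public static boolean run(List<Participant> sites) {
        // Phase 1: ask every participating site to go either way.
        boolean allReady = true;
        for (Participant p : sites) {
            if (!p.prepare()) { allReady = false; break; }
        }
        // Phase 2: a single global decision is sent to the cooperating sites.
        for (Participant p : sites) {
            if (allReady) p.commit(); else p.abort();
        }
        return allReady;
    }

    public static void main(String[] args) {
        Participant site = new Participant() {
            public boolean prepare() { System.out.println("ready"); return true; }
            public void commit() { System.out.println("committed"); }
            public void abort() { System.out.println("aborted"); }
        };
        System.out.println(run(List.of(site, site)));
    }
}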

Page 44:

NESTED TRANSACTIONS

The children of a given parent transaction can run concurrently

begin parent-transaction

  begin child-transaction0
    read file0-at-site0
    write file0-at-site0
  end child-transaction0

  begin child-transaction1
    read file1-at-site1
    write file1-at-site1
  end child-transaction1

  begin child-transaction2
    read file2-at-site2
    write file2-at-site2
  end child-transaction2

end parent-transaction
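Since the children of a parent may run concurrently, the structure above can be sketched with each child transaction submitted as a task and the parent completing only when all children have finished; the file and site names follow the pseudocode, and the transaction bodies are placeholders.

import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Nested transactions: the parent starts three child transactions that run
// concurrently, then waits for every child before it can complete.
public class NestedTransactionSketch {

    static Callable<Void> child(int i) {
        return () -> {
            System.out.println("read file" + i + "-at-site" + i);
            System.out.println("write file" + i + "-at-site" + i);
            return null;
        };
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(3);
        try {
            // begin parent-transaction: launch the children concurrently
            List<Future<Void>> children = pool.invokeAll(List.of(child(0), child(1), child(2)));
            for (Future<Void> f : children) f.get();   // parent waits for every child
            System.out.println("end parent-transaction");
        } finally {
            pool.shutdown();
        }
    }
}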

Page 45:

BASE METHODOLOGY

• Basically Available
  • Fast response
• Soft State Service
  • No durable memory
• Eventual consistency
  • OK to send optimistic responses

