Introduction to Multimedia :: Chapter 3 (BBA & BBM)

ARCHITECTURES AND ISSUES FOR DISTRIBUTED MULTIMEDIA SYSTEMS (Chapter 3)

Murugavel.KN, Assistant Professor, Dept of CSE, CORE-BIT Campus, RAK, UAE

Intramedia synchronization, Intermedia synchronization

The time-sampled nature of digital video and audio, referred to as isochronous data, requires that delay and jitter be tightly bounded from the point of generation or retrieval to the point of presentation. This requirement is referred to as intramedia synchronization.

If several continuous media streams are presented in parallel, potentially from different points of generation or retrieval, constraints on their relative timing relationships are referred to as intermedia synchronization.

Both types of synchronization require coordinated design of the resource managers so that end-to-end synchronization can be met.
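
As an illustration of intermedia synchronization, the following minimal sketch (in Python) checks whether the current presentation timestamps of an audio stream and a video stream stay within an allowed skew; the 80 ms tolerance is an assumed value for illustration, not taken from the text.

```python
# Minimal sketch of an intermedia synchronization check (illustrative only).
# Assumes each stream exposes presentation timestamps in milliseconds; the
# 80 ms skew bound is a hypothetical tolerance, not a value from the text.

AUDIO_VIDEO_SKEW_BOUND_MS = 80.0  # assumed tolerance for lip sync

def in_sync(audio_pts_ms: float, video_pts_ms: float,
            bound_ms: float = AUDIO_VIDEO_SKEW_BOUND_MS) -> bool:
    """Return True if the two streams' current presentation times are
    within the allowed intermedia skew."""
    return abs(audio_pts_ms - video_pts_ms) <= bound_ms

# Example: audio at 1000 ms, video at 1045 ms -> still within tolerance.
print(in_sync(1000.0, 1045.0))  # True
```

In practice the tolerance would be chosen per application and per pair of media streams.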

Distributed Systems and Multimedia Systems

Multimedia computing and communication systems provide mechanisms for end-to-end delivery and generation of multimedia data that meet the QOS requirements of applications. Distributed multimedia systems add capabilities such as global name spaces, client/server computing, global clocks, and distributed object management. Such facilities enable the sharing of resources over a larger population of users. With the technology for multimedia computing, distributed services will be feasible over a wide area using broadband networks.

SYNCHRONIZATION, ORCHESTRATION, AND QOS ARCHITECTURE

A fundamental requirement for multimedia systems is to provide intramedia and intermedia synchronization. The intermediate subsystems involved in delivering a stream may introduce delay, jitter, and errors. Along any path these values are cumulative, and it is the cumulative delay, jitter, and error rate that must be managed to achieve the end-to-end QOS requirements.

The management of collections of resource managers to achieve end-to-end synchronization is referred to as orchestration. QOS parameters are considered to be a basic tool in orchestration.

The definition of QOS parameters which permit system-wide orchestration is referred to as a QOS architecture.

Synchronization

Synchronization is the coordinated ordering of events in time, and various mechanisms and formalisms for synchronization have been developed, ranging from low-level hardware-based techniques to abstractions for concurrent programming languages.

Systems using continuous media data do not require fundamentally new synchronization primitives, but do require consideration of two aspects of multimedia applications:

1. synchronization events have real-time deadlines, and
2. failure to synchronize can be handled using techniques such as frame repetition or skipping, such that the application can still continue to execute.

For a single media element which has a presentation deadline t_p, if the maximum end-to-end delay due to retrieval, generation, processing, transmission, etc., is D_max, then the scheduling of the presentation steps must begin by time t_p - D_max. If the media object is a stream of elements, not necessarily isochronous, with deadlines {t_p1, t_p2, t_p3, ...}, then the scheduling problem becomes meeting the sequence of deadlines {t_p1 - D_max, t_p2 - D_max, t_p3 - D_max, ...} for each object being presented. Any admissibility test which is to satisfy the synchronization requirement must consider the delay requirements of the application, i.e., D_req < D_max. If the average delay experienced per media element, D_avg, is less than D_max, then additional capacity exists to schedule other media objects, though with increased probability of failure.
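
A minimal sketch of this deadline arithmetic is shown below; the 150 ms D_max and the example deadlines are assumed values for illustration, and the frame-repetition fallback mentioned earlier is sketched alongside it.

```python
# Minimal sketch of deadline-driven scheduling for a media stream.
# D_MAX_MS and the example deadlines are illustrative assumptions.

D_MAX_MS = 150.0  # assumed maximum end-to-end delay bound

def latest_start_times(deadlines_ms, d_max_ms=D_MAX_MS):
    """For presentation deadlines {t_p1, t_p2, ...}, the presentation steps
    for each element must begin no later than t_pi - D_max."""
    return [t_p - d_max_ms for t_p in deadlines_ms]

def present(element_ready_ms, deadline_ms, previous_frame, current_frame):
    """Toy failure handling: repeat the previous frame if the current one
    misses its deadline; otherwise present the current frame."""
    if element_ready_ms > deadline_ms:
        return previous_frame  # frame repetition (the element could also be skipped)
    return current_frame

# Example: deadlines 40 ms apart (25 elements per second).
deadlines = [1000.0, 1040.0, 1080.0]
print(latest_start_times(deadlines))  # [850.0, 890.0, 930.0]
```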

If elements arrive prior to their presentation deadlines, due to variations in system latencies, buffering is required to hold each element in reserve until its presentation time t_pi. Due to the deadline specification, data errors in retrieval or transmission may not be correctable via re-retrieval or retransmission. Acceptable error rates are application and media dependent. In order to meet the requirements of schedulability of a continuous media stream, each subsystem must provide a maximum delay with some probability p. Further, in order to limit buffering requirements, the variation in delay, referred to as jitter, must also be bounded.
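
The relationship between the jitter bound and buffering can be sketched as follows; the 50 ms jitter bound and 40 ms element period are assumed values used only for illustration.

```python
# Minimal sketch of sizing a playout buffer from a jitter bound.
# The inputs are illustrative assumptions, not values from the text.

import math

def playout_buffer_elements(jitter_bound_ms: float, element_period_ms: float) -> int:
    """If arrival times can vary by up to jitter_bound_ms around their nominal
    schedule, roughly this many extra elements must be buffered so that early
    arrivals can be held until their presentation times."""
    return math.ceil(jitter_bound_ms / element_period_ms)

# Example: a 50 ms jitter bound with elements every 40 ms -> about 2 elements of buffer.
print(playout_buffer_elements(50.0, 40.0))  # 2
```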

Orchestration or Meta-Scheduling

Each resource manager includes a scheduling function which orders the current requests for servicing so as to meet the required performance bounds. For example, a continuous media file system schedules storage system access operations, and the network layer schedules traffic to the transport layer. An application requires the coordinated operation of these scheduling functions if end-to-end performance bounds are to be met. An approach to coordinating resource scheduling of the various systems is to add a layer between the application and the resource managers for orchestration or meta-scheduling.
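
The sketch below illustrates, under assumed subsystem names and delay figures, how such an orchestration layer might check whether the cumulative delay and jitter bounds offered by the resource managers along a path fit the application's end-to-end requirements.

```python
# Minimal sketch of an orchestration (meta-scheduling) admissibility check.
# The subsystem names and numeric bounds are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class ResourceManager:
    name: str
    max_delay_ms: float   # delay bound this manager can guarantee
    max_jitter_ms: float  # jitter bound this manager can guarantee

def admissible(path, end_to_end_delay_ms, end_to_end_jitter_ms):
    """Delay and jitter accumulate along the path, so the orchestration layer
    compares the cumulative bounds against the application's requirements."""
    total_delay = sum(rm.max_delay_ms for rm in path)
    total_jitter = sum(rm.max_jitter_ms for rm in path)
    return total_delay <= end_to_end_delay_ms and total_jitter <= end_to_end_jitter_ms

path = [
    ResourceManager("cm_file_system", 40.0, 5.0),
    ResourceManager("transport",      30.0, 10.0),
    ResourceManager("network",        60.0, 15.0),
]
print(admissible(path, end_to_end_delay_ms=150.0, end_to_end_jitter_ms=40.0))  # True
```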

3.2.3 QOS Architecture

Quality of service (QOS) is used in the OSI reference model to allow service users to communicate with the network service regarding data transmission requirements. In OSI, QOS is specified using a number of parameters which can be grouped into three sets: single transmission, multiple transmission, and connection mode. QOS parameters include, for example, transit delay.
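
As a hypothetical illustration of how a QOS architecture might carry such parameters between an application and the orchestration layer, the sketch below defines a simple parameter record; only transit delay is named in the text above, and the remaining fields and values are assumptions.

```python
# Minimal sketch of a per-connection QOS parameter record that an
# orchestration layer could reason about system-wide (illustrative only).

from dataclasses import dataclass

@dataclass
class QOSParameters:
    transit_delay_ms: float   # end-to-end transit delay bound (named in the text)
    jitter_ms: float          # assumed: bound on delay variation
    error_rate: float         # assumed: acceptable residual error rate
    throughput_kbps: float    # assumed: sustained throughput requirement

video_stream_qos = QOSParameters(
    transit_delay_ms=150.0,
    jitter_ms=30.0,
    error_rate=1e-4,
    throughput_kbps=1500.0,
)
print(video_stream_qos)
```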

Figure: Real-time multimedia system architecture: (a) the orchestration layer as a middle layer between the application and multimedia system services, (b) the orchestration layer as a service for providing synchronization for a distributed computing system, and (c) the orchestration function, with vertical arrows indicating control paths and horizontal arrows indicating data paths.

3.4 A FRAMEWORK FOR MULTIMEDIA SYSTEMS

The framework presented here provides an overall picture of the development of distributed multimedia systems from which a system architecture can be developed. The framework highlights the dominant feature of multimedia systems: the integration of multimedia computing and communications, including traditional telecommunications and telephony functions.

Low-cost multimedia technology is evolving to provide richer information processing and communications systems. These systems, though tightly interrelated, have distinct physical facilities, logical models, and functionality. Multimedia information systems extend the processing, storage, and retrieval capabilities of existing information systems by introducing new media data types, including image, audio, and video. These new data types offer perceptually richer and more accessible representations for many kinds of information. Multimedia communication systems extend existing point-to-point connectivity by permitting synchronized multipoint group communications. Additionally, the communication media include time-dependent visual forms as well as computer application conferencing.

Multimedia Distributed Processing Model

A layered view of the multimedia distributed processing model is shown in Figure 3.6. Models similar to this have been published by the Interactive Multimedia Association in its Architecture Reference Model and UNIX International's Open Distributed Multimedia Computing model. Each layer provides services to the layers above. Significant additions to the facilities of traditional computing environments include (from the top):

Scripting languages: Special-purpose programming languages for controlling interactive multimedia documents, presentations, and applications.

Figure 3.5 Each of the four models of the distributed multimedia systems framework specifies various components. Example components, which might be services, formats, and/or APIs, are shown in the periphery of the corresponding models.

Figure 3.3 Multimedia technology is facilitating the convergence of multimedia information processing systems and multimedia communications systems.

Figure 3.4 The framework consists of four interrelated models. The information and distributed processing models constitute the Multimedia Information System (MMIS). The conferencing and multiservice network models form the Multimedia Communications System (MCS).

Figure 3.6 Multimedia distributed processing model: a layered view of a distributed environment.

Media device control: A combination of toolkit functions, programming abstractions, and services which provide application programs access to multimedia peripheral equipment.

Interchange: Multimedia data formats and services for interchange of multimedia content.

Conferencing services: Facilities for managing multiparty communications using high-level call model abstractions.

Hypermedia engine: A hypermedia object server that stores multimedia documents for editing and retrieval.

Real-time scheduler: Operating system process or thread scheduling so as to meet real-time deadlines.
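
As a hedged illustration of the real-time scheduler layer, the sketch below applies an earliest-deadline-first (EDF) ordering to pending media tasks; EDF is a common real-time policy chosen here for illustration and is not prescribed by the text.

```python
# Minimal sketch of an earliest-deadline-first (EDF) ordering that a real-time
# scheduler layer might apply to pending media tasks (illustrative only).

import heapq

def run_edf(tasks):
    """tasks: iterable of (deadline_ms, name) tuples. Services pending requests
    in order of nearest deadline and returns the service order."""
    heap = list(tasks)
    heapq.heapify(heap)         # orders by deadline (first tuple element)
    order = []
    while heap:
        deadline, name = heapq.heappop(heap)
        order.append(name)      # a real scheduler would dispatch the task here
    return order

pending = [(1080.0, "video_frame_3"), (1000.0, "audio_block_1"), (1040.0, "video_frame_2")]
print(run_edf(pending))  # ['audio_block_1', 'video_frame_2', 'video_frame_3']
```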

