Transcript

VIDEO CONFERENCING

Presented by:

A.M.Vignesh

"This new world of geographical togetherness has been brought about, to a great extent, by man's scientific and technological genius. Man has been able to dwarf distance, place time in chains and carve highways through the stratosphere. Through our scientific genius, we have made the world a neighborhood."

-Martin Luther King, Jr., December, 1956

Introduction:

When people at different locations wish to conduct a meeting, a video conference provides a virtual feel as if all participants were physically present in one room. Through video conferencing, people from anywhere can join the meeting electronically. They can see each other, converse with each other, share PowerPoint presentations and documents during the meeting, and use a whiteboard facility that helps the presenter clarify any doubts the participants may have. The video conferencing feature permits the exchange of video and audio between the participants in a one-to-many configuration. All participants can see the video image of the presenter alongside their own (local) video.

Videoconferencing is an interactive method of communication. It can substitute for the actual physical presence of remote participants. This reduces travel costs as well as travel time and makes meeting attendance more convenient. Videoconferencing provides remote participants with much of the face-to-face familiarity that comes with physical presence.

Video conferencing in its most basic form is the transmission of image and speech back and forth between two or more physically separate locations (imagine a telephone call where you can see the speaker, or a television through which you can talk). Two-way communication can be sound only, or sound and moving images. Cameras simultaneously swap live pictures between the TV screens in each location. Cameras can be set to show the whole room, zoom in on one person, or focus on a whiteboard or overhead projector; a special document camera can be set up to send shots of papers, graphs or overheads. You can view the pictures you are sending at the same time using the picture-in-picture function on the screen. Videotaping of the entire meeting is also available if a recording is required. Three or more locations can be connected using multipoint videoconferencing.

These possibilities offer new opportunities for schools and adult education institutions, especially in the field of international contacts and distance learning.

At this stage videoconferencing is mostly used in higher education and for distance learning. However, the medium also offers many opportunities for adult education institutions, particularly when they are involved in international partnerships.

Until now videoconferencing has sometimes been considered rather impractical because of the high costs for hardware and communication, the limited quality of the images, reduced contact with the audience in the remote location(s) and the technical threshold for the user. However, hardware costs are dropping rapidly and compression techniques are getting better. The user-friendliness of the systems is also increasing, and the evolution of data transmission (e.g. the ISDN standard, IP) and carrier bandwidth is very promising for the future. In the light of these developments, adult education institutions might reconsider the use of videoconferencing.

Pedagogically, a videoconferencing session should only be undertaken if there is an underpinning educational reason for doing so. Educators should not be driven by the excitement of the technology and the fact that communication in vision and sound can actually take place electronically. Where there is no plan, no content and only a thin understanding of the technology, there will be little benefit to staff and students alike.

Videoconferencing is expensive and most institutions are not in a position to buy such a system. Therefore it can be useful to find out whether there are institutions in your region (colleges, universities, private companies) with a videoconference system that are willing to let you use it for a fair price.

History:

Simple analog videoconferences could be established as early as the invention of the television. Such videoconferencing systems usually consisted of two closed-circuit television systems connected via coax cable or radio. An example was the German Reich Postzentralamt (Post Office) network set up in Berlin and several other cities from 1936 to 1940.

During the first manned space flights, NASA used two radio-frequency (UHF or VHF) links, one in each direction. TV channels routinely use this kind of videoconferencing when reporting from distant locations, for instance. Later, mobile links to satellites using specially equipped trucks became rather common.

This technique was very expensive, though, and could not be used for applications such as telemedicine, distance education, and business meetings. Attempts at using normal telephony networks to transmit slow-scan video, such as the first systems developed by AT&T, failed mostly due to the poor picture quality and the lack of efficient video compression techniques. The greater 1 MHz bandwidth and 6 Mbit/s bit rate of Picturephone in the 1970s also did not cause the service to prosper.

It was only in the 1980s that digital telephony transmission networks became possible, such as ISDN, assuring a minimum bit rate (usually 128 kilobits/s) for compressed video and audio transmission. During this time, there was also research into other forms of digital video and audio communication. Many of these technologies, such as the media space, are not as widely used today as videoconferencing but were still an important area of research. The first dedicated systems started to appear on the market as ISDN networks were expanding throughout the world. One of the first commercial videoconferencing systems sold to companies came from PictureTel Corp., which had an initial public offering in November 1984.

From 1984 to 1989, Concept Communication, Inc. of the United States replaced the then 100-pound, US$100,000 computers necessary for teleconferencing with a patented $12,000 circuit board which doubled the video frame rate from 15 frames per second to 30 frames per second, and which fit into standard personal computers. The company's founder, William J. Tobin, also secured a patent for a codec for full-motion videoconferencing, first demonstrated at AT&T Bell Labs in 1986.

Videoconferencing systems throughout the 1990s rapidly evolved from very expensive proprietary equipment, software and network requirements to standards based technology that is readily available to the general public at a reasonable cost.

Finally, in the 1990s, IP (Internet Protocol) based videoconferencing became possible, and more efficient video compression technologies were developed, permitting desktop, or personal computer (PC)-based videoconferencing. In 1992 CU-SeeMe was developed at Cornell by Tim Dorcey et al. In 1995 the first public videoconference and peacecast between the continents of North America and Africa took place, linking a technofair in San Francisco with a techno-rave and cyberdeli in Cape Town. At the Winter Olympics opening ceremony in Nagano, Japan, Seiji Ozawa conducted the Ode to Joy from Beethoven's Ninth Symphony simultaneously across five continents in near-real time.

While videoconferencing technology was initially used primarily within internal corporate communication networks, one of the first community service usages of the technology started in 1992 through a unique partnership between PictureTel and IBM, which at the time were promoting a jointly developed desktop-based videoconferencing product known as the PCS/1. Over the next 15 years, Project DIANE (Diversified Information and Assistance Network) grew to utilize a variety of videoconferencing platforms to create a multistate cooperative public service and distance education network consisting of several hundred schools, neighborhood centers, libraries, science museums, zoos and parks, public assistance centers, and other community-oriented organizations.

In the 2000s, video telephony was popularized via free Internet services such as Skype and iChat, web plugins and online telecommunication programs which promoted low-cost, albeit low-quality, video conferencing to virtually every location with an Internet connection.

In May 2005, the first high-definition video conferencing systems, produced by LifeSize Communications, were displayed at the Interop trade show in Las Vegas, Nevada, able to provide 30 frames per second at a 1280 by 720 display resolution. Polycom introduced its first high-definition video conferencing system to the market in 2006. High-definition resolution has since become a standard feature, with most major suppliers in the videoconferencing market offering it.

Recent technological developments by Librestream have extended the capabilities of video conferencing systems beyond the boardroom for use with hand-held mobile devices that combine video, audio and on-screen drawing capabilities broadcasting in real time over secure networks, independent of location. Mobile collaboration systems allow multiple people in previously unreachable locations, such as workers on an off-shore oil rig, to view and discuss issues with colleagues thousands of miles away.

Standards:

The International Telecommunication Union (ITU) (formerly the Consultative Committee on International Telegraphy and Telephony, CCITT) has defined several standards for videoconferencing:

Figure: The Tandberg E20 is an example of a SIP-only device. Such devices need to route calls through a Video Communication Server to reach H.323 systems, a process known as "interworking".

H.310: Broadband over Asynchronous Transfer Mode (ATM) networks.

H.320: Narrowband over Integrated Services Digital Network (N-ISDN).

H.321: Videoconferencing using ATM connections.

H.322: Narrowband over LANs (Local Area Networks) that provide guaranteed Quality of Service (QoS), an improved method of Internet Protocol (IP) transmission.

H.323: Narrowband over packet-based networks (IP) that do not provide guaranteed QoS.

H.324: Very narrow bandwidth over the existing General Switched Telephone Network (GSTN).

SIP: Multimedia multicast transmissions over IP. Currently used more in Voice over IP transmissions, but slowly moving into the videoconferencing world.

H.264 SVC (Scalable Video Coding).

ITU H.320 is known as the standard for public switched telephone networks (PSTN), or videoconferencing over integrated services digital networks. While still prevalent in Europe, ISDN was never widely adopted in the United States of America and Canada.

H.264 SVC is a compression standard that enables video conferencing systems to achieve highly error resilient IP video transmission over the public Internet without quality of service enhanced lines. This standard has enabled wide scale deployment of high definition desktop video conferencing and made possible new architectures which reduce latency between transmitting source and receiver, resulting in fluid communication without pauses.

In addition, an attractive factor for IP videoconferencing is that it is easier to set up for use with a live videoconferencing call along with web conferencing for data collaboration. These combined technologies enable users to have a much richer multimedia environment for live meetings, collaboration and presentations.

ITU V.80: videoconferencing is generally compatible with the H.324 standard for point-to-point video telephony over regular phone lines.

There are four basic types of endpoints:

Room systems

Desktop systems

Software-based systems

Tele-presence Systems

Room Systems:

All come with an intuitive GUI interface

Almost all use remote controls or some other external interface

Most have one or more external microphones

Most hide the administrative features from the end user

Many will password-protect the administrative interface to prevent users from changing settings

Room Systems Examples:

Polycom

VSX line

HDX line

Tandberg

Set-top Series

Edge Series

LifeSize

No support for H.261 video

VTEL IPanel

Desktop Endpoints: These have built-in processors to handle some of the video encoding.

Most will rely on your PC's monitor or will have a built-in monitor. Some with the built-in monitor can take the place of your current monitor or be used for a dual-screen setup

Best to use only if you have one to three people at your site

Very few have external inputs for VGA, external cameras, etc.

Many have manual ("strong-arm") focusing, with which it can be difficult to get the focus exactly right

Becoming less popular, and less cost-effective, compared to software endpoints on today's faster processors

Most desktop endpoints with a built-in monitor are aimed at the executive level.

Desktop Endpoint Examples:

Polycom

VSX 3000

V700

HDX 4000

Tandberg 1000

Software Endpoints:

Most work only with Windows OS

Rely on your monitor for displaying video

Use a USB or FireWire webcam for capturing video

Most software packages run in the $150-per-endpoint range and offer a free trial period download.

Software Endpoint Examples:

Polycom PVX

www.polycom.com

Xmeeting

http://xmeeting.sourceforge.net/

RADVISION eConf

www.radvision.com

Tandberg Movi

http://www.tandberg.com/products/pc_videoconferencing.jsp

Telepresence Setups (H.323): Multiple systems working together

Polycom

Tandberg

Lifesize

Specific room setup

Illusion of one single room

Technology:

Dual display: An older Polycom VSX 7000 system and camera used for videoconferencing, with two displays for simultaneous broadcast from separate locations.

The core technology used in a video conferencing system is digital compression of audio and video streams in real time. The hardware or software that performs compression is called a codec (coder/decoder). Compression rates of up to 1:500 can be achieved. The resulting digital stream of 1s and 0s is subdivided into labeled packets, which are then transmitted through a digital network of some kind (usually ISDN or IP). The use of audio modems in the transmission line allows for the use of POTS, the Plain Old Telephone System, in some low-speed applications such as video telephony, because they convert the digital pulses to/from analog waves in the audio spectrum range.
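To get a feel for why such compression ratios matter, here is a rough back-of-the-envelope calculation. The frame size, color depth and frame rate are illustrative assumptions, not figures from the text; only the 1:500 ratio comes from the paragraph above.

```java
public class BandwidthSketch {
    public static void main(String[] args) {
        // Illustrative assumptions: 640x480 frames, 24 bits per pixel, 30 fps.
        long raw = 640L * 480 * 24 * 30;   // raw bit rate in bit/s
        long compressed = raw / 500;       // with the 1:500 compression mentioned above
        System.out.println("raw        = " + raw / 1_000_000.0 + " Mbit/s");
        System.out.println("compressed = " + compressed / 1000.0 + " kbit/s");
    }
}
```

Uncompressed video at these settings needs roughly 221 Mbit/s; at 1:500 it drops to around 442 kbit/s, which is in the same ballpark as the multi-line ISDN connections discussed later.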

The other components required for a videoconferencing system include:

Video input: video camera or webcam

Video output: computer monitor, television or projector

Audio input: microphones, CD/DVD player, cassette player, or any other source of pre-amp audio output

Audio output: usually loudspeakers associated with the display device, or a telephone

Data transfer: analog or digital telephone network, LAN or Internet

Computer: a data processing unit that ties together the other components, does the compressing and decompressing, and initiates and maintains the data linkage via the network

There are basically two kinds of videoconferencing systems:

1. Dedicated systems have all required components packaged into a single piece of equipment, usually a console with a high-quality remote-controlled video camera. These cameras can be controlled at a distance to pan left and right, tilt up and down, and zoom; they became known as PTZ cameras. The console contains all electrical interfaces, the control computer, and the software- or hardware-based codec. Omnidirectional microphones are connected to the console, as well as a TV monitor with loudspeakers and/or a video projector. There are several types of dedicated videoconferencing devices:

1. Large group videoconferencing: non-portable, large, more expensive devices used for large rooms and auditoriums.

2. Small group videoconferencing: non-portable or portable, smaller, less expensive devices used for small meeting rooms.

3. Individual videoconferencing: usually portable devices, meant for single users, with fixed cameras, microphones and loudspeakers integrated into the console.

2. Desktop systems are add-ons (hardware boards, usually) to normal PCs, transforming them into videoconferencing devices. A range of different cameras and microphones can be used with the board, which contains the necessary codec and transmission interfaces. Most desktop systems work with the H.323 standard. Videoconferences carried out via dispersed PCs are also known as e-meetings.

Conferencing layers:

The components within a Conferencing System can be divided up into several different layers: User Interface, Conference Control, Control or Signal Plane and Media Plane.

Video conferencing user interfaces can be either graphical or voice-responsive. Many of us have encountered both types: graphical interfaces normally appear on a computer or television, while voice-responsive interfaces are common on the phone, where we are told to select from a number of choices by saying or pressing a number. User interfaces for conferencing serve a number of purposes, including scheduling, setup, and making the call. Through the user interface the administrator is able to control the other three layers of the system.

Conference Control performs resource allocation, management and routing. This layer along with the User Interface creates meetings (scheduled or unscheduled) or adds and removes participants from a conference.

The Control (Signaling) Plane contains the stacks that signal different endpoints to create a call and/or a conference. Signals can be, but aren't limited to, the H.323 and Session Initiation Protocol (SIP) protocols. These signals control incoming and outgoing connections as well as session parameters.

The Media Plane controls the audio and video mixing and streaming. This layer manages the Real-time Transport Protocol (RTP), the User Datagram Protocol (UDP) and the Real-time Transport Control Protocol (RTCP). RTP over UDP normally carries information such as the payload type (i.e. the type of codec), frame rate, video size and many others. RTCP, on the other hand, acts as a quality-control protocol for detecting errors during streaming.

Multipoint videoconferencing:

Simultaneous videoconferencing among three or more remote points is possible by means of a Multipoint Control Unit (MCU). This is a bridge that interconnects calls from several sources (in a similar way to an audio conference call). All parties call the MCU, or the MCU can call the parties which are going to participate, in sequence. There are MCU bridges for IP- and ISDN-based videoconferencing. There are MCUs which are pure software, and others which are a combination of hardware and software. An MCU is characterized by the number of simultaneous calls it can handle, its ability to transpose data rates and protocols, and features such as Continuous Presence, in which multiple parties can be seen on screen at once. MCUs can be stand-alone hardware devices, or they can be embedded into dedicated videoconferencing units.

The MCU consists of two logical components:

1. A single multipoint controller (MC), and

2. Multipoint Processors (MP) sometimes referred to as the mixer.

The MC controls the conferencing while it is active on the signaling plane, which is simply where the system manages conference creation, endpoint signaling and in-conference controls. This component negotiates parameters with every endpoint in the network and controls conferencing resources. While the MC controls resources and signaling negotiations, the MP operates on the media plane and receives media from each endpoint. The MP generates output streams for each endpoint and redirects the information to the other endpoints in the conference.

Some systems are capable of multipoint conferencing with no MCU, stand-alone, embedded or otherwise. These use a standards-based H.323 technique known as "decentralized multipoint", where each station in a multipoint call exchanges video and audio directly with the other stations, with no central "manager" or other bottleneck. The advantages of this technique are that the video and audio will generally be of higher quality, because they don't have to be relayed through a central point, and that users can make ad-hoc multipoint calls without any concern for the availability or control of an MCU. This added convenience and quality comes at the expense of increased network bandwidth, because every station must transmit to every other station directly.

Videoconferencing modes:

Videoconferencing systems have several common operating modes that are used:

1. Voice-Activated Switch (VAS).

2. Continuous Presence.

In VAS mode, the MCU switches which endpoint can be seen by the other endpoints based on voice levels. If there are four people in a conference, the only one that will be seen is the site which is talking: the location with the loudest voice is shown to the other participants.
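The selection logic can be sketched as follows. This is a toy illustration of the idea, not any vendor's implementation; the endpoint names and the use of mean-square energy as the "loudness" measure are assumptions.

```java
import java.util.Map;

public class VoiceActivatedSwitch {
    // Mean-square energy of one endpoint's audio window (a simple loudness proxy).
    static double energy(short[] samples) {
        double sum = 0;
        for (short s : samples) sum += (double) s * s;
        return sum / samples.length;
    }

    // Return the id of the loudest endpoint; its video would be forwarded to the others.
    static String loudest(Map<String, short[]> audioByEndpoint) {
        String winner = null;
        double best = -1;
        for (Map.Entry<String, short[]> e : audioByEndpoint.entrySet()) {
            double en = energy(e.getValue());
            if (en > best) { best = en; winner = e.getKey(); }
        }
        return winner;
    }

    public static void main(String[] args) {
        Map<String, short[]> frame = Map.of(
            "siteA", new short[]{100, -120, 90},     // quiet
            "siteB", new short[]{4000, -3500, 3800}, // talking
            "siteC", new short[]{10, 5, -8});        // silent
        System.out.println(loudest(frame)); // prints siteB
    }
}
```

Real MCUs also apply hold-over timers so the picture does not flicker between speakers on every syllable.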

Continuous Presence mode displays multiple participants at the same time. The MP in this mode combines the streams from the different endpoints into a single video image. In this mode, the MCU normally sends the same image to all participants. These composite images are typically called layouts and can vary depending on the number of participants in a conference.

Echo cancellation:

A fundamental feature of professional videoconferencing systems is Acoustic Echo Cancellation (AEC). Echo can be defined as the reflected source wave interfering with the new wave created by the source. AEC is an algorithm which is able to detect when sounds or utterances from the audio output of the system re-enter the audio input of the videoconferencing codec after some time delay. If unchecked, this can lead to several problems, including:

1. the remote party hearing their own voice coming back at them (usually significantly delayed)

2. strong reverberation, rendering the voice channel useless as it becomes hard to understand, and

3. howling created by feedback.

Echo cancellation is a processor-intensive task that usually works over only a narrow range of sound delays.
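As a toy illustration of the principle (not the algorithm used by any particular product), a normalized-LMS adaptive filter can learn the echo path from the loudspeaker signal and subtract the predicted echo from the microphone input. The filter length, step size and simulated echo path below are all assumptions for the sketch.

```java
public class EchoCancelSketch {
    // Run a toy NLMS echo canceller on synthetic data; return the ratio of
    // echo energy to residual energy over the final 1000 samples.
    static double suppressionRatio() {
        int taps = 8;                         // length of the adaptive FIR filter
        double[] w = new double[taps];
        double mu = 0.5;                      // NLMS step size (illustrative)

        java.util.Random rnd = new java.util.Random(42);
        double[] x = new double[5000];        // far-end (loudspeaker) signal
        for (int i = 0; i < x.length; i++) x[i] = rnd.nextGaussian();

        double echoEnergy = 0, errEnergy = 0;
        for (int n = taps; n < x.length; n++) {
            double echo = 0.5 * x[n - 3];     // simulated room echo path
            double yhat = 0;                  // filter's echo estimate
            for (int k = 0; k < taps; k++) yhat += w[k] * x[n - k];
            double e = echo - yhat;           // residual after cancellation
            double norm = 1e-6;               // NLMS normalization term
            for (int k = 0; k < taps; k++) norm += x[n - k] * x[n - k];
            for (int k = 0; k < taps; k++) w[k] += mu * e * x[n - k] / norm;
            if (n >= x.length - 1000) {       // measure over the tail, after convergence
                echoEnergy += echo * echo;
                errEnergy += e * e;
            }
        }
        return echoEnergy / errEnergy;
    }

    public static void main(String[] args) {
        System.out.printf("echo suppressed by roughly %.0fx%n", suppressionRatio());
    }
}
```

Production AEC additionally handles double-talk (both sides speaking), changing room acoustics, and the narrow delay window mentioned above.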

Basic requirements for videoconferencing:

In order to videoconference you need audio-visual hardware (camera, monitor, speakers), the necessary software and a carrier for transmitting the information.

1. Hardware:

There are three main types of videoconferencing systems: room systems, compact systems and desktop systems

Room (roll-about) systems are big videoconferencing systems standing in a room, usually containing one or two TV monitors (displaying outgoing and incoming information), a camera (with pan, tilt and zoom options) on top and a codec. The system is mostly used in bigger rooms and for a large audience, usually in a distance learning programme with a remote guest speaker. This type of system usually works with a 1-3 line ISDN connection and has good image and sound quality. Recently there is compatibility with broadband IP (Internet Protocol) videoconferencing as well.

Compact videoconferencing systems are small boxes (the size of a video recorder) containing the codec and a more or less extended set of connection options: video, audio and data input and output and different types of network connections. Most systems have a camera (with pan, tilt and zoom options) on top. The system can be used as a mobile system or as the core system in an existing audio/video setting. These systems usually use the ISDN protocol but are evolving towards IP.

Desktop systems: a multimedia personal computer with special hardware and software is another (and cheaper) option. The monitor is smaller and therefore not suitable for a large audience. Also, the small camera on top of the monitor can capture two to at most five people with workable image quality. The incoming and outgoing information is displayed in two windows on the screen, with picture-in-picture options. This type of system is often used with a single ISDN connection and as a consequence can transmit less information. IP videoconferencing with webcams offers cheaper connections.

Extras: All systems can be embedded in extra video and audio equipment.

Audio: extra microphones and speakers can be added. Existing audio mixers and hi-fi equipment in the room can be connected to the system (beware of echo and sound loops). You can of course also have audio input via a cassette deck or CD player.

Video: one can add extra cameras. Especially for desktop systems it is wise to add an extra fixed camera with pan, tilt and zoom options (and presets) and/or a handheld video camera. With a switch box the number of extra cameras can go up to four or more.

In all cases consider adding a document camera. This is a camera fixed on a stand with extra lights and a zoom system; it is very useful for showing documents, pictures and small objects. A video player can also be added. Both can be connected to a data/video projector.

Data: in most cases a computer can be added to the system in order to give a PowerPoint presentation to a remote audience. If you have desktop systems on both sides you can also do file sharing and whiteboarding.

2. Compression:

Since the bandwidth of the connection is usually limited, it is important to reduce the amount of information you send through the wires. Compression techniques are based on limiting the number of frames per second (ranging from 5 to 30 frames/sec), eliminating redundant information (e.g. sending only the differences from the previous frame), or reducing the resolution or the size of the images. It is the codec's (coder-decoder) task to digitize and compress the analogue information that is going to be sent, and to decompress and reconstruct the received data. This, together with the transmission time over long distances, causes a time delay, an element of videoconferences that has to be taken into account. A smaller bandwidth and higher compression of course reduce the quality of the images, and it is important in all cases to inform your audience that they will not be dealing with full-motion video quality.
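The "send only the differences from the previous frame" idea can be sketched very simply. This is an illustrative toy, not a real video codec: frames are flat arrays of pixel intensities, and the change threshold is an assumption.

```java
public class FrameDiffSketch {
    // Return the indices of pixels that changed noticeably since the previous frame.
    // Only these (index, value) pairs would need to be transmitted.
    static int[] changedPixels(int[] prev, int[] cur, int threshold) {
        java.util.List<Integer> changed = new java.util.ArrayList<>();
        for (int i = 0; i < cur.length; i++) {
            if (Math.abs(cur[i] - prev[i]) > threshold) changed.add(i);
        }
        return changed.stream().mapToInt(Integer::intValue).toArray();
    }

    public static void main(String[] args) {
        int[] prev = {10, 10, 10, 10, 10, 10};
        int[] cur  = {10, 10, 90, 10, 10, 12}; // only pixel 2 moved noticeably
        int[] delta = changedPixels(prev, cur, 5);
        // Instead of six pixel values, only one index/value pair is sent.
        System.out.println(java.util.Arrays.toString(delta)); // prints [2]
    }
}
```

Real codecs such as H.263 do this per block with motion compensation rather than per pixel, but the bandwidth saving comes from the same observation: most of a video frame is unchanged from the last one.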

3. Transmission:

At this stage most (room and compact) videoconferencing systems use ISDN (Integrated Services Digital Network), working over regular telephone lines in one, two or three pairs of so-called B channels with a bandwidth of 128 Kbps per pair.

Three ISDN lines (384 Kbps) deliver very good quality images and sound. One ISDN line (used for most desktop systems) delivers good workable material but requires preparation and correct behavior.

Audio/Video Transmitter:

The transmitter module is a JMF/RTP-based application. The JMF Manager class is used to create a merged data source object for audio and video. This data source object is passed to the Manager class to create a Processor object. While the Processor is in the configured state, its track control objects are obtained for the individual audio and video tracks.

Codecs for compression and encryption are set on both the audio and video tracks. After setting the codecs on both tracks, the processor's realize method is called. While the processor is in the realized state, the output data source is created from the processor. The output data source object is passed to the RTP manager object to create RTP sessions for audio and video transmission. We developed our own RTP connector class to send RTP and RTCP packets to the remote participants over the network by encapsulating them in UDP packets, and to receive RTP and RTCP packets from the remote participants. Since we are not using RTCP for getting feedback from the receivers, we do not transmit RTCP data all the time; from our RTP connector class we can control the amount of RTCP data to be transmitted. An object of the RTP connector class is passed to the RTP manager class at the time the RTP manager object is created.
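To illustrate the encapsulation the RTP connector performs (a hand-rolled sketch, not the JMF classes the text describes), a minimal RTP packet is a 12-byte fixed header followed by the payload, and each such packet travels inside one UDP datagram. The frame bytes, SSRC and port here are placeholders.

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;

public class RtpOverUdpSketch {
    // Build a minimal RTP packet: 12-byte fixed header (per RFC 3550) + payload.
    static byte[] rtpPacket(int payloadType, int seq, long timestamp, long ssrc, byte[] payload) {
        byte[] pkt = new byte[12 + payload.length];
        pkt[0] = (byte) 0x80;                 // version 2, no padding/extension/CSRC
        pkt[1] = (byte) (payloadType & 0x7F); // marker bit 0, 7-bit payload type
        pkt[2] = (byte) (seq >> 8);           // 16-bit sequence number, big-endian
        pkt[3] = (byte) seq;
        for (int i = 0; i < 4; i++) pkt[4 + i] = (byte) (timestamp >> (24 - 8 * i));
        for (int i = 0; i < 4; i++) pkt[8 + i] = (byte) (ssrc >> (24 - 8 * i));
        System.arraycopy(payload, 0, pkt, 12, payload.length);
        return pkt;
    }

    public static void main(String[] args) throws Exception {
        byte[] frame = "h263-frame-bytes".getBytes();          // stand-in for a compressed frame
        byte[] pkt = rtpPacket(34, 7, 90000L, 0xCAFEL, frame); // 34 = H.263 static payload type
        // One UDP datagram per RTP packet, as in the text's RTP connector.
        try (DatagramSocket sock = new DatagramSocket()) {
            sock.send(new DatagramPacket(pkt, pkt.length,
                    InetAddress.getLoopbackAddress(), 5004));
        }
        System.out.println("sent " + pkt.length + " bytes");
    }
}
```

Sending one datagram per RTP packet, as described above, keeps packet boundaries intact so a lost datagram costs exactly one frame fragment rather than desynchronizing the stream.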

Video Transmitter: We use IBM's implementation of the H.263 encoder for video compression. We developed the codecs for encrypting and decrypting the audio/video streams using the Triple-DES encrypter from Java's crypto library. Video frames are generated by the data source in YUV format. Each YUV frame is compressed to an H.263 frame. The compressed H.263 frames are encrypted using the Triple-DES encrypter and are passed to the RTP manager object. The RTP manager encapsulates the encrypted H.263 frames in RTP packets and passes them to the RTP connector object, which in turn sends the RTP packets to the remote participants by encapsulating them in UDP packets.
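A minimal sketch of the per-frame Triple-DES step using the standard javax.crypto API. The key bytes and frame contents are placeholders; in the real system the key is the session key distributed by the server, and the exact cipher mode used there is not specified in the text (ECB is used here only for brevity).

```java
import javax.crypto.Cipher;
import javax.crypto.spec.SecretKeySpec;
import java.util.Arrays;

public class TripleDesFrameSketch {
    public static void main(String[] args) throws Exception {
        // Placeholder 24-byte Triple-DES key; really the conference session key.
        byte[] keyBytes = "0123456789abcdef01234567".getBytes();
        SecretKeySpec key = new SecretKeySpec(keyBytes, "DESede");

        byte[] frame = "compressed H.263 frame".getBytes(); // stand-in payload

        Cipher enc = Cipher.getInstance("DESede/ECB/PKCS5Padding");
        enc.init(Cipher.ENCRYPT_MODE, key);
        byte[] cipherText = enc.doFinal(frame);   // this is what goes into the RTP packet

        Cipher dec = Cipher.getInstance("DESede/ECB/PKCS5Padding");
        dec.init(Cipher.DECRYPT_MODE, key);
        byte[] roundTrip = dec.doFinal(cipherText);

        System.out.println(Arrays.equals(frame, roundTrip)); // prints true
    }
}
```

A production design would use a mode with an IV (e.g. CBC or CTR) so identical frames do not encrypt to identical ciphertext.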

The RTP connector sends a separate UDP packet for each RTP packet to each of the remote participants. We can add or remove a recipient dynamically. The H.263 encoder provides some controls, such as bit-rate control and key-frame-rate control, that we can use to control the QoS dynamically during transmission.

Figure: Video Transmitter.

Audio Transmitter: We use IBM's implementation of the G.723 encoder for audio compression. G.723 is a very efficient compression technique: using it we can transmit good-quality human voice at 6 to 7 kbps. Since the G.723 encoder generates very small packets, a G.723 packetizer is used to format packets of 48 bytes. The encryption is done after the packetizer. The rest of the process is exactly the same as for video transmission.
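The packetizer step can be sketched as follows. Whether the real packetizer zero-pads the final short packet or buffers across frames is not specified in the text; this sketch assumes zero-padding.

```java
import java.util.ArrayList;
import java.util.List;

public class G723PacketizerSketch {
    static final int PACKET_SIZE = 48; // fixed packet size used by the packetizer above

    // Group encoder output into fixed 48-byte packets (zero-padding the last one).
    static List<byte[]> packetize(byte[] encoded) {
        List<byte[]> packets = new ArrayList<>();
        for (int off = 0; off < encoded.length; off += PACKET_SIZE) {
            byte[] pkt = new byte[PACKET_SIZE];
            int n = Math.min(PACKET_SIZE, encoded.length - off);
            System.arraycopy(encoded, off, pkt, 0, n);
            packets.add(pkt);
        }
        return packets;
    }

    public static void main(String[] args) {
        byte[] encoded = new byte[100];      // stand-in for G.723 encoder output
        List<byte[]> packets = packetize(encoded);
        System.out.println(packets.size());  // prints 3 (48 + 48 + padded 4-byte tail)
    }
}
```

Grouping the encoder's tiny frames into 48-byte packets amortizes the fixed RTP/UDP/IP header overhead, which would otherwise dwarf a 6-7 kbps voice payload.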

Figure: Audio Transmitter.

Audio/Video Receiver:

Video Receiver: At the receiver side we get an encrypted H.263 frame from the RTP manager. It is decrypted using the Triple-DES decrypter and then decoded back into YUV format using Sun's implementation of the H.263 decoder. The YUV frame is converted into an RGB frame, which is then displayed in the receiver's window using the direct video renderer.

Figure: Video Receiver.

Audio Receiver: It is the same as the video receiver, except that it uses the G.723 decoder and a sound renderer.

Figure: Audio Receiver.

Client-Server Architecture:

The system includes a centralized server and distributed clients. Distribution of audio and video data between the client nodes is done using point-to-point connections between clients. The server manages the connections between the clients. The server is also responsible for distributing the session key for encryption of the audio/video streams and for controlling the QoS.

Figure: Client-server authentication.

Functions of the server:

Registration of users: Each user must be registered at the server in order to take part in a conference. At the time of registration the user provides his or her information to the server along with his/her public key. The user gets a unique user-id and the server's public key. These public keys will be used at the time of authentication.

Authentication of users: When a user joins a conference, the server and the user both authenticate each other using their private-public key pairs.

Distribution of the session key: After authenticating a user, the server sends the session key to the client application running on the user's side in a secure manner. This session key is common to all users taking part in a particular conference.

Maintaining the state of the conference: Whenever a user joins or leaves a conference, the server informs the other users of that conference and provides them with the information necessary to maintain the state of the conference.

Controlling QoS: The server is also responsible for controlling the quality of service (QoS) of the audio/video data exchanged between the clients. In order to do that, the server gets feedback from the clients about every media stream they are receiving. For each client, the server maintains QoS statistics from the feedback of all receivers of that client, and whenever required it sends QoS control signals to the client to maintain the quality of the media stream being sent by that client.
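The server's feedback loop can be sketched as follows. The loss threshold, the halving policy and the 64 kbps floor are illustrative assumptions; the text only says the server sends control signals based on receiver feedback.

```java
public class QosControllerSketch {
    static final double LOSS_THRESHOLD = 0.05; // assumed: act above 5% packet loss

    // Given all receivers' loss reports for one sender's stream, decide the
    // control signal: return the new target bit rate in kbps.
    static int control(int currentKbps, double[] lossReports) {
        double worst = 0;
        for (double loss : lossReports) worst = Math.max(worst, loss);
        if (worst > LOSS_THRESHOLD) {
            return Math.max(64, currentKbps / 2); // halve bit rate, floor at 64 kbps
        }
        return currentKbps;                        // no change needed
    }

    public static void main(String[] args) {
        // Two receivers report light loss, one reports heavy loss.
        int next = control(384, new double[]{0.01, 0.02, 0.12});
        System.out.println(next + " kbps"); // prints 192 kbps
    }
}
```

Driving the decision from the worst receiver matches the text's design: the sender's bit rate must suit every point-to-point connection it feeds.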

Functions of the client:

Session setup: The client application interacts with the server to get the session key and information about the other users in order to set up a conferencing session.

Capturing the audio/video streams: The client application is responsible for capturing the user's real-time audio and video streams from the capture devices.

Compression and encryption of the audio/video streams: The client compresses the audio and video streams to reduce the bandwidth requirements. It also encrypts each packet of the audio/video stream before transmission, using the session key received from the server.

Creation of RTP sessions: The client application creates RTP sessions for transmitting the user's real-time audio/video streams to the other users taking part in the conference. Two separate sessions are created, one for audio and one for video.

Opening RTP sessions: The client application opens an RTP session for each user whose audio/video streams the local user wants to receive.
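
The document does not name an RTP implementation, but the packets carried by these sessions follow RFC 3550. As an illustration, this sketch packs the fixed 12-byte RTP header; the payload-type value and packet contents are made up for the example:

```python
import struct

RTP_VERSION = 2

def rtp_packet(payload, seq, timestamp, ssrc, payload_type):
    """Build a minimal RTP packet: 12-byte fixed header (RFC 3550) + payload."""
    b0 = RTP_VERSION << 6          # version=2, padding=0, extension=0, CSRC count=0
    b1 = payload_type & 0x7F       # marker bit 0, 7-bit payload type
    header = struct.pack("!BBHII", b0, b1, seq & 0xFFFF,
                         timestamp & 0xFFFFFFFF, ssrc & 0xFFFFFFFF)
    return header + payload

# The audio and video sessions would use distinct payload types and SSRCs.
pkt = rtp_packet(b"\x00" * 160, seq=1, timestamp=160, ssrc=0x1234, payload_type=0)
print(len(pkt))  # 12-byte header + 160-byte payload = 172
```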

Decryption and decompression: Both the audio and video streams of each user are decrypted and decompressed to recover the streams in their original form.

Rendering the audio/video stream: Finally, the incoming audio and video streams are rendered and sent to the speakers and the display unit respectively.

Maintaining QoS: The client application monitors the incoming audio/video streams and sends feedback to the server. It also adjusts the parameters of the media streams it is transmitting whenever it receives QoS controls from the server, for example by reducing the bit rate.
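
The feedback a client sends could, for instance, be a packet-loss estimate derived from gaps in the RTP sequence numbers it has received. This sketch assumes no reordering or sequence-number wraparound, and the feedback format is an assumption:

```python
def loss_fraction(received_seqs):
    """Estimate packet-loss fraction from the RTP sequence numbers received.
    Assumes no reordering or wraparound -- a simplification for illustration."""
    if not received_seqs:
        return 0.0
    expected = max(received_seqs) - min(received_seqs) + 1
    return 1.0 - len(set(received_seqs)) / expected

# Sequence numbers 3 and 7 were never received: 2 lost out of 8 expected.
feedback = {"stream": "alice/video", "loss": loss_fraction([1, 2, 4, 5, 6, 8])}
print(feedback["loss"])  # 0.25
```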

Security protocols:

The system uses the RSA public-key algorithm (with a 2048-bit key) for authentication of users at the server, and the Triple-DES symmetric-key algorithm for encryption of the audio/video streams by the clients before transmission. The symmetric key (called the session key) is generated by the server at the start of every new conference and is distributed to the clients, so every client taking part in a particular conference holds the same session key. Both encryption and decryption are done using this key.
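
The per-packet symmetric encryption step can be illustrated as follows. Since Python's standard library provides no Triple-DES, this sketch substitutes a SHA-256 counter keystream purely to show the flow; in the described system, Triple-DES with the distributed session key performs this role:

```python
import hashlib

def keystream_xor(session_key: bytes, nonce: int, data: bytes) -> bytes:
    """Stand-in stream cipher: XOR data with a SHA-256 counter keystream.
    Illustrative only -- the real system uses Triple-DES with the session key."""
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        block = hashlib.sha256(
            session_key + nonce.to_bytes(8, "big") + counter.to_bytes(8, "big")
        ).digest()
        out.extend(block)
        counter += 1
    return bytes(a ^ b for a, b in zip(data, out))

session_key = b"shared-by-all-conference-members"
packet = b"compressed audio frame"
ct = keystream_xor(session_key, nonce=1, data=packet)
# Symmetric: applying the same key and nonce again recovers the packet.
assert keystream_xor(session_key, nonce=1, data=ct) == packet
```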

Registration of the users:

Every user must be registered at the server before taking part in any communication.

Since there is no secure channel for passing the user's authentic information to the server over the network, it is assumed that the user comes to the server room in person and registration is done in the presence of the server's administrator. The user is required to generate an RSA key pair on his/her own computer and bring the public part of the key pair to the server. At the time of registration the user submits his/her public key and receives a unique user id and the server's public key.

Authentication and key distribution:

In the first message, a timestamp is used to guarantee the freshness of the user's request and so guard against replay attacks. challenge1 and challenge2 are long random numbers freshly generated by the client and the server respectively. After getting challenge1 back from the server, the client is assured of the server's authenticity, because nobody other than the server can extract challenge1 from the first message. Similarly, the server is assured of the client's authenticity after getting challenge2 back. After the first three messages, the client and the server have mutually authenticated each other.

Authentication protocol

In the fourth message the server sends the session key to the user. The user's challenge (challenge1) is included in the fourth message to prevent replay attacks. Every subsequent message from this client to the server includes challenge2, and every subsequent message from the server to this client includes challenge1. Because these random numbers are freshly generated for the current session, they assure both the client and the server that a message containing them cannot be a copy of a message from some previous session, and hence they prevent replay attacks.
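
The four-message exchange above can be sketched with textbook RSA. The tiny fixed primes and small integer challenges below are toy stand-ins for the 2048-bit keys and long random numbers of the actual protocol, and the timestamp in the first message is omitted for brevity:

```python
import random

def keypair(p, q, e=17):
    # Textbook RSA with fixed tiny primes -- a stand-in for 2048-bit RSA.
    n = p * q
    d = pow(e, -1, (p - 1) * (q - 1))
    return (e, n), (d, n)          # (public key, private key)

def enc(pub, m):
    e, n = pub
    return pow(m, e, n)

def dec(prv, c):
    d, n = prv
    return pow(c, d, n)

srv_pub, srv_prv = keypair(61, 53)
cli_pub, cli_prv = keypair(67, 71)

# Msg 1: client -> server, challenge1 under the server's public key.
challenge1 = random.randrange(2, 3000)
m1 = enc(srv_pub, challenge1)

# Msg 2: server -> client, recovered challenge1 plus its own challenge2.
challenge2 = random.randrange(2, 3000)
m2 = (enc(cli_pub, dec(srv_prv, m1)), enc(cli_pub, challenge2))

# Client checks challenge1 came back: only the server could have read msg 1.
assert dec(cli_prv, m2[0]) == challenge1

# Msg 3: client -> server, challenge2 back; the server checks it likewise.
m3 = enc(srv_pub, dec(cli_prv, m2[1]))
assert dec(srv_prv, m3) == challenge2

# Msg 4: server -> client, session key bound to challenge1 (replay protection).
session_key = 42
m4 = (enc(cli_pub, session_key), challenge1)
assert dec(cli_prv, m4[0]) == session_key
```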

Maintaining the state of the conference:

Creating or joining a conference:

Every registered user can create a new conference at any time, and multiple conferences can run at the same time. The steps for creating a new conference are:

1. The client program sends a connect request to the server, and the client and server authenticate each other.

2. The server sends the list of conferences already running. The user can select a conference to join, or choose to create a new conference. The client sends the user's request to the server.

3. If the request is for a new conference, the server starts a new conference thread and adds that user to the conference. The server then generates a new session key for this conference and sends a secure copy of this session key and a unique port base (every client in a particular conference is assigned a unique port base for sending audio and video streams) to the client.

4. If the request is for joining an existing conference, the server sends the session key of the conference and a port base to the client, and then sends the following information:

a. If the user wants to join in active mode:

i. The server sends the information (name, IP, port base) of all other participants in the conference to this client.

ii. The server sends the information of this participant to all other clients.

b. If the user wants to join in passive mode (listener only):

i. The server sends the information of all other active participants in the conference to this client.

ii. The server sends the information of this participant to all other active clients.

Leaving the conference

The client sends a leave request to the server. The server disconnects the connection to this client and does one of the following:

1. If this user was the last user in the conference, the server closes the conference and removes it from the list of conferences.

2. If the user was an active participant, the server informs all other participants in the conference that this client has left.

3. If the user was a passive participant, the server informs all other active participants in the conference that this client has left.
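
The join and leave bookkeeping described above can be condensed into a small server-side state machine. The class and field names, the port-base spacing, and the 24-byte (Triple-DES-sized) session key are illustrative assumptions:

```python
import os

class ConferenceServer:
    def __init__(self):
        self.conferences = {}      # name -> {"key", "active", "passive"}
        self.next_port_base = 5000

    def join(self, conf, user, active=True):
        """Create the conference if needed; return (session_key, port_base)."""
        c = self.conferences.setdefault(
            conf, {"key": os.urandom(24), "active": {}, "passive": set()})
        port_base = self.next_port_base
        self.next_port_base += 10   # assumed spacing between per-user port bases
        if active:
            c["active"][user] = port_base
            # Real server: send this user's (name, IP, port base) to everyone,
            # and everyone's info back to this user.
        else:
            c["passive"].add(user)  # listeners get info of active participants only
        return c["key"], port_base

    def leave(self, conf, user):
        c = self.conferences[conf]
        c["active"].pop(user, None)
        c["passive"].discard(user)
        if not c["active"] and not c["passive"]:
            del self.conferences[conf]   # last user out closes the conference

s = ConferenceServer()
key, base = s.join("demo", "alice")
s.join("demo", "bob", active=False)
s.leave("demo", "alice")
s.leave("demo", "bob")
print("demo" in s.conferences)  # False -- conference closed after last leave
```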

Problems in video conferencing:

Some observers argue that three outstanding issues have prevented videoconferencing from becoming a standard form of communication, despite the ubiquity of videoconferencing-capable systems. These issues are:

1. Eye contact:

Eye contact plays a large role in conversational turn-taking, perceived attention and intent, and other aspects of group communication. While traditional telephone conversations give no eye contact cues, many videoconferencing systems are arguably worse in that they give the incorrect impression that the remote interlocutor is avoiding eye contact. Some telepresence systems have cameras located in the screens, which reduces the amount of parallax observed by the users. This issue is also being addressed through research that generates a synthetic image with eye contact using stereo reconstruction. Telcordia Technologies, formerly Bell Communications Research, owns a patent for eye-to-eye videoconferencing using rear projection screens with the video camera behind them, evolved from a 1960s U.S. military system that provided videoconferencing services between the White House and various other government and military facilities. This technique eliminates the need for special cameras or image processing.

2. Appearance consciousness:

A second psychological problem with videoconferencing is being on camera, with the video stream possibly even being recorded. The burden of presenting an acceptable on-screen appearance is not present in audio-only communication. Early studies by Alphonse Chapanis found that the addition of video actually impaired communication, possibly because of the consciousness of being on camera.

3. Signal latency:

The transport of digital signals through many processing steps takes time. In a telecommunicated conversation, an increased latency larger than about 150-300 ms becomes noticeable and is soon perceived as unnatural and distracting. Therefore, next to a stable, large bandwidth, a small total round-trip time is another major technical requirement of the communication channel for interactive videoconferencing. The issue of eye contact may be solved with advancing technology, and presumably the issue of appearance consciousness will fade as people become accustomed to videoconferencing.

Social and institutional impact:

Impact on the general public:

High-speed Internet connectivity has become more widely available at a reasonable cost, and the cost of video capture and display technology has decreased. Consequently, personal videoconferencing systems based on a webcam, a personal computer, software compression and broadband Internet connectivity have become affordable to the general public. The hardware used for this technology has also continued to improve in quality while prices have dropped dramatically. The availability of freeware (often as part of chat programs) has made software-based videoconferencing accessible to many.

For over a century, futurists have envisioned a future where telephone conversations take place as actual face-to-face encounters, with video as well as audio. Sometimes it is simply not possible or practical to have face-to-face meetings with two or more people. Sometimes a telephone conversation or conference call is adequate. Other times, e-mail exchanges are adequate. Videoconferencing adds another alternative, and can be considered when:

a live conversation is needed;

visual information is an important component of the conversation;

the parties of the conversation can't physically come to the same location;

the expense or time of travel is a consideration.

Deaf, hard-of-hearing and mute individuals have a particular interest in the development of affordable high-quality videoconferencing as a means of communicating with each other in sign language. Unlike Video Relay Service, which is intended to support communication between a caller using sign language and another party using spoken language, videoconferencing can be used directly between two signers.

Mass adoption and use of videoconferencing is still relatively low, with the following often claimed as causes:

Complexity of systems: Most users are not technical and want a simple interface. In hardware systems an unplugged cord or a flat battery in a remote control is seen as failure, contributing to perceived unreliability which drives users back to traditional meetings. Successful systems are backed by support teams who can pro-actively support and provide fast assistance when required.

Perceived lack of interoperability: not all systems can readily interconnect, for example ISDN and IP systems require a gateway. Popular software solutions cannot easily connect to hardware systems. Some systems use different standards, features and qualities which can require additional configuration when connecting to dissimilar systems.

Bandwidth and quality of service: In some countries it is difficult or expensive to get a high-quality connection that is fast enough for good-quality videoconferencing. Technologies such as ADSL have limited upload speeds and cannot upload and download simultaneously at full speed. As Internet speeds increase, higher-quality and high-definition videoconferencing will become more readily available.

Expense of commercial systems - a well-designed system requires a specially designed room and can cost hundreds of thousands of dollars to fit out the room with codecs, integration equipment and furniture.

Participants being self-conscious about being on camera, especially new users and older generations.

For these reasons many hardware systems are often used for internal corporate use only, as they are less likely to run into problems and lose a sale. An alternative is companies that hire out videoconferencing-equipped meeting rooms in cities around the world. Customers simply book the rooms and turn up for the meeting; everything else is arranged, and support is readily available if anything should go wrong.

Impact on sign language communications:

One of the first demonstrations of the ability for telecommunications to help sign language users communicate with each other occurred when AT&T's videophone (trademarked as the "Picturephone") was introduced to the public at the 1964 New York World's Fair: two deaf users were able to communicate freely with each other between the fair and another city. Various other organizations, including British Telecom's Martlesham facility and several universities, have also conducted extensive research on signing via video telephony. The use of sign language via video telephony was hampered for many years by the difficulty of using it over regular analogue phone lines, coupled with the high cost of better-quality data phone lines; these factors largely disappeared with the advent of high-speed ISDN and IP Internet services in the last decade of the 20th century.

21st century improvements:

Significant improvements in video call quality of service for the deaf occurred in the United States in 2003, when Sorenson Media Inc. (formerly Sorenson Vision Inc.), a video compression software coding company, developed its VP-100 model stand-alone videophone specifically for the deaf community. It was designed to output its video to the user's television in order to lower the cost of acquisition, and to offer remote control and a powerful video compression codec for unequaled video quality and ease of use with video relay services. Favorable reviews quickly led to its popular usage at educational facilities for the deaf, and from there to the greater deaf community.

Coupled with similar high-quality videophones introduced by other electronics manufacturers, the availability of high-speed Internet, and sponsored video relay services authorized by the U.S. Federal Communications Commission in 2002, VRS services for the deaf underwent rapid growth in that country.

Present day usage:

Using such video equipment, the deaf, hard-of-hearing and speech-impaired can communicate between themselves and with hearing individuals using sign language. The United States and several other countries compensate companies to provide 'Video Relay Services' (VRS). Telecommunication equipment can be used to talk to others via a sign language interpreter, who uses a conventional telephone at the same time to communicate with the deaf person's party. Video equipment is also used for on-site sign language translation via Video Remote Interpreting (VRI). The relatively low cost and widespread availability of 3G mobile phone technology with video calling capabilities have given deaf and speech-impaired users a greater ability to communicate with the same ease as others. Some wireless operators have even started free sign language gateways.

Sign language interpretation services via VRS or VRI are useful in the present day where one of the parties is deaf, hard-of-hearing or speech-impaired (mute). In such cases the interpretation flow is normally within the same principal language, such as French Sign Language (LSF) to spoken French, Spanish Sign Language (LSE) to spoken Spanish, British Sign Language (BSL) to spoken English, and American Sign Language (ASL) also to spoken English (since BSL and ASL are completely distinct from each other), and so on.

A deaf or hard-of-hearing person at his workplace using a VRS to communicate with a hearing person in London.

Multilingual sign language interpreters, who can also translate across principal languages (such as to and from SSL, to and from spoken English), are also available, albeit less frequently. Such activities involve considerable effort on the part of the translator, since sign languages are distinct natural languages with their own construction, semantics and syntax, different from the aural version of the same principal language.

With video interpreting, sign language interpreters work remotely with live video and audio feeds, so that the interpreter can see the deaf or mute party and converse with the hearing party, and vice versa. Much like telephone interpreting, video interpreting can be used in situations in which no on-site interpreters are available. However, video interpreting cannot be used in situations in which all parties are speaking via telephone alone. VRS and VRI interpretation requires all parties to have the necessary equipment. Some advanced equipment enables interpreters to control the video camera remotely, in order to zoom in and out or to point the camera toward the party that is signing.

A Video Interpreter (V.I.) assisting an on-screen client.

Impact on education:

Videoconferencing provides students with the opportunity to learn by participating in two-way communication forums. Furthermore, teachers and lecturers worldwide can be brought to remote or otherwise isolated educational facilities. Students from diverse communities and backgrounds can come together to learn about one another, although language barriers will continue to persist. Such students are able to explore, communicate, analyze and share information and ideas with one another. Through videoconferencing, students can visit other parts of the world to speak with their peers, and visit museums and educational facilities. Such virtual field trips can provide enriched learning opportunities to students, especially those in geographically isolated locations, and to the economically disadvantaged. Small schools can use these technologies to pool resources and provide courses, such as foreign languages, which could not otherwise be offered.

A few examples of benefits that videoconferencing can provide in campus environments include:

faculty members keeping in touch with classes while attending conferences;

guest lecturers brought into classes from other institutions;

researchers collaborating with colleagues at other institutions on a regular basis without loss of time due to travel;

schools with multiple campuses collaborating and sharing professors;

faculty members participating in thesis defenses at other institutions;

administrators on tight schedules collaborating on budget preparation from different parts of campus;

a faculty committee auditioning scholarship candidates;

researchers answering questions about grant proposals from agencies or review committees;

student interviews with employers in other cities; and

tele-seminars.

Impact on medicine and health:

Videoconferencing is a highly useful technology for real-time telemedicine and tele-nursing applications, such as diagnosis, consulting, and transmission of medical images. With videoconferencing, patients may contact nurses and physicians in emergency or routine situations; physicians and other paramedical professionals can discuss cases across large distances. Rural areas can use this technology for diagnostic purposes, thus saving lives and making more efficient use of health care money. For example, a rural medical center in Ohio, United States, used videoconferencing to successfully cut the number of transfers of sick infants to a hospital 70 miles (110 km) away, transfers which had previously cost nearly $10,000 each.

Special peripherals such as microscopes fitted with digital cameras, video endoscopes, medical ultrasound imaging devices, otoscopes, etc., can be used in conjunction with videoconferencing equipment to transmit data about a patient. Recent developments in mobile collaboration on hand-held mobile devices have also extended videoconferencing capabilities to locations previously unreachable, such as a remote community, a long-term care facility, or a patient's home.

Impact on business:

Videoconferencing can enable individuals in distant locations to participate in meetings on short notice, with time and money savings. Technology such as VoIP can be used in conjunction with desktop videoconferencing to enable low-cost face-to-face business meetings without leaving the desk, especially for businesses with widespread offices. The technology is also used for telecommuting, in which employees work from home. One research report based on a sampling of 1,800 corporate employees showed that, as of June 2010, 54% of the respondents with access to videoconferencing used it all of the time or frequently.

Videoconferencing is also currently being introduced on online networking websites, in order to help businesses form profitable relationships quickly and efficiently without leaving their place of work. This has been leveraged by banks to connect busy banking professionals with customers in various locations using video banking technology.

Videoconferencing on hand-held mobile devices (mobile collaboration technology) is being used in industries such as manufacturing, energy, healthcare, insurance, government and public safety. Live, visual interaction removes traditional restrictions of distance and time, often in locations previously unreachable, such as a manufacturing plant floor a continent away.

Although videoconferencing has frequently proven its value, research has shown that some non-managerial employees prefer not to use it due to several factors, including anxiety. Some such anxieties can be avoided if managers use the technology as part of the normal course of business.

Researchers also find that attendees of business and medical videoconferences must work harder to interpret information delivered during a conference than they would if they attended face-to-face. They recommend that those coordinating videoconferences make adjustments to their conferencing procedures and equipment.

Impact on law:

In the United States, videoconferencing has allowed testimony to be used for an individual who is unable or prefers not to attend the physical legal setting, or would be subjected to severe psychological stress in doing so. However, there is controversy over the use of testimony by foreign or unavailable witnesses via video transmission, regarding possible violation of the Confrontation Clause of the Sixth Amendment of the U.S. Constitution.

In a military investigation in the State of North Carolina, Afghan witnesses have testified via videoconferencing.

In Hall County, Georgia, videoconferencing systems are used for initial court appearances. The systems link jails with court rooms, reducing the expenses and security risks of transporting prisoners to the courtroom.

The U.S. Social Security Administration (SSA), which oversees the largest administrative judicial system in the world, under its Office of Disability Adjudication and Review (ODAR), has made extensive use of video teleconferencing (VTC) to conduct hearings at remote locations. In FY 2009, SSA conducted 86,320 VTC hearings, a 55% increase over FY 2008. In August 2010, the SSA opened its fifth and largest video-only National Hearing Center (NHC), in St. Louis, Missouri. This continues SSA's effort to use video hearings as a means to clear its substantial hearing backlog. Since 2007, the SSA has also established NHCs in Albuquerque, New Mexico; Baltimore, Maryland; Falls Church, Virginia; and Chicago, Illinois.

Impact on media relations:

The concept of press videoconferencing was developed in October 2007 by the Pan African Press Association (APPA), a Paris, France-based non-governmental organization, to allow African journalists to participate in international press conferences on developmental and good governance issues.

Press videoconferencing permits international press conferences via videoconferencing over the Internet. Journalists can participate in an international press conference from any location, without leaving their offices or countries. They need only be seated at a computer connected to the Internet in order to ask their questions to the speaker.

In 2004, the International Monetary Fund introduced the Online Media Briefing Center, a password-protected site available only to professional journalists. The site enables the IMF to present press briefings globally and facilitates direct questions to briefers from the press. The site has been copied by other international organizations since its inception. More than 4,000 journalists worldwide are currently registered with the IMF.

Future of video conferencing:

The most important development in videoconferencing will be the result of increased bandwidth on Internet-based communications. To date, Internet videoconferencing products such as Microsoft's NetMeeting and White Pine's CU-SeeMe have allowed restricted communications over the Internet, with restricted quality video and audio. However, this is about to change. These products, and an increasing number of new ones, are about to benefit from the increased bandwidth that ADSL, satellite, optical and radio connections are going to offer. The net result will be the integration of the two solutions.

These developments will offer educationalists a wealth of new opportunities, including:

Lower connection charges: the costs of ISDN connection charges will disappear as connections move to the Internet. At worst, these will be local telephone charges. This will dramatically reduce on-line charges, especially for international videoconferences where, using ISDN, two to six international calls are charged. As the Internet offers global connectivity, users will no longer have to worry about which country they connect with, as the connection charge will be uniform.

Cost-effective multipoint conferencing: multipoint videoconferencing will be as cost-effective as point-to-point using Internet reflector sites. Education authorities should now be assessing the future development and hosting of such reflectors, with a view to offering password-protected, safe videoconferencing areas to education.

Distance learning: the potential of distance learning is enormous as bandwidth increases, offering schools the opportunity to offer elective or required courses for which certified teachers are not available, or in situations where student numbers are not sufficient to hire a full-time teacher on one campus.

PC integration: videoconference solutions will be easily integrated into existing PCs, allowing communication across an existing LAN, WAN or the Internet. This will allow organizations to implement a flexible solution for internal and external communications using the same equipment.

Mobile integration: the development of GPRS and UMTS will further enhance communications by allowing the integration of mobile telephony into the system. Groups on a field trip will be able to videoconference with students back at school or with any group of learners anywhere in the world. This flexibility, coupled with the advantages of application sharing and collaborative software, will offer an unprecedented degree of communication.

Thanks to the new bandwidth offered by these technologies, the WWW will change dramatically in communication style. Video and multimedia are becoming more widespread on the Internet, and video as a medium will become more commonplace as developers use it in their sites. Web TV and radio will be areas where schools and educators can disseminate information. Whoever said video is dead did not read the small print.

Summary:

Video conferencing applications can be classified according to a matrix of network type and number of participants. The network types are primarily LAN (Local Area Network, i.e., Internet connectivity) and ISDN (phone system); the participant number is person-to-person or group. Often the determination of whether to videoconference with a remote site must be based on whether that site uses H.323 (LAN) or not. ISDN is still very prevalent at commercial sites and even in some educational sites.

While desktop videoconferencing itself may not be sufficiently mature to warrant investment of large amounts of resources for wide deployment, desktop collaboration is evolving as a valuable tool for the workplace, and desktop video may still be very appropriate in some cases. Anyone considering desktop videoconferencing needs to be aware of the current shortfalls imposed by network bandwidth restrictions and the impact of other network usage. In general, a site that uses Internet2 will not experience the same network congestion as a site on the commercial Internet. This is an area that is continuing to evolve. Early adopters may gain valuable experience, but may also need to make equipment and hardware upgrades regularly to take advantage of advances in the field.

Videoconferencing is now growing very rapidly. It saves a significant amount of money in terms of both travel and time. During the last decade, the availability of equipment and Internet connections at reasonable cost has made it affordable to a much wider group of users. It is now widely used in education, in collaborative work between researchers and business communities, in telemedicine and in many other fields. On the other hand, recent terrorist activities have forced many organizations and government agencies to re-examine their existing security policies and mechanisms. Government agencies must now carefully safeguard their sensitive data transmissions, for example the voice, video and other data exchanged during videoconference meetings.

References:

www.wikipedia.org
www.global-leap.org/newspaper/vc_case_studies_summary_report.pdf
www.webex.co.in
www.webopedia.com/TERM/V/videoconferencing.html
www-td.fnal.gov/atwork/conference/ib2_tutorial/Tutorial.doc
www.facweb.iitkgp.ernet.in/~jay/slides/vid_conf_ws.ppt

