
In the name of Allah, the Most Gracious, the Most Merciful

Sudan University of Science and Technology

Post Graduate Studies

Department of Electronics Engineering

NETWORK CODING APPROACHES FOR QOS

Prepared by: Nahla Awad ELkarim

Supervised by: Dr. Ibrahim Khider

Dec 2010

- Abstract:

At present, video delivery over wired or wireless packet-switched networks has been motivating a series of research efforts into video encoding, quality of service (QoS), multimedia communication, and network service models. However, most researchers lack a suitable video delivery simulation platform that is close to a real video delivery system. Different from other existing simulation mechanisms, a simulation mechanism is proposed in this paper, based on an existing simulation tool, NS-2. With the proposed mechanism, the video encoder/decoder component, the application-layer QoS control component, and the simulated network are fully integrated. This guarantees that the simulation procedures outside the simulated network, such as video encoding/decoding, do not affect simulation results. Finally, experimental results consolidate the fact that this mechanism is useful for video researchers when they evaluate new encoding algorithms and transmission protocols for video delivery.

- Keywords: QoS, video coding, PSNR.

- Acknowledgement: I would like to express my gratitude to the supervisor of this project, Dr. Ibrahim Khider, who gave me much valuable guidance and many comments for this work. Special thanks to my parents for supporting my studies. In addition, many thanks to Dr. Jacquline John and Eng. Mohammed Ibrahim Abodiya at Sudatel Company (Sudani). I would like to thank my friend Samah, and all my colleagues in the telecommunications department of the College of Graduate Studies at the Sudan University of Science and Technology, for their help. Last but not least, I would like to show my warm thanks to all my friends inside and outside of Sudan.

CONTENTS

Introduction
Video Coding
Quality of Service (QoS)
Framework and Design
Simulations and Results
Conclusion and Further Research
References

1-INTRODUCTION:

With the development of digital video delivery and packet-switched networks, new algorithms and protocols have been proposed and improved, including those for video encoding, application-layer QoS control or FEC protection, continuous media distribution services, streaming servers, media synchronization mechanisms, and video transmission [1], [2]. Researchers usually conduct many simulation experiments to evaluate the performance of the new algorithms and protocols during their research. It is difficult, however, to handle all the parts of the whole complicated simulation environment for video delivery, namely the video codec, the QoS control, and the network transmission. Most researchers usually have to adopt a trade-off methodology. For example, to verify the performance of video error resilience algorithms, they use simple packet loss models, such as a uniform distribution model, to stand in for packet loss resulting from traffic congestion or bit errors. On the other hand, people who are interested in video transmission protocols may use a fake video data generator rather than a real constant bit-rate (CBR) or variable bit-rate (VBR) video bit stream. That is to say, these simulation experiments are simplified versions of a real video delivery system. Moreover, it is also noticeable that the PSNR of the reconstructed video at the receiver cannot be provided, although it is important for video streaming research. Therefore, the construction of a video delivery simulation framework is meaningful and helpful for researchers.

Usually, a video delivery simulation framework contains a video encoder (a video data sender), a simulated network, a video decoder (a video data receiver), and other communication components. We present here its application to MPEG-4 as an example. EvalVid is targeted at researchers who want to evaluate their network designs or setups in terms of user-perceived video quality. The tool-set is publicly available and can be combined with NS-2, the popular network simulator.

2-Video Coding: Source Encoding:

Before transmitting a source image or video sequence to the destination, a video codec, such as H.264/AVC [3], is used to encode the input into a bitstream.

Scalability provided by H.264/AVC:

The latest video coding standard, H.264/AVC [3], approved by the ITU-T as Recommendation H.264 and published as International Standard MPEG-4 Part 10 Advanced Video Coding (AVC), provides enhanced compression performance and a network-friendly video representation for conversational applications (e.g., video telephony). The codec supports a 'data partitioning' mechanism, which allows the syntax of each video frame slice to be separated into up to three different data partitions for transmission, depending on the categorization and relevance of the syntax elements. Since the more important information is encoded in the base data partition, that partition has higher priority [3]. Thus, decoding the base partition results in basic video quality, while decoding the enhanced data partitions allows the refinement of the received video quality, depending on factors such as prevailing channel conditions and network congestion.

However, H.264/AVC only provides limited scalability through the above-mentioned data partitioning mechanism. The codec is inadequate for supporting certain real-time video services, such as prioritized multi-stream video coding and transmission. Therefore, in order to provide better scalability for supporting real-time video applications, Scalable Video Coding (SVC) is currently being developed as an extension to the existing H.264/AVC. The following section specifies the different scalabilities provided by the SVC extension to the existing H.264/AVC [3]. The ITU-T Video Coding Experts Group and the Moving Picture Experts Group (MPEG) decided to work together as a Joint Video Team (JVT), and to create a single technical design, the hybrid video coding standard H.264/MPEG-4 Advanced Video Coding (AVC), as a new part of the MPEG-4 standard. The syntax of H.264/AVC typically permits a significant reduction in bit rate compared to all previous standards, such as ITU-T Rec. H.263, at the same quality level. Furthermore, the JVT agreed to finalize the Scalable Video Coding (SVC) project as an amendment of the H.264/MPEG-4 AVC standard, for which the scalable extension of H.264/MPEG-4 AVC was selected as the first working draft. We refer to scalability as a functionality that allows the removal of parts of the bit-stream while achieving a reasonable coding efficiency of the decoded video at reduced temporal, Signal-to-Noise Ratio (SNR), or spatial resolution. Each bitstream consists of a base layer and one or more nested enhancement layers. The base layer includes the visually more important data from the perspective of the human visual system. SVC can be much more preferable due to the flexibility of video stream control over link channels with highly fluctuating capacity. Therefore, there exists a break-even point of transmission efficiency between the two coding schemes.

3-Quality of Service (QoS):

- What is QoS?

Quality of Service (QoS) in cellular networks is defined as the capability of the cellular service providers to provide a satisfactory service, which includes voice quality, signal strength, low call blocking and dropping probability, high data rates for multimedia and data applications, etc. For network-based services, QoS depends on the following factors:

- Throughput: the rate at which packets go through the network. A higher rate is always preferred.

- Delay: the time a packet takes to travel from one end to the other. Minimum delay is always preferred.

- Packet Loss Rate: the rate at which packets are lost. This should be as low as possible.

- Packet Error Rate: the rate of errors present in a packet due to corrupted bits. This should be as low as possible [4].
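The sketch below illustrates how these four metrics could be computed from simple per-packet logs. It is a minimal sketch: the record layout and field names are illustrative assumptions, not part of any particular tool.

```python
# Hypothetical per-packet log records: (packet_id, bytes, t_sent, t_recv, bit_errors).
# t_recv is None for lost packets. Field names are illustrative assumptions.

def qos_metrics(records, duration_s):
    received = [r for r in records if r[3] is not None]
    sent = len(records)
    # Throughput: delivered payload per unit time, in bits per second
    throughput_bps = sum(r[1] for r in received) * 8 / duration_s
    # Delay: mean one-way transit time of delivered packets
    mean_delay_s = sum(r[3] - r[2] for r in received) / len(received)
    # Packet loss rate: percentage of sent packets never delivered
    loss_rate = 100.0 * (sent - len(received)) / sent
    # Packet error rate: percentage of delivered packets with corrupted bits
    error_rate = 100.0 * sum(1 for r in received if r[4] > 0) / len(received)
    return throughput_bps, mean_delay_s, loss_rate, error_rate

records = [
    (1, 1024, 0.000, 0.031, 0),
    (2, 1024, 0.020, 0.058, 3),   # delivered, but with corrupted bits
    (3, 1024, 0.040, None, 0),    # lost
]
print(qos_metrics(records, duration_s=0.1))
```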

Classical QoS provisioning involves keeping particular groups of these performance metrics within certain limits, in order to offer the user reasonable quality levels. The problem with this approach is that in today's Internet, the heterogeneous features of current services make it difficult, sometimes even impossible, to clearly identify the relevant set of performance parameters for each case. Even more, the quality experienced by a user of the new multimedia services depends not only on network features but also on higher layers' characteristics [2] (multimedia coding and compression, nature of the content, etc.). In this sense, a final user may experience acceptable quality levels even in the presence of severe network degradation. These observations show that rating the quality of the new multimedia services from the network's side may no longer be effective. The user-perceived quality of service (PQoS) field addresses this problem by assessing the quality of a service as perceived by the end-user. The assessment of perceived quality in multimedia services can be performed by either subjective or objective methodologies. Subjective methods represent the most accurate metric, as they have a direct relation with the user's experience. These methods consist of evaluating the average opinion that a group of people assign to different audio and video sequences in controlled tests. Different recommendations standardize the most used subjective methods in audio and video.

In the network context, 'intrusive' means the injection of extra data (audio and/or video sequences) to perform the measurement. Intrusive methods are based on the comparison of two sequences, a reference sequence (the original) and a distorted sequence (i.e., the one modified during network transmission). This comparison is generally performed either in the time/space domain (simple sample comparison: mean square error (MSE), signal-to-noise ratio (SNR), or peak signal-to-noise ratio (PSNR)) or in the perception domain, using models of the human senses to improve results.

4-Framework and Design:

In Figure 1, the structure of the EvalVid framework is shown, and the interactions between the implemented tools and the data flows are symbolized. The following sections explain what can be calculated, how it is done, and which results can be obtained.

Fig. 1: Scheme of the evaluation framework.

Also in Figure 1, a complete transmission of a digital video is symbolized, from the recording at the source through encoding, packetization, transmission over the network, jitter reduction by the play-out buffer, and decoding, to display for the user. Furthermore, the points where data are tapped from the transmission flow are marked. This information is stored in various files, which are used to gather the desired results, e.g., loss rates, jitter, and video quality. A lot of information is required to calculate these values. The required data are, from the sender side:

- the raw uncompressed video;
- the encoded video;
- the time-stamp and type of every packet sent;

and from the receiver side:

- the time-stamp and type of every packet received;
- the reassembled encoded video (possibly erroneous);
- the raw uncompressed video to be displayed.
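As an illustration of how the per-packet part of this data might be represented, here is a minimal Python sketch; the field names and the one-line text layout are assumptions for illustration, not EvalVid's actual trace format.

```python
from dataclasses import dataclass

@dataclass
class PacketRecord:
    """One line of a (hypothetical) packet trace."""
    packet_id: int        # unique id, used to match sender and receiver traces
    frame_type: str       # 'I', 'P', 'B', 'S', or 'H' for generic headers
    timestamp_s: float    # send or receive time in seconds
    size_bytes: int

def parse_trace_line(line: str) -> PacketRecord:
    # Assumed layout: "<id> <type> <timestamp> <size>", e.g. "42 I 0.120 1024"
    pid, ftype, ts, size = line.split()
    return PacketRecord(int(pid), ftype, float(ts), int(size))

sender = [parse_trace_line(l) for l in ["1 I 0.000 1024", "2 P 0.040 512"]]
print(sender[0])
```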

Supported Functionalities:

In this section, the parameters calculated by the tools of EvalVid are described; formal definitions and references to deeper discussions of the matter, particularly for video quality assessment, are given.

Determination of Packet and Frame Loss:

Packet loss: Packet losses are usually calculated on the basis of packet identifiers. Consequently, the network black box has to provide unique packet IDs. This is not a problem for simulations, since unique IDs can be generated fairly easily. In measurements, packet IDs are often taken from IP, which provides a unique packet ID. The unique packet ID is also used to cancel the effect of reordering. In the context of video transmission, it is interesting not only how many packets got lost, but also which kind of data was in the lost packets. E.g., the MPEG-4 codec defines four different types of frames (I, P, B, S) and also some generic headers; for details see the MPEG-4 standard. Since it is very important for video transmissions which kind of data gets lost (or not), it is necessary to distinguish between the different kinds of packets. The evaluation of packet losses should therefore be done per type (frame type, header).

Packet loss is defined in Equation 1. It is expressedin percent.

Packet loss:  PL_T = 100\left(1 - \frac{n_T^{recv}}{n_T^{sent}}\right)  ………………… [1]

where T is the type of data in the packet (one of: all, header, I, P, B, S), n_T^{recv} is the number of packets of type T received, and n_T^{sent} is the number of packets of type T sent.
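A small sketch of this per-type calculation, assuming the sender and receiver traces have already been parsed into lists of records with packet_id and frame_type fields (as in the PacketRecord sketch above):

```python
def packet_loss_by_type(sender, receiver):
    """Type-based packet loss in percent, per Equation 1."""
    received_ids = {r.packet_id for r in receiver}
    types = {s.frame_type for s in sender} | {"all"}
    loss = {}
    for t in types:
        # Packets of this type sent, and the subset that actually arrived
        sent = [s for s in sender if t == "all" or s.frame_type == t]
        recv = [s for s in sent if s.packet_id in received_ids]
        loss[t] = 100.0 * (1 - len(recv) / len(sent))
    return loss
```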

Frame loss:

A video frame (actually a single coded image) can be relatively big, so it may be split over several packets. The frame loss rate can be derived from the packet loss rate (packet here always means IP packet). This process, however, depends somewhat on the capabilities of the actual video decoder in use, because some decoders can process a frame even if some parts are missing and some cannot. Furthermore, whether a frame can be decoded depends on which of its packets got lost: if the first packet is missing, the frame can almost never be decoded. Thus, the capabilities of certain decoders have to be taken into account in order to calculate the frame loss rate. It is calculated separately for each frame type.

Frame loss:  FL_T = 100\left(1 - \frac{N_T^{recv}}{N_T^{sent}}\right)  ………………………. [2]

where T is the type of data in the frame, N_T^{recv} is the number of frames of type T received (i.e., decodable), and N_T^{sent} is the number of frames of type T sent.
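The sketch below derives decodable frames from the packet trace using the simple rule stated above (a frame counts as lost if its first packet is lost). The frame_id field is a hypothetical addition to the earlier record layout, assumed here so packets can be grouped by frame.

```python
from collections import defaultdict

def frame_loss(sender_packets, received_ids):
    """Per-type frame loss (Equation 2): a frame is lost if its first packet is."""
    # Group the sender's packets by the frame they belong to,
    # assuming each record carries a frame_id (a hypothetical added field).
    frames = defaultdict(list)
    for p in sender_packets:
        frames[p.frame_id].append(p)
    sent = defaultdict(int)
    recv = defaultdict(int)
    for packets in frames.values():
        packets.sort(key=lambda p: p.packet_id)
        t = packets[0].frame_type
        sent[t] += 1
        if packets[0].packet_id in received_ids:  # first segment arrived
            recv[t] += 1
    return {t: 100.0 * (1 - recv[t] / sent[t]) for t in sent}
```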

Determination of Delay and Jitter:

In video transmission systems, not only the actual loss is important for the perceived video quality, but also the delay of frames and the variation of the delay, usually referred to as frame jitter. A digital video always consists of frames which have to be displayed at a constant rate; displaying a frame before or after its scheduled time degrades the perceived quality. This issue is addressed by so-called play-out buffers. These buffers have the purpose of absorbing the jitter introduced by network delivery delays. It is obvious that a big enough play-out buffer can compensate for any amount of jitter. In the extreme case, the buffer is as big as the entire video and displaying does not start until the last frame is received; this would eliminate any possible jitter at the cost of an additional delay of the entire transmission time. The other extreme would be a buffer capable of holding exactly one frame; in this case no jitter at all can be eliminated, but no additional delay is introduced. Sophisticated techniques have been developed for optimized play-out buffers dealing with this particular trade-off. These techniques are not within the scope of the described framework; the play-out buffer size is merely a parameter for the evaluation process, which currently restricts the framework to static play-out buffers. However, because of the integration of play-out buffer strategies into the evaluation process, the additional loss caused by play-out buffer over- or under-runs can be considered.
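To make this trade-off concrete, here is a minimal sketch of a static play-out buffer check: a frame that arrives after its play-out deadline counts as an additional loss. The frame rate, buffer delay, and variable names are illustrative assumptions.

```python
def late_frames(arrival_times_s, fps=25.0, buffer_delay_s=0.2):
    """Count frames missing their play-out deadline with a static buffer.

    Frame i is scheduled at start + buffer_delay + i/fps, where start is
    the arrival time of the first frame.
    """
    start = arrival_times_s[0]
    late = 0
    for i, t in enumerate(arrival_times_s):
        deadline = start + buffer_delay_s + i / fps
        if t > deadline:
            late += 1  # frame arrived too late to be displayed on time
    return late

# A larger buffer_delay_s absorbs more jitter at the cost of start-up delay:
arrivals = [0.00, 0.05, 0.30, 0.32, 0.45]
print(late_frames(arrivals, fps=25.0, buffer_delay_s=0.1))
```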

The formal definition of jitter as used in this paper is the variance of the inter-packet or inter-frame time. The "frame time" is determined by the time at which the last segment of a segmented frame is received.

Packet jitter:  j_P = \frac{1}{N}\sum_{i=1}^{N}\left(\Delta t_i - \overline{\Delta t}\right)^2  ………………….. [3]

where N is the number of packets, \Delta t_i is the i-th inter-packet time, and \overline{\Delta t} is the average of the inter-packet times.

Frame jitter:  j_F = \frac{1}{M}\sum_{i=1}^{M}\left(\Delta t_i - \overline{\Delta t}\right)^2  …………………… [4]

where M is the number of frames, \Delta t_i is the i-th inter-frame time, and \overline{\Delta t} is the average of the inter-frame times.
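A direct transcription of Equations 3 and 4 into code, computing jitter as the variance of inter-arrival times; the same function applies to packet timestamps and frame timestamps alike.

```python
def jitter(timestamps_s):
    """Variance of inter-arrival times (Equations 3 and 4)."""
    gaps = [b - a for a, b in zip(timestamps_s, timestamps_s[1:])]
    mean_gap = sum(gaps) / len(gaps)
    return sum((g - mean_gap) ** 2 for g in gaps) / len(gaps)

# Evenly spaced arrivals have zero jitter; irregular ones do not:
print(jitter([0.0, 0.04, 0.08, 0.12]))   # 0.0
print(jitter([0.0, 0.04, 0.15, 0.16]))   # > 0
```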

Video Quality Evaluation:

Digital video quality measurements must be based on the perceived quality of the actual video being received by the users of the digital video system, because the impression of the user is what counts in the end. There are basically two approaches to measuring digital video quality, namely subjective quality measures and objective quality measures. Subjective quality metrics always grasp the crucial factor, the impression of the user watching the video, but they are extremely costly: highly time-consuming, with high manpower requirements and special equipment needed. Such subjective methods are described in detail in [2]. The human quality impression is usually given on a scale from 5 (best) to 1 (worst); this scale is called the Mean Opinion Score (MOS). However, the most widespread objective method is the calculation of the peak signal-to-noise ratio (PSNR) image by image. It is a derivative of the well-known signal-to-noise ratio (SNR), which compares the signal energy to the error energy. The PSNR compares the maximum possible signal energy to the noise energy, which has been shown to result in a higher correlation with the subjective quality perception than the conventional SNR. Equation 5 is the definition of the PSNR between the luminance component Y of a source image S and a destination image D.

PSNR(n)_{dB} = 20\log_{10}\left(\frac{V_{peak}}{\sqrt{\frac{1}{N_{col}N_{row}}\sum_{i=0}^{N_{col}}\sum_{j=0}^{N_{row}}\left(Y_S(n,i,j) - Y_D(n,i,j)\right)^2}}\right)  …………. [5]

with V_{peak} = 2^k - 1, where k is the number of bits per pixel. The part under the root is nothing but the mean square error (MSE). Thus, the formula for the PSNR can be abbreviated as:

PSNR = 20\log_{10}\left(\frac{V_{peak}}{\sqrt{MSE}}\right)
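A sketch of Equation 5 in Python with NumPy, computing the luminance PSNR between two 8-bit frames; the arrays are assumed to hold the Y plane only.

```python
import numpy as np

def psnr_db(y_src, y_dst, bits_per_pixel=8):
    """Luminance PSNR of one frame pair, per Equation 5."""
    v_peak = 2 ** bits_per_pixel - 1
    mse = np.mean((y_src.astype(np.float64) - y_dst.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical frames: PSNR is unbounded
    return 20 * np.log10(v_peak / np.sqrt(mse))

src = np.random.randint(0, 256, (288, 352), dtype=np.uint8)  # CIF-sized Y plane
noisy = np.clip(src.astype(int) + np.random.randint(-5, 6, src.shape), 0, 255)
print(psnr_db(src, noisy))
```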

Since the PSNR is calculated frame by frame, it can be inconvenient when applied to videos consisting of several hundred or thousand frames. Furthermore, people are often interested in the distortion introduced by the network alone, so they want to compare the received (possibly distorted) video with the undistorted video sent. This can be done by comparing the PSNR of the encoded video with that of the received video frame by frame, by comparing their averages and standard deviations, or by the percentage of frames with a MOS worse than that of the sent (undistorted) video.

Tools:

This section introduces the tools of the EvalVid framework, describes their purpose and usage, and shows examples of the results attained. Furthermore, sources of sample video files and codecs are given.

Files and Data Structures:

At first, a video source is needed. Raw (uncoded) video files are usually stored in the YUV format, since this is the preferred input format of many available video encoders. Such files can be obtained from different sources, as can free MPEG-4 codecs. Sample videos can also be obtained from the author. Once encoded video files (bit streams) exist, trace files are produced from them.

These trace files contain all the information the tools of EvalVid need to obtain the results discussed above. The evaluation tools provide routines to read and write these trace files and use a central data structure containing all the information needed to produce the desired results. The exact format of the trace files, the usage of the routines, and the definition of the central data structure are described briefly in the next section and in detail in the documentation.

VS - Video Sender:

For MPEG-4 video files, a parser was developed based on the MPEG-4 video standard; the simple profile and the advanced simple profile are implemented. This makes it possible to read any MPEG-4 video file produced by a conforming encoder. The purpose of VS is to generate a trace file from the encoded video file. Optionally, the video file can be transmitted via UDP (if the investigated system is a network setup). The results produced by VS are two trace files containing information about every frame in the video file and every packet generated for transmission.
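As an illustration of this packet trace generation step, a sketch that segments one frame into MTU-sized packets and emits one sender-trace line per packet; the MTU value and the line format are assumptions, matching the hypothetical layout used in the earlier sketches.

```python
MTU = 1024  # assumed maximum payload per packet, in bytes

def packetize(frame_type, frame_size, send_time, next_packet_id):
    """Split one frame into MTU-sized segments, one trace line per packet."""
    lines = []
    remaining = frame_size
    pid = next_packet_id
    while remaining > 0:
        payload = min(MTU, remaining)
        # Assumed layout: "<id> <type> <timestamp> <size>"
        lines.append(f"{pid} {frame_type} {send_time:.3f} {payload}")
        remaining -= payload
        pid += 1
    return lines, pid

lines, next_id = packetize(frame_type="I", frame_size=2500,
                           send_time=0.0, next_packet_id=1)
print("\n".join(lines))  # three packets: 1024 + 1024 + 452 bytes
```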

ET - Evaluate Traces:

The heart of the evaluation framework is a program called ET (evaluate traces). Here the actual calculation of packet and frame losses and of delay/jitter takes place. For the calculation of these data, only the three trace files are required, since they include all the information necessary to perform the loss and jitter calculation, even per frame/packet type. The calculation of loss is quite easy, given the availability of unique packet IDs. With the help of the video trace file, every packet gets assigned a type. Every packet of a given type not included in the receiver trace is counted as lost. The type-based loss rates are calculated according to Equation 1. Frame losses are calculated by checking, for each frame, whether one of its segments (packets) got lost, and which one. If the first segment of the frame is among the lost segments, the frame is counted as lost. This is because the video decoder cannot decode a frame whose first part is missing. The type-based frame loss is calculated according to Equation 2.

FV - Fix Video:

Digital video quality assessment is performed frame by frame. That means that exactly as many frames are needed at the receiver side as at the sender side. This raises the question of how lost frames should be treated if the decoder does not generate "empty" frames for lost frames [3]. The FV tool is only needed if the codec used cannot provide lost frames; how lost frames are handled by FV is described later in this section. Some explanations of video formats may be required.

Raw video formats: Digital video is a sequence of images. No matter how this sequence is encoded, whether only by exploiting spatial redundancy or by also taking advantage of temporal redundancy (as in MPEG or H.263), in the end every video codec generates a sequence of raw images (pixel by pixel) which can then be displayed. Normally such a raw image is just a two-dimensional array of pixels, with each pixel given by three color values, one each for the red, green, and blue components of its color. It has been shown that the human eye is much more sensitive to luminance than to the chrominance components of a picture. That is why in video coding the luminance component is calculated for every pixel, but the two chrominance components are often averaged over four pixels. This halves the amount of data transmitted per pixel in comparison with plain RGB. There are other variants of this so-called YUV coding. The decoding process of most video decoders results in raw video files in the YUV format.
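As an illustration of the 4:2:0 layout described above (a full-resolution Y plane, quarter-resolution U and V planes), a sketch for reading one frame from a raw YUV file; the planar I420 ordering and the sample file name are assumptions.

```python
import numpy as np

def read_yuv420_frame(f, width, height):
    """Read one planar 4:2:0 frame: a full Y plane, then quarter-size U and V."""
    y = np.frombuffer(f.read(width * height), dtype=np.uint8)
    u = np.frombuffer(f.read(width * height // 4), dtype=np.uint8)
    v = np.frombuffer(f.read(width * height // 4), dtype=np.uint8)
    y = y.reshape(height, width)
    u = u.reshape(height // 2, width // 2)
    v = v.reshape(height // 2, width // 2)
    return y, u, v  # 1.5 bytes per pixel in total, versus 3 for raw RGB

with open("foreman_cif.yuv", "rb") as f:   # hypothetical sample file
    y, u, v = read_yuv420_frame(f, width=352, height=288)
    print(y.shape, u.shape, v.shape)
```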

PSNR - Quality assessment:

The PSNR is the basis of the quality metric used in the framework to assess the resulting video quality. Given the preparations from the preceding components of the framework, the calculation of the PSNR itself is now a simple process described by Equation 5. It must be noted, however, that the PSNR cannot be calculated if two images are binary equivalent: the mean square error is zero in this case, and thus the PSNR according to Equation 5 is undefined. Usually this is solved by calculating the PSNR between the original raw video file before the encoding process and the received video. This assures that there will always be a difference between the two raw images, since all modern video codecs are lossy.

Mean Opinion Score (MOS):

Since the PSNR time series are not very concise, an additional metric is provided: the PSNR of every single frame is mapped to the MOS scale. Then there are only five grades left, and the frames of each grade are counted. This can be easily compared with the fractions of graded frames from the original video. In Figure 2, the rightmost bar displays the quality of the original video as a reference, "few losses" means an average packet loss rate of 5%, and the leftmost bar shows the video quality of a transmission with a packet loss rate of 25%. The impact of the network is immediately visible, and the performance of the network system can be expressed in terms of user-perceived quality. Figure 2 shows how close the quality of a certain video transmission comes to the maximum achievable video quality [2].

Figure 2: Example of MOS-graded video.
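A sketch of such a PSNR-to-MOS mapping; the thresholds below are one commonly cited choice (e.g., PSNR above 37 dB mapping to MOS 5), given here as an assumption rather than the framework's exact table.

```python
def psnr_to_mos(psnr_db):
    """Map a per-frame PSNR to a five-grade MOS (assumed thresholds)."""
    if psnr_db > 37: return 5   # excellent
    if psnr_db > 31: return 4   # good
    if psnr_db > 25: return 3   # fair
    if psnr_db > 20: return 2   # poor
    return 1                    # bad

frames = [38.2, 33.0, 24.7, 19.5]
grades = [psnr_to_mos(p) for p in frames]
print(grades)                       # [5, 4, 2, 1]
print(sum(grades) / len(grades))    # average MOS of the sequence
```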

5-Simulations:

This tool-set has been used to evaluate video quality for various simulations and measurements. It proved usable and quite stable. Exemplary results are shown here and described briefly.

I simulated MPEG-4 video transmission over a wireless link using different scheduling policies and dropping deadlines; the display output format and the different results are given below.

YUVviewer.exe can be used to view the video sequences:

1- An example video transmission over wireless without QoS (no policy).

2- An example video transmission over wireless with the QoS approach (control).

[Original image]

Frame   PSNR dB (without QoS)   PSNR dB (with QoS)
400     26.05                   31.18
420     24.05                   30.18
440     24.36                   30.18

6-Conclusion and Further Research:

The EvalVid framework can be used to evaluate the performance of network setups, or simulations thereof, in terms of user-perceived application quality. Furthermore, the calculation of delay, jitter, and loss is implemented. The tool-set currently supports MPEG-4 video streaming applications, but it can be easily extended to address other video codecs or even other applications, such as audio streaming. Certain quirks of common video decoders (omitting lost frames), which would make it impossible to calculate the resulting quality, are resolved. A PSNR-based quality metric is introduced which is more convenient than the traditionally used average PSNR, especially for longer video sequences [2].

The tools of the EvalVid framework are continuously being extended to support other video codecs, such as H.263, H.26L, and H.264, and to address additional codec functionalities. Furthermore, the support of dynamic play-out buffer strategies is a subject of future developments. It is also planned to add support for other applications, e.g., voice over IP (VoIP) and synchronized audio-video streaming. And last but not least, metrics other than PSNR-based ones will be integrated into the EvalVid framework.

In this paper, we proposed a simulation mechanism used in the video transmission simulation framework for the evaluation of video delivery algorithms. Through this mechanism of the simulation framework, the video codec works in parallel with the application QoS control and the simulated network, NS-2. Moreover, this mechanism can be extended to other simulation tools that use a similar event-trigger scheduler [1].

In future work, we are going to handle multiple video codecs in this framework, to adapt to more complicated video transmission scenarios.

7-References:

[1] Zhiwei Yan, Guizhong Liu, Rui Su, Qing Zhang, Xiaoming Chen, and Lishui Chen, "A Simulation Mechanism for Video Delivery Researches", National Chiao Tung University, April 2010.

[2] Jirka Klaue, Berthold Rathke, and Adam Wolisz, "EvalVid - A Framework for Video Transmission and Quality Evaluation", Technical University of Berlin, 2003.

[3] Viraj Sudhir Ambetkar, "Providing QoS for Real-time Video Services over Multi-hop Wireless Networks", Wright State University, 2005, via www.pdfgeni.com.

[4] Dushyanth Balasubramanian, "QoS in Cellular Networks", Washington University in Saint Louis, 2006.

