Supporting Multimedia Communication over a Gigabit Ethernet Network
Varun Pius Rodrigues
Need for Gigabit Networks
Streaming Multimedia
High performance distributed computing
Virtual Reality
Distance learning
Development and Design Challenges
Hardware Components:
Gigabit Network Interface Card
A buffered hub
Gigabit routing switch
File server within the LAN: building pilot workgroups
Integration of gigabit switches
Design Challenges:
Simplicity
End-to-end solutions
Extended to include emerging multimedia applications
Design Features of Gigabit NIC
802.3 Frame compatibility
Designing the MAC ASIC that operates the GNIC
Reducing host CPU utilization through Descriptor-based DMA
IEEE 802.3z Standard
Standard for MAC and PHY layers
PHY Layer:
Fiber: 1000 BASE-SX (multi-mode) and 1000 BASE-LX (single-mode)
Copper-based: 1000 BASE-CX (twinax cable)
MAC Layer:
Identical to the one defined for 10 Mbps and 100 Mbps Ethernet
Fields: DA, SA, LEN, DATA, FCS
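The frame fields listed above can be sketched in code. This is a simplified illustration of the 802.3 layout (preamble/SFD and minimum-frame padding omitted); Ethernet's FCS is a CRC-32, which `zlib.crc32` also computes.

```python
import struct
import zlib

def build_frame(dst: bytes, src: bytes, payload: bytes) -> bytes:
    """Assemble a simplified 802.3 frame: DA, SA, LEN, DATA, FCS.
    Preamble/SFD and minimum-frame padding are omitted for brevity."""
    assert len(dst) == 6 and len(src) == 6
    header = dst + src + struct.pack("!H", len(payload))  # LEN, big-endian
    body = header + payload
    fcs = struct.pack("<I", zlib.crc32(body))  # CRC-32 over the frame
    return body + fcs

def check_frame(frame: bytes) -> bool:
    """Verify the FCS on a received frame."""
    body, fcs = frame[:-4], frame[-4:]
    return struct.pack("<I", zlib.crc32(body)) == fcs
```

Because the MAC layer is identical across 10/100/1000 Mbps Ethernet, this same layout applies at all three rates.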
Designing GNIC
Architecture Consideration: 64-bit 66 MHz GNIC-II
Theoretical bandwidth: 4 Gbps (64-bit/66 MHz) vs. 1 Gbps (32-bit/33 MHz)
Practical bandwidth: 3 Gbps vs. 800 Mbps
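The 4:1 theoretical ratio follows directly from bus width times clock rate; the slide's "4 Gbps vs. 1 Gbps" rounds the figures below.

```python
def pci_bandwidth_gbps(width_bits: int, clock_mhz: int) -> float:
    """Peak PCI transfer rate = bus width * clock frequency."""
    return width_bits * clock_mhz / 1000.0  # Mbit/s -> Gbit/s

# GNIC-II's 64-bit / 66 MHz PCI vs. conventional 32-bit / 33 MHz PCI
wide = pci_bandwidth_gbps(64, 66)    # ~4.2 Gbps theoretical
narrow = pci_bandwidth_gbps(32, 33)  # ~1.1 Gbps theoretical
```

The practical figures (3 Gbps vs. 800 Mbps) are lower because of arbitration, addressing, and turnaround overhead on the bus.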
Consists of:
Application-specific Integrated Circuit (ASIC)
Packet buffer memory
Serializer/Deserializer chip
Physical layer components
ASIC
Consists of: PCI interface
Pair of DMA controllers: one for Rx, one for Tx
Pair of FIFOs connected to an external FIFO interface
GNIC
Reducing CPU utilization:
Accesses host memory directly through Descriptor-based DMA
Transfer Chaining: transferring an arbitrary number of packets from host memory to the GNIC
Adapting the host's interrupt rate to the network load
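The three mechanisms above can be illustrated with a toy model: the host posts buffer descriptors instead of copying data, the NIC chains through all posted descriptors in one pass, and interrupts are coalesced across several packets. The names and structure here are illustrative, not the actual GNIC-II register layout.

```python
from collections import deque

class DescriptorRing:
    """Toy model of descriptor-based DMA with transfer chaining and
    interrupt coalescing (illustrative, not the real GNIC-II design)."""
    def __init__(self, coalesce: int = 4):
        self.ring = deque()        # descriptors posted by the host
        self.coalesce = coalesce   # raise one interrupt per N packets
        self.interrupts = 0
        self.transferred = []

    def post(self, addr: int, length: int) -> None:
        # The host CPU only writes a descriptor; no per-byte copying.
        self.ring.append((addr, length))

    def dma_run(self) -> None:
        # The NIC chains through every posted descriptor in one pass.
        done = 0
        while self.ring:
            self.transferred.append(self.ring.popleft())
            done += 1
            if done % self.coalesce == 0:
                self.interrupts += 1
        if done % self.coalesce:
            self.interrupts += 1   # final interrupt for the remainder
```

Raising one interrupt per batch rather than per packet is what keeps host CPU utilization low at gigabit packet rates.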
Design Features of Buffered Gigabit Hub
Full-duplex: eliminates CSMA/CD collisions
Congestion Control: to avoid frame dropping
Round-Robin scheduling: to prevent “packet clumping”
Performance issues with CSMA/CD
Performance highly dependent on ratio of propagation delay to average packet transmission time
Two ways to solve it:
Increase the minimum packet length
Decrease the network length
Here, the minimum packet length is effectively increased through virtual collisions (carrier extension)
Full Duplex Repeater
Combines switched and shared design concepts for switch-like performance at the cost of a shared hub
Provides maximum throughput, collisionless forwarding, and congestion control
Logical flow of frames: Input -> Forwarding path -> Output
Input: frames pass through the PHY and MAC before being queued in the buffer; congestion control tells the end system to slow down in case the buffer is about to reach capacity
Forwarding path: Implements round robin scheduling to determine which port will send data
Output: includes a buffer, MAC and PHY with congestion notification
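The round-robin forwarding path above can be sketched as follows: per-port input queues are drained one frame at a time in rotation, so frames from different ports interleave and no single bursty sender can "clump" the output (an illustrative sketch, not the FDR's actual implementation).

```python
from collections import deque

def round_robin_forward(port_queues):
    """Drain per-port input queues one frame per visit, in rotation,
    interleaving ports on the shared forwarding path."""
    queues = [deque(q) for q in port_queues]
    out = []
    while any(queues):
        for q in queues:
            if q:  # skip ports with nothing queued
                out.append(q.popleft())
    return out
```

With inputs [a1, a2, a3], [b1], [c1, c2], the output interleaves the ports as a1, b1, c1, a2, c2, a3 rather than emitting all of port a's frames back to back.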
Design Features of Gigabit Routing Switch
Architecture issues
Parallel access shared memory architecture
Priority Queue Design
Architecture of Gigabit Routing Switch PE-4884
Consists of 12 channel cards and 2 EMM cards: one required for the chassis to function, the second providing management, policy, and routing-table redundancy
Channel cards:
Support connectivity to Ethernet, Fast Ethernet, FDDI, etc.
Send and receive data through the physical interfaces and system packet memory
Perform routing and switching address lookups
Enforce layer-4 policies, collect management statistics, etc.
Every channel card connected to central memory through 2 full-duplex gigabit channels
Memory Architecture:
Uses parallel memory architecture for high speed performance
Limitations of cross-bar architecture:
Port-based memory
Head-of-line blocking
Difficult to provide QoS support
Shared memory bus architecture overcomes these limitations
Packet flow:
Address Resolution Logic ASIC on channel card evaluates the destination address
The ARL signals the memory control cards which port the packet must be sent to
Frame Management
Supporting Distributed Multimedia Applications
Challenges in On-Demand Video:
Large data size
Real-time constraint
Supporting concurrent accesses
Consideration for connection setup:
Implementation of RSVP scheme
New emerging standards such as IEEE 802.1p and 802.1Q
Integrated solutions with policy-based QoS
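The 802.1p priority mentioned above travels in the 4-byte 802.1Q tag inserted into the Ethernet frame; a sketch of building that tag (TPID 0x8100 followed by PCP, DEI, and VID bit fields):

```python
import struct

def dot1q_tag(priority: int, vlan_id: int, dei: int = 0) -> bytes:
    """Build the 4-byte 802.1Q tag carrying an 802.1p priority:
    TPID 0x8100, then PCP (3 bits) | DEI (1 bit) | VID (12 bits)."""
    assert 0 <= priority <= 7 and 0 <= vlan_id <= 0xFFF
    tci = (priority << 13) | (dei << 12) | vlan_id
    return struct.pack("!HH", 0x8100, tci)
```

The 3-bit PCP field gives eight priority classes, which is what lets switches schedule video traffic ahead of best-effort data without a full RSVP deployment.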
Experimental Results
Two major goals:
To test how a gigabit LAN performed in the basic metrics of throughput
To examine how concurrent video delivery is supported over gigabit LAN
Experiments designed to determine bottlenecks in current system and identify bounds on performance
End-to-end performance is evaluated
Experimental Setup
Hardware setup:
Pentium II 233 MHz and Pentium Pro 300 MHz
Running either Linux or NT
Benchmarking utilities: netperf (for max throughput) and netbench (for avg throughput)
Netperf:
Consists of 2 processes: netserver and netperf
Netperf connects to a remote system running an instance of netserver and uses a control connection to send parameters
TCP connection using BSD sockets
Separate connection for measurement
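The bulk-transfer side of such a test can be sketched in a few lines: stream a known number of bytes over a TCP socket (here on loopback), count what arrives, and divide by elapsed time. This is a minimal netperf-style sketch, not netperf itself, and omits the separate control connection.

```python
import socket
import threading
import time

def measure_tcp_throughput(total_bytes=8_000_000, chunk=65536):
    """Send ~total_bytes over a loopback TCP connection and return
    (bytes received, throughput in Mbps). Illustrative sketch only."""
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))
    srv.listen(1)
    port = srv.getsockname()[1]
    received = 0

    def receiver():
        nonlocal received
        conn, _ = srv.accept()
        while True:
            data = conn.recv(chunk)
            if not data:
                break
            received += len(data)
        conn.close()

    t = threading.Thread(target=receiver)
    t.start()
    cli = socket.create_connection(("127.0.0.1", port))
    start = time.perf_counter()
    buf = b"\x00" * chunk
    sent = 0
    while sent < total_bytes:
        cli.sendall(buf)  # blocks until the kernel accepts the chunk
        sent += chunk
    cli.close()           # EOF lets the receiver finish counting
    t.join()
    elapsed = time.perf_counter() - start
    srv.close()
    return received, received * 8 / elapsed / 1e6
```

As in netperf, measurement traffic flows on its own data connection so that setup overhead does not pollute the throughput figure.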
Netbench:
Measures how well a file server handles I/O requests
Each client tallies how much data moves to and from the server
Results
Maximum throughput results:
Message packet size was varied from 2048 bytes to 4 MB
After the 16 KB packet size is reached, there is a slight drop in performance, followed by an increase to a peak throughput of 190 Mbps
The peak was on the Linux system; the NT system's peak throughput was around 90 Mbps
To investigate, the NT experiment was repeated on a different machine, attaining a peak throughput of 180 Mbps
Sun's Solaris attained a peak performance of 488 Mbps
Raw device testing attained 700-800 Mbps at the hardware level
Average throughput results:
Maximum server throughput of 157 Mbps with 3 clients
High Quality Streaming Videos
Criteria for experiment to measure concurrent access for on-demand video:
Buffering Scheme:
Used a 2-buffer scheme at the server: one buffer to retrieve video frames from the server system, the other to transmit them to the client
The client also uses a 2-buffer scheme: one buffer for the network, the other to display the frames
Performance metrics for the measurement:
Need to determine maximum number of concurrent accesses that can be supported by the network
Need to calculate jitter (the number of missed-deadline retrievals)
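The jitter metric defined above reduces to a simple computation: the fraction of frame retrievals that complete after their playback deadline (for 30 fps video the deadline is one frame period, about 33.3 ms; the function name and inputs here are illustrative).

```python
def jitter_percent(retrieval_times_ms, deadline_ms):
    """Jitter as defined in the experiment: the percentage of frame
    retrievals that miss their playback deadline."""
    misses = sum(1 for t in retrieval_times_ms if t > deadline_ms)
    return 100.0 * misses / len(retrieval_times_ms)
```

QoS is counted as acceptable in the experiments while this figure stays below 1%.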
Results: On-Demand Video Streaming
MPEG-2 bit rates between 4 and 32 Mbps were emulated
The number of active processes was increased until the system could no longer provide acceptable QoS (jitter less than 1%)
Due to bus contention, only 128 Mbps was achieved for 32 streams of 4 Mbps video
Buffer size is another performance bottleneck in addition to network throughput
Questions?
You may reach me at my UF mail with any issues you need to communicate
UF mail: [email protected]