
The desired latency for voice packets across a single
FDDI is also set to approximately 15 ms (99% of
packets) and on the order of 10 ms on average.
7.4.3: Data: Interactive
This traffic consists of short packets (500 bytes)
arriving at random, with low response-time requirements of
50 ms (99% of packets) and 25 ms on average.
7.4.4: Data: File transfer
This is modeled as file data to/from Ethernet hosts, with
the FDDI being used as a backbone to Ethernet clients.
The file length is assumed to be uniformly distributed
between 1500 and 25000 bytes. Each file on average
consists of 8 packets of 1500 bytes plus a 256-byte header
each, with packets arriving at 10 Mbps once the file transfer
is initiated. The file inter-arrival time is exponentially
distributed. The offered load is 3.6 Mbps.
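As a sketch, the file-transfer source described above could be generated as follows. This is one possible interpretation of the text's parameters; the function and variable names are illustrative, not taken from the simulator.

```python
import random

LINK_RATE_BPS = 10e6      # Ethernet hosts feed the FDDI at 10 Mbps
PAYLOAD = 1500            # bytes of data per packet
HEADER = 256              # bytes of header per packet
OFFERED_LOAD_BPS = 3.6e6  # target average offered load

def file_transfer_arrivals(rng, n_files):
    """Yield (start_time, packet_times) for n_files transfers.

    File lengths are uniform on [1500, 25000] bytes; files arrive as a
    Poisson process whose rate is chosen so that the average offered
    load (including headers) is 3.6 Mbps.
    """
    mean_file_bytes = (1500 + 25000) / 2          # 13250 bytes
    mean_packets = mean_file_bytes / PAYLOAD      # ~8.8 packets per file
    mean_file_bits = mean_packets * (PAYLOAD + HEADER) * 8
    file_rate = OFFERED_LOAD_BPS / mean_file_bits  # files per second
    pkt_time = (PAYLOAD + HEADER) * 8 / LINK_RATE_BPS  # back-to-back at 10 Mbps
    t = 0.0
    for _ in range(n_files):
        t += rng.expovariate(file_rate)            # exponential inter-arrival
        length = rng.uniform(1500, 25000)
        n_pkts = max(1, round(length / PAYLOAD))
        yield t, [t + i * pkt_time for i in range(n_pkts)]
```

Averaged over many files, the generated stream carries about 3.6 Mbps including headers, matching the stated offered load.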
7.4.5: Imaging: Low-end (workstation)
This traffic source was used in some of the simulation
runs.
This is modeled as images coming off Ethernet hosts
into the FDDI host at 10 Mbps. The image size
distribution is uniform over 1.25 - 5 Mbytes. This image
stream is packetized into maximum length FDDI packets
(4096 + 256 bytes). The image inter-arrival time is varied
and a default value of 20% on-time and 80% off-time is
assumed. This assumes that a host is busy with imaging
only 20% of the time. Thus the peak offered load is 10
Mbps, but the average offered load is 2 Mbps.
The maximum acceptable delay in transmitting an
image and receiving it at the receiver is 1 s (99% of
packets) and 0.5 s on average. A buffer size of 10
packets is used; any overflow leads to packet dropping.
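The on/off arithmetic for this source can be checked directly. The parameters below are taken from the text; the variable names are illustrative.

```python
# Back-of-the-envelope check of the low-end imaging source.
PAYLOAD = 4096        # bytes of data per maximum-length FDDI packet
HEADER = 256          # bytes of header per packet
PEAK_BPS = 10e6       # Ethernet feed rate into the FDDI host
ON_FRACTION = 0.20    # 20% on-time, 80% off-time

avg_image_bytes = (1.25e6 + 5e6) / 2        # uniform over 1.25 - 5 Mbytes
pkts_per_image = avg_image_bytes / PAYLOAD  # max-length packets per image
tx_time_per_image = pkts_per_image * (PAYLOAD + HEADER) * 8 / PEAK_BPS
avg_load_bps = PEAK_BPS * ON_FRACTION       # average offered load
```

With a 20% duty cycle the average offered load comes out to 2 Mbps against the 10 Mbps peak, consistent with the figures above; an average image occupies the 10 Mbps feed for roughly 2.7 s.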
7.4.6: Imaging: High-end (host)
This traffic source is used to simulate the impact of a
very bursty load on the network. The image size distribution is
uniform over 1.25 to 5.625 Mbytes. A single image
stream consists of regularly arriving maximum-sized
packets (4096 + 256 bytes), with a packet inter-arrival
time of 0.32768 ms. The peak offered load is
106.25 Mbps and the average offered load is 10 Mbps.
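The 0.32768 ms figure is consistent with a per-packet spacing rather than a per-image one: one maximum-sized packet every 0.32768 ms yields exactly the stated peak load. A quick check (variable names are illustrative):

```python
# Consistency check of the high-end imaging figures.
bits_per_packet = (4096 + 256) * 8           # 34816 bits per FDDI packet
inter_arrival_s = 0.32768e-3                 # spacing between packets
peak_bps = bits_per_packet / inter_arrival_s # stated peak offered load
duty_cycle = 10e6 / peak_bps                 # fraction of time imaging
```

The peak works out to exactly 106.25 Mbps, and matching the 10 Mbps average offered load implies the source is active roughly 9.4% of the time.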
8: Results
The following sections summarize the results of the
simulations.
8.1: Case 1 - Asynchronous-only network
In an asynchronous-only network, with no synchronous
bandwidth allocated or used, voice and video are
treated as data; no separate queue is allocated on transmit
or receive. Even in such a network it is possible to
maintain a bound on the delay suffered by the packets.
The following observations refer to figures 4 to 19.
8.1.1: Effect on 99% latencies
Due to the unpredictable nature of the traffic
(asynchronous and bursty), the delay cannot be tightly
bounded. As can be seen from figures 8 and 10, the
99% latencies suffered by video packets are as high as 48
ms when the network is not overloaded but is running close
to capacity (90% load). When the network is overloaded,
the latencies can be as high as 252 ms.
In a more typical environment, where the traffic does
not include such high-burst sources (imaging at 100
Mbps), it is possible to obtain low latencies. We were
able to verify this in our simulation (see figure 6): in
an asynchronous-only network with 86% network
loading and a TTRT of 24 ms, voice and video packet
latencies were restricted to under 15 ms.
8.1.2: Effect of TTRT on latencies
In an overloaded network, the higher the TTRT value,
the lower the latencies. In the 8 to 24 ms range, it was
observed that the 24 ms TTRT value consistently offered
lower delays for all traffic types when the ring was large
(figures 8 and 10).
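One way to see why a larger TTRT helps under overload is through an approximation commonly used in FDDI performance analysis: the maximum asynchronous utilization is roughly n(TTRT - D) / (n*TTRT + D), where D is the ring latency and n the number of active stations. The values below are illustrative, not taken from the simulations.

```python
def fddi_efficiency(ttrt_ms, ring_latency_ms, n_stations):
    """Approximate maximum asynchronous utilization of an FDDI ring.

    Uses the common approximation n*(T - D) / (n*T + D), where T is the
    TTRT and D the ring latency. Illustrative only.
    """
    t, d = ttrt_ms, ring_latency_ms
    return n_stations * (t - d) / (n_stations * t + d)

# Utilization grows with TTRT (here: 1 ms ring latency, 16 stations).
for ttrt in (8, 16, 24):
    print(ttrt, round(fddi_efficiency(ttrt, 1.0, 16), 3))
```

Since a larger TTRT wastes a smaller fraction of each token rotation on ring latency, the ring can drain an overloaded queue faster, which is consistent with the lower delays observed at 24 ms; the same expression also shows why larger ring latency D hurts, as noted in the next subsection.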
8.1.3: Effect of ring size
Increasing ring latency had an adverse effect on
packet latencies. This was reflected in the increase in the
mean and maximum latencies for voice and video (figures
6 and 10). The effect was less noticeable in the 99%
latencies.
8.1.4: Effect of buffer sizes
Buffer sizes were allocated to the individual queues,
asynchronous and synchronous, at different stations.
The asynchronous buffer size at the imaging
station was varied from 10 to 1000 packets; at the file
server stations it was 50 packets, and at the interactive
terminals it was 10 packets. Every synchronous station
had a 10-packet buffer.
These buffers never overflowed except in the overload
scenario, and even then the blocking (buffer overflow)
occurred only at the imaging station. This result is
intuitive: the imaging stations were offering an
instantaneous overload. The burst would fill up the
buffers and, since the network was not faster than the
application, the buffers could not empty out fast enough,
leading to overflow. Low burst-rate applications such