
These modes are selected by QOSC40[7:6] and QOSC48[7:6] for the first and second gigabit ports, respectively.
The default configuration for a 10/100 Mbps port is three delay-bounded queues and one best-effort queue. The
delay bounds per class are 0.8 ms for P3, 2 ms for P2, and 12.8 ms for P1. A 1 Gbps port defaults to six
delay-bounded queues and two best-effort queues, with delay bounds of 0.16 ms for P7 and P6, 0.32 ms for P5,
0.64 ms for P4, 1.28 ms for P3, and 2.56 ms for P2. Best-effort traffic is served only when there is no
delay-bounded traffic waiting. For a 1 Gbps port, where there are two best-effort queues, P1 has strict priority
over P0.
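As an illustration, a driver might select one of the four scheduling modes by writing the two-bit field in the
corresponding QoS control register. In the sketch below, only the field position [7:6] in QOSC40 and QOSC48 is
taken from the text above; the register addresses, the two-bit mode encodings, and the accessor functions
mvtx_read_reg()/mvtx_write_reg() are placeholders assumed for the example, not part of this data sheet.

    #include <stdint.h>

    /* The four scheduling modes of Tables 7 and 8. The two-bit encodings
     * used here are illustrative assumptions, not taken from this data sheet. */
    enum qos_mode {
        QOS_OP1_DELAY_BOUND = 0,   /* default: delay bound + best effort */
        QOS_OP2_SP_DELAY    = 1,   /* strict priority + delay bound + BE */
        QOS_OP3_SP_WFQ      = 2,   /* strict priority + WFQ              */
        QOS_OP4_WFQ         = 3    /* WFQ on every queue                 */
    };

    /* Hypothetical register accessors supplied by the board support code. */
    extern uint8_t mvtx_read_reg(uint16_t addr);
    extern void    mvtx_write_reg(uint16_t addr, uint8_t val);

    /* Placeholder addresses for QOSC40 and QOSC48. */
    #define QOSC40_ADDR 0x0040
    #define QOSC48_ADDR 0x0048

    /* Program bits [7:6] of the QoS control register for one gigabit port,
     * leaving the remaining bits of the register untouched. */
    static void set_gigabit_qos_mode(uint16_t qosc_addr, enum qos_mode mode)
    {
        uint8_t val = mvtx_read_reg(qosc_addr);

        val &= (uint8_t)~(0x3u << 6);             /* clear the mode field [7:6] */
        val |= (uint8_t)(((unsigned)mode & 0x3u) << 6);
        mvtx_write_reg(qosc_addr, val);
    }

For example, set_gigabit_qos_mode(QOSC40_ADDR, QOS_OP2_SP_DELAY) would place the first gigabit port in the
second configuration.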
The second configuration for a 10/100 Mbps port has one strict-priority queue, two delay-bounded queues, and one
best-effort queue. The delay bounds per class are 3.2 ms for P2 and 12.8 ms for P1. If this configuration is
chosen, it is important that P3 (SP) traffic be either policed or implicitly bounded (e.g., when the incoming P3
traffic is very light and predictably patterned): strict-priority traffic that is not admission-controlled at a
stage prior to the MVTX2603 can degrade the performance of all other classes. For a 1 Gbps port, P7 and P6 are
both SP classes, with P7 having strict priority over P6. In this case, the delay bounds per class are 0.32 ms for
P5, 0.64 ms for P4, 1.28 ms for P3, and 2.56 ms for P2.
The third configuration for a 10/100 Mbps port contains one strict-priority queue and three queues that share a
bandwidth partition via weighted fair queuing (WFQ). As in the second configuration, strict-priority traffic needs
to be carefully controlled. In the fourth configuration, all queues are served using a WFQ service discipline.
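To make the bandwidth-partition idea concrete, the following sketch shows one simple way a weighted share can be
enforced among WFQ queues, using a deficit-style credit per queue. The queue count, weights, and credit scheme are
illustrative assumptions approximating weighted fair queuing; they are not the scheduler implemented inside the
device.

    #include <stdint.h>

    #define NUM_WFQ 4              /* e.g. P3..P0 in the fourth 10/100 Mbps mode */

    struct wfq_queue {
        unsigned weight;           /* share of the link, in arbitrary units        */
        long     deficit;          /* bytes this queue may still send              */
        unsigned hol_bytes;        /* length of the head-of-line frame, 0 if empty */
    };

    /* One pass of a deficit-style weighted scheduler: each backlogged queue's
     * credit grows in proportion to its weight, and a queue may transmit when
     * its head-of-line frame fits within its accumulated credit. */
    static int wfq_pick(struct wfq_queue q[NUM_WFQ], unsigned quantum)
    {
        for (int i = 0; i < NUM_WFQ; i++) {
            if (q[i].hol_bytes == 0)
                continue;                          /* empty queues earn no credit */
            q[i].deficit += (long)q[i].weight * quantum;
            if ((long)q[i].hol_bytes <= q[i].deficit) {
                q[i].deficit -= q[i].hol_bytes;    /* charge the transmission      */
                return i;                          /* serve this queue's HOL frame */
            }
        }
        return -1;                                 /* nothing eligible this pass   */
    }

Over time, each backlogged queue's transmitted bytes grow in proportion to its weight, which is the bandwidth
partition described above.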
7.3 Delay Bound
In the absence of a sophisticated QoS server and signaling protocol, the MVTX2603 may not know the mix of
incoming traffic ahead of time. To cope with this uncertainty, our delay assurance algorithm dynamically adjusts
its scheduling and dropping criteria, guided by the queue occupancies and the due dates of their head-of-line
(HOL) frames. As a result, latency bounds are assured for all admitted frames with high confidence, even in the
presence of system-wide congestion. The algorithm identifies misbehaving classes and intelligently discards their
frames with no detriment to well-behaved classes. It also differentiates between high-drop and low-drop traffic
using a weighted random early drop (WRED) approach: a percentage of high-drop frames is randomly discarded before
the chip's buffers are completely full, while low-drop frames are largely spared. Discarding high-drop frames
early in this way preserves buffer space for low-drop frames that may arrive later. Finally, the delay bound
algorithm also achieves bandwidth partitioning among classes.
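The sketch below illustrates, in simplified form, the two decisions described above: delay-bounded queues are
served according to the urgency of their head-of-line due dates, best-effort queues are served only when no
delay-bounded frame is waiting, and high-drop frames are randomly discarded before the buffers fill while
low-drop frames are largely spared. The data structures, thresholds, and drop probabilities are illustrative
assumptions, not the chip's internal logic.

    #include <stdbool.h>
    #include <stdlib.h>

    #define NUM_QUEUES 8           /* P7..P0 on a gigabit port */

    struct queue {
        bool     delay_bounded;    /* true for the delay-bounded classes */
        int      occupancy;        /* frames currently queued            */
        unsigned hol_due_date;     /* due date of the head-of-line frame */
    };

    /* Pick the next queue to serve: among non-empty delay-bounded queues,
     * choose the one whose head-of-line frame is due soonest; fall back to
     * the highest-priority non-empty best-effort queue. Returns -1 if all
     * queues are empty. */
    static int pick_next_queue(const struct queue q[NUM_QUEUES])
    {
        int best = -1;

        for (int i = 0; i < NUM_QUEUES; i++) {
            if (q[i].delay_bounded && q[i].occupancy > 0 &&
                (best < 0 || q[i].hol_due_date < q[best].hol_due_date))
                best = i;
        }
        if (best >= 0)
            return best;

        for (int i = NUM_QUEUES - 1; i >= 0; i--)   /* P7 down to P0 */
            if (!q[i].delay_bounded && q[i].occupancy > 0)
                return i;
        return -1;
    }

    /* Weighted random early drop: once buffer usage crosses an early
     * threshold, discard a growing fraction of high-drop frames while
     * largely sparing low-drop frames. Thresholds and probabilities are
     * illustrative only. */
    static bool wred_should_drop(int buffers_used, int buffers_total, bool high_drop)
    {
        double fill = (double)buffers_used / (double)buffers_total;

        if (fill < 0.5)
            return false;                        /* no early dropping yet   */
        double p = (fill - 0.5) / 0.5;           /* ramps from 0 to 1       */
        if (!high_drop)
            p *= 0.1;                            /* low-drop largely spared */
        return (double)rand() / RAND_MAX < p;
    }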
                 P3           P2           P1           P0
Op1 (default)    Delay Bound  Delay Bound  Delay Bound  BE
Op2              SP           Delay Bound  Delay Bound  BE
Op3              SP           WFQ          WFQ          WFQ
Op4              WFQ          WFQ          WFQ          WFQ

Table 7 - Four QoS Configurations for a 10/100 Mbps Port
                 P7           P6           P5           P4           P3           P2           P1   P0
Op1 (default)    Delay Bound  Delay Bound  Delay Bound  Delay Bound  Delay Bound  Delay Bound  BE   BE
Op2              SP           SP           Delay Bound  Delay Bound  Delay Bound  Delay Bound  BE   BE
Op3              SP           SP           WFQ          WFQ          WFQ          WFQ          WFQ  WFQ
Op4              WFQ          WFQ          WFQ          WFQ          WFQ          WFQ          WFQ  WFQ

Table 8 - Four QoS Configurations for a Gigabit Port