
The Transmission DMA (TxDMA) is responsible for multiplexing the data and the address. On a port's turn, the TxDMA moves 8 bytes (or up to the EOF) from memory into the port's associated TxFIFO. After reading the EOF, the port control requests an FCB release for that frame. The TxDMA arbitrates among multiple buffer release requests.
The frame is transmitted from the TxFIFO to the line.
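The per-turn burst move can be pictured in software as follows. This is a minimal C sketch under assumed names (tx_port_t, txdma_turn, the TXDMA_BURST_BYTES constant, and the TxFIFO size); the actual DMA state machine and FIFO depths are internal to the device and not documented here, and FIFO overflow checks are omitted for brevity.

#include <stdbool.h>
#include <stddef.h>
#include <string.h>

#define TXDMA_BURST_BYTES 8

typedef struct {
    const unsigned char *frame;   /* frame data in buffer memory */
    size_t frame_len;             /* total frame length (EOF position) */
    size_t read_off;              /* bytes already moved to the TxFIFO */
    unsigned char fifo[256];      /* per-port TxFIFO (size assumed) */
    size_t fifo_fill;
} tx_port_t;

/* One TxDMA turn for a single port: move up to 8 bytes, or up to the EOF.
 * Returns true when the EOF has been read, i.e. when the port control
 * should request an FCB release for this frame. */
static bool txdma_turn(tx_port_t *p)
{
    size_t remaining = p->frame_len - p->read_off;
    size_t chunk = remaining < TXDMA_BURST_BYTES ? remaining : TXDMA_BURST_BYTES;

    memcpy(p->fifo + p->fifo_fill, p->frame + p->read_off, chunk);
    p->fifo_fill += chunk;
    p->read_off += chunk;

    return p->read_off == p->frame_len;   /* EOF reached */
}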
3.2 Multicast Data Frame Forwarding
After receiving the switch response, the TxQ manager must make the drop decision. A global decision to drop can be made, based on global FDB utilization and reservations. If so, the FCB is released and the frame is dropped. In addition, a selective decision to drop can be made, based on the TxQ occupancy at some subset of the multicast packet's destinations. If so, the frame is dropped at those destinations but not the others, and the FCB is not released.
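The two-level decision can be sketched as follows in C. The FDB bookkeeping fields, the 26-bit destination mask (24 10/100 ports plus 2 Gigabit ports), and the occupancy thresholds are illustrative assumptions, not the device's documented internals.

#include <stdbool.h>

typedef struct {
    unsigned fdb_used;       /* global frame data buffer utilization */
    unsigned fdb_reserved;   /* global reservations */
    unsigned fdb_capacity;
} fdb_state_t;

/* Global decision: drop the frame everywhere and release its FCB. */
static bool drop_globally(const fdb_state_t *s)
{
    return s->fdb_used + s->fdb_reserved >= s->fdb_capacity;
}

/* Selective decision: drop only at destinations whose TxQ is full.
 * Returns the subset of dest_mask that still receives the frame;
 * the FCB is kept as long as this subset is non-empty. */
static unsigned keep_mask(unsigned dest_mask,
                          const unsigned txq_occupancy[],
                          const unsigned txq_limit[])
{
    unsigned keep = 0;
    for (unsigned port = 0; port < 26; port++)   /* 24 FE + 2 GE ports */
        if (((dest_mask >> port) & 1u) &&
            txq_occupancy[port] < txq_limit[port])
            keep |= 1u << port;
    return keep;
}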
If the frame is not dropped at a particular destination port, then the TxQ manager formats an entry in the multicast queue for that port and class. Multicast queues are physical queues (unlike the linked lists for unicast frames). There are 2 multicast queues for each of the 24 10/100 ports. The queue with higher priority has room for 32 entries and the queue with lower priority has room for 64 entries. There are 4 multicast queues for each of the two Gigabit ports. The sizes of the queues are: 32 entries (highest priority queue), 32 entries, 32 entries and 64 entries (lowest priority queue). There is one multicast queue for every two priority classes. For the 10/100 ports to map the 8 transmit priorities into 2 multicast queues, the 2 LSBs are discarded. For the Gigabit ports to map the 8 transmit priorities into 4 multicast queues, the LSB is discarded.
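The mapping follows directly from dropping low-order bits of the 3-bit transmit priority (0..7); a minimal C sketch (function names are illustrative):

/* 10/100 ports: 8 transmit priorities -> 2 multicast queues (drop 2 LSBs). */
static unsigned mcast_queue_fe(unsigned prio3)
{
    return (prio3 >> 2) & 0x1u;   /* priorities 0..3 -> queue 0, 4..7 -> queue 1 */
}

/* Gigabit ports: 8 transmit priorities -> 4 multicast queues (drop 1 LSB). */
static unsigned mcast_queue_ge(unsigned prio3)
{
    return (prio3 >> 1) & 0x3u;   /* priorities 0,1 -> queue 0, ..., 6,7 -> queue 3 */
}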
During scheduling, the TxQ manager treats the unicast queue and the multicast queue of the same class as one logical queue. The older of the two head-of-line frames is forwarded first.
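The merge of the two queues can be sketched as a comparison of their head-of-line entries; the age-tag representation below is an assumption about the bookkeeping, not the documented mechanism.

#include <stdbool.h>
#include <stdint.h>

typedef struct {
    bool     valid;     /* queue non-empty */
    uint32_t arrival;   /* arrival tag of the head-of-line frame */
} hol_t;

/* Returns true if the unicast head-of-line frame should be forwarded next.
 * Callers are assumed to skip scheduling when both queues are empty. */
static bool serve_unicast_first(hol_t uc, hol_t mc)
{
    if (!mc.valid) return true;    /* only the unicast queue has a frame */
    if (!uc.valid) return false;   /* only the multicast queue has a frame */
    /* smaller arrival tag = older frame; tag wraparound ignored for brevity */
    return uc.arrival <= mc.arrival;
}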
The port control requests an FCB release only after the EOF for the multicast frame has been read by all ports to which the frame is destined.
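Deferring the release until the last destination has read the EOF can be pictured with a per-frame bitmap of pending ports; this bitmap is an assumption for illustration.

#include <stdbool.h>

typedef struct {
    unsigned pending_ports;   /* one bit per destination port still reading */
} mcast_fcb_t;

/* Called when a destination port has read the EOF; returns true when the
 * last pending port has finished and the FCB may finally be released. */
static bool mcast_eof_read(mcast_fcb_t *f, unsigned port)
{
    f->pending_ports &= ~(1u << port);
    return f->pending_ports == 0;
}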
3.3 Frame Forwarding To and From CPU
Frame forwarding from the CPU port to a regular transmission port is nearly the same as forwarding between
transmission ports. The only difference is that the physical destination port must be indicated in addition to the
destination MAC address.
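One way to picture the extra information is a descriptor that carries the physical destination port alongside the MAC address; a hypothetical C sketch (field names and the port range are assumptions):

typedef struct {
    unsigned char dest_mac[6];   /* destination MAC address */
    unsigned      dest_port;     /* physical destination port (0..25 assumed) */
    /* ... frame data follows ... */
} cpu_tx_desc_t;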
Frame forwarding to the CPU port is nearly the same as forwarding to a regular transmission port. The only
difference is in frame scheduling. Instead of using the patent-pending Zarlink Semiconductor scheduling algorithms,
scheduling for the CPU port is simply based on strict priority. That is, a frame in a high priority queue will always be
transmitted before a frame in a lower priority queue. There are four output queues to the CPU and one receive
queue.
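Strict priority amounts to always serving the highest-priority non-empty queue; a minimal C sketch of that scan over the four CPU output queues (the queue representation is an assumption):

#include <stdbool.h>

#define CPU_TX_QUEUES 4

typedef struct { int head, tail; int slot[64]; } queue_t;

static bool queue_empty(const queue_t *q) { return q->head == q->tail; }

/* Returns the index of the queue to serve next, or -1 if all are empty. */
static int cpu_port_next_queue(const queue_t q[CPU_TX_QUEUES])
{
    for (int prio = 0; prio < CPU_TX_QUEUES; prio++)   /* 0 = highest priority */
        if (!queue_empty(&q[prio]))
            return prio;
    return -1;
}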
4.0 Memory Interface
4.1 Overview
The MVTX2604 provides two 64-bit wide SRAM banks, SRAM Bank A and SRAM Bank B, with a 64-bit bus connected to each. Each DMA can read from and write to both Bank A and Bank B. The following figure provides an overview of the MVTX2604 SRAM banks.