The instruction cache can be invalidated by setting HID0[ICFI]. The instruction cache can be locked by setting HID0[ILOCK]. The instruction
cache supports only the valid/invalid states.
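As a minimal sketch only (not taken from this manual), the following C fragment shows how supervisor-level code might set these bits. It assumes GCC-style inline assembly and HID0 at SPR 1008; the ICFI and ILOCK mask values follow the conventional HID0 layout for this family but are assumptions that should be checked against the HID0 register description, as should the exact invalidate sequence.

/* Illustrative sketch: invalidating and locking the L1 instruction cache
 * by writing HID0, as described above.  The mask values below are assumed
 * bit positions and should be verified against the HID0 description. */
#define HID0_ICFI  0x00000800u   /* instruction cache flash invalidate (assumed) */
#define HID0_ILOCK 0x00002000u   /* instruction cache lock (assumed) */

static inline unsigned long read_hid0(void)
{
    unsigned long v;
    __asm__ volatile("mfspr %0, 1008" : "=r"(v));   /* HID0 is SPR 1008 */
    return v;
}

static inline void write_hid0(unsigned long v)
{
    __asm__ volatile("mtspr 1008, %0" : : "r"(v));
    __asm__ volatile("isync");                      /* synchronize after the write */
}

void icache_flash_invalidate(void)
{
    unsigned long hid0 = read_hid0();
    write_hid0(hid0 | HID0_ICFI);    /* trigger the flash invalidate ...            */
    write_hid0(hid0 & ~HID0_ICFI);   /* ... then clear the bit again (check manual) */
}

void icache_lock(void)
{
    write_hid0(read_hid0() | HID0_ILOCK);   /* freeze the current cache contents */
}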
The MPC7400 also implements a 64-entry (16-set, four-way set-associative) branch target
instruction cache (BTIC). The BTIC is a cache of branch instructions that have been
encountered in branch/loop code sequences. If the target instruction is in the BTIC, it is
fetched into the instruction queue a cycle sooner than it can be made available from the
instruction cache. Typically the BTIC contains the first two instructions in the target stream.
The BTIC can be disabled and invalidated through software.
For more information and timing examples showing cache hit and cache miss latencies, see
Section 6.3.2, “Instruction Fetch Timing.”
1.2.5 L2 Cache Implementation
The L2 cache is a unified cache that receives memory requests from both the L1 instruction
and data caches independently. The L2 cache is implemented with an on-chip, two-way,
set-associative tag memory, and with external, synchronous SRAMs for data storage. The
external SRAMs are accessed through a dedicated L2 cache port that supports a single bank
of 512-Kbyte, 1-Mbyte, or 2-Mbyte synchronous SRAMs. The L2 cache normally operates
in write-back mode and supports system cache coherency through snooping.
Depending on its size, the L2 cache is organized into 32-, 64-, or 128-byte lines. Lines are
subdivided into 32-byte sectors (blocks), the unit at which cache coherency is maintained.
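As an illustrative check only, the small C program below reproduces the arithmetic implied by this organization: for each supported L2 size it derives the line size, the number of 32-byte coherency sectors per line, and the total number of lines (the size-to-line-size pairing is taken from the text above).

#include <stdio.h>

int main(void)
{
    /* L2 sizes and their line sizes, per the text above. */
    const unsigned long size_kb[]   = { 512, 1024, 2048 };
    const unsigned long line_size[] = { 32, 64, 128 };      /* bytes */

    for (int i = 0; i < 3; i++) {
        unsigned long bytes   = size_kb[i] * 1024;
        unsigned long lines   = bytes / line_size[i];
        unsigned long sectors = line_size[i] / 32;           /* 32-byte sectors per line */
        printf("%4lu-KB L2: %3lu-byte lines, %lu sector(s) per line, %lu lines\n",
               size_kb[i], line_size[i], sectors, lines);
    }
    return 0;
}

In every configuration this works out to 16K lines in total, which is consistent with the two-way tag array with 8K tags per way described in the next paragraph.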
The L2 cache controller contains the L2 cache control register (L2CR), which includes bits
for enabling parity checking, setting the L2-to-processor clock ratio, and identifying the
type of RAM used for the L2 cache implementation. The L2 cache controller also manages
the L2 cache tag array, two-way set-associative with 8K tags per way. Each sector (32-byte
cache block) has its own valid, shared, and modified status bits. The L2 implements the
MERSI protocol using three status bits per sector.
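Purely as a data-structure sketch (the exact three-bit encoding of the five MERSI states is not given in this excerpt), the per-sector status described above might be modeled as follows; all type names are hypothetical.

/* Hypothetical model of a tag entry: one address tag per line, with valid,
 * shared, and modified status bits kept per 32-byte sector.  The L2 derives
 * one of the five MERSI states from these bits; the actual encoding is not
 * specified here. */
enum MersiState { MERSI_MODIFIED, MERSI_EXCLUSIVE, MERSI_RECENT, MERSI_SHARED, MERSI_INVALID };

struct L2Sector {
    unsigned valid    : 1;
    unsigned shared   : 1;
    unsigned modified : 1;
};

struct L2TagEntry {
    unsigned long   tag;         /* address tag, one per cache line            */
    struct L2Sector sector[4];   /* up to four 32-byte sectors (128-byte line) */
};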
Requests from the L1 cache generally result from instruction misses, data load or store
misses, write-through operations, or cache management instructions. These requests are
looked up in the L2 tags and are serviced by the L2 cache if they hit; they are forwarded to
the bus interface if they miss.
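The following self-contained C sketch illustrates that hit/miss routing in hypothetical terms only; l2_tag_hit, l2_service, and bus_forward are stand-in names, and the tag lookup is stubbed out.

#include <stdbool.h>
#include <stdio.h>

typedef struct { unsigned long addr; bool is_store; } L1Request;

/* Stand-in for the two-way tag compare; a real lookup would index the tag
 * array with part of the address and compare the stored tags. */
static bool l2_tag_hit(unsigned long addr)
{
    (void)addr;
    return false;                /* placeholder: always miss in this sketch */
}

static void l2_service(const L1Request *req)
{
    printf("L2 hit:  0x%lx serviced from the L2 SRAMs\n", req->addr);
}

static void bus_forward(const L1Request *req)
{
    printf("L2 miss: 0x%lx forwarded to the bus interface\n", req->addr);
}

void handle_l1_request(const L1Request *req)
{
    if (l2_tag_hit(req->addr))
        l2_service(req);         /* satisfied by the L2 cache      */
    else
        bus_forward(req);        /* sent on to the bus interface   */
}

int main(void)
{
    L1Request req = { 0x1000, false };
    handle_l1_request(&req);
    return 0;
}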
The L2 cache can accept multiple simultaneous accesses. The L1 instruction cache can
request an instruction at the same time that the L1 data cache is requesting data. L1 data
cache requests are handled through the data reload table (shown in Figure 1-1), which
can have up to eight outstanding data cache misses. The L2 cache also services snoop
requests from the bus. When multiple requests to the L2 cache are pending, snoop requests
have the highest priority, followed by load and store requests from the L1 data cache and
then by instruction fetch requests from the L1 instruction cache. For more information, see
Chapter 3, “L1 and L2 Cache Operation.”
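A minimal sketch of that fixed priority ordering, with hypothetical names, is shown below: the enum order encodes the priority, so a simple scan returns a pending snoop request first, then an L1 data cache request, then an L1 instruction fetch.

#include <stdbool.h>

enum L2Requester {
    REQ_SNOOP = 0,     /* snoop request from the bus: highest priority            */
    REQ_L1_DATA,       /* load/store request from the L1 data cache               */
    REQ_L1_IFETCH,     /* instruction fetch request from the L1 instruction cache */
    REQ_COUNT
};

/* Return the highest-priority pending requester, or -1 if none is pending. */
int next_l2_requester(const bool pending[REQ_COUNT])
{
    for (int r = 0; r < REQ_COUNT; r++)
        if (pending[r])
            return r;
    return -1;
}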