MPC7400 RISC Microprocessor User's Manual
Memory Performance Considerations
• Write-back: Configuring a memory region as write-back lets a processor modify
data in the cache without updating system memory. For such locations, memory
updates occur only on modified cache block replacements, cache flushes, or when
one processor needs data that is modified in another's cache. Therefore, configuring
memory as write-back can help when bus traffic could cause bottlenecks, especially
for multiprocessor systems and for regions in which data, such as local variables, is
used often and is coupled closely to a processor.
If multiple devices use data in a memory region marked write-back, snooping
must be enabled to allow the copyback and cache invalidation operations necessary
to ensure cache coherency. The MPC7400's snooping hardware keeps other devices
from accessing invalid data: when snooping is enabled, the MPC7400 monitors the
transactions of other bus devices. For example, if another device needs data that is
modified in the MPC7400's cache, the access is delayed so the MPC7400 can copy
the modified data to memory.
• Write-through: Store operations to memory marked write-through always update
both system memory and the on-chip cache on cache hits. Because valid cache
contents always match system memory marked write-through, cache hits from other
devices do not cause modified data to be copied back as they do for locations marked
write-back. However, all write operations are passed to the bus, which can limit
performance. Load operations that miss the on-chip cache must wait for the external
store operation.
Write-through configuration is useful when cached data must agree with external
memory (for example, video memory), when shared (global) data may be needed
often, or when it is undesirable to allocate a cache block on a cache miss.
Chapter 3, "L1 and L2 Cache Operation," describes the caches, memory configuration, and
snooping in detail.
6.5.2 Effect of TLB Miss on Performance
A TLB miss causes a hardware search of the page table entries (PTEs) and a reload of the
TLB. Table 6-2 shows some estimated latencies; each is the sum of the latencies for the
table search, the TLB reload, and a reaccess of the TLB.
Table 6-2. Effect of TLB Miss on Performance

Cache Hit/Miss                                                        Latency
100% L1 cache hit                                                     9 cycles
100% L1 cache miss, 100% L2 cache hit, core-to-L2 clock ratio 1:1     15 cycles
100% L1 cache miss, 100% L2 cache hit, core-to-L2 clock ratio 1.5:1   17 cycles
100% L1 cache miss, 100% L2 cache hit, core-to-L2 clock ratio 2:1     18 cycles
100% L1 and L2 cache miss, bus at 2.5:1 with 6:3:3:3 memory timing    28 cycles