
feature of the read buffer should be enabled. A high read buffer hit average when read prefetching is enabled implies that the prefetched data is being used frequently. A low read buffer hit average with read prefetching enabled can indicate that the prefetched data is not being used; in that case, disabling the read prefetch feature can be desirable.
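As an illustration, system firmware could sample the read buffer hit statistics at run time and disable read prefetching when the hit average stays low. The register addresses, bit definitions, and 25% threshold in the following C sketch are hypothetical placeholders, not actual élanSC520 register definitions:

    #include <stdint.h>

    /* Hypothetical memory-mapped counter and control locations; these are
     * placeholders for illustration, not the actual elanSC520 register map. */
    #define PERF_READ_BUF_HITS    (*(volatile uint32_t *)0xFFFEF000u)
    #define PERF_READ_BUF_TOTAL   (*(volatile uint32_t *)0xFFFEF004u)
    #define SDRAM_CTL             (*(volatile uint8_t  *)0xFFFEF010u)
    #define SDRAM_CTL_RD_PREFETCH 0x01u  /* assumed read-prefetch enable bit */

    /* Disable read prefetching if fewer than one in four buffered reads hit
     * (the 25% threshold is an arbitrary example value). */
    static void tune_read_prefetch(void)
    {
        uint32_t hits  = PERF_READ_BUF_HITS;
        uint32_t total = PERF_READ_BUF_TOTAL;

        if (total != 0u && hits < (total / 4u))
            SDRAM_CTL &= (uint8_t)~SDRAM_CTL_RD_PREFETCH;
    }

In practice, the decision threshold would be chosen from measured hit averages for the target workload rather than a fixed constant.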
SDRAM Page and Bank Miss Monitoring
SDRAM devices support either two or four internal banks. The page width of the internal banks is defined by the device’s symmetry. The élanSC520 microcontroller’s SDRAM controller supports SDRAM devices with 8-, 9-, 10-, or 11-bit column address widths, resulting in 1-, 2-, 4-, or 8-KB page widths on the élanSC520 microcontroller’s 32-bit data bus.
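Because each column on the 32-bit data bus holds one DWORD (4 bytes), the page width follows directly from the column address width: 2^8 x 4 = 1 KB, 2^9 x 4 = 2 KB, 2^10 x 4 = 4 KB, and 2^11 x 4 = 8 KB. A minimal C helper expressing this relationship:

    #include <stdint.h>

    /* Page width in bytes for a 32-bit (4-byte) SDRAM data bus:
     * 2^column_bits columns, each holding one DWORD.
     * 8 bits -> 1 KB, 9 -> 2 KB, 10 -> 4 KB, 11 -> 8 KB */
    static uint32_t sdram_page_bytes(unsigned column_bits)
    {
        return (uint32_t)(1ul << column_bits) * 4u;
    }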
Overhead is associated with opening an internal bank. Therefore, the more often an open internal bank is utilized, the greater the overall performance. An internal bank’s page is left open after each access. A page miss occurs when a master accesses a page that is not the currently open page within that internal bank. The penalty is the delay associated with closing the currently open page and opening the requested page of the requested internal bank. A bank miss occurs when a master accesses an internal bank in which no page is currently open, for example, after a refresh cycle. The penalty is the delay associated with opening the requested page.
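The page hit, page miss, and bank miss cases can be pictured as an open-row tracking model, sketched below in C. This model is illustrative only and is not a description of the SDRAM controller's internal implementation; a four-bank device is assumed:

    #include <stdint.h>
    #include <stdbool.h>

    #define NUM_BANKS 4u           /* two- or four-bank devices; four assumed */

    typedef enum { PAGE_HIT, PAGE_MISS, BANK_MISS } sdram_access_t;

    struct bank_state {
        bool     open;             /* any page currently open in this bank? */
        uint32_t open_row;         /* page left open after the last access */
    };

    static struct bank_state banks[NUM_BANKS];

    /* Classify one access and update the open-page state for that bank. */
    static sdram_access_t classify(uint32_t bank, uint32_t row)
    {
        struct bank_state *b = &banks[bank];

        if (!b->open) {            /* e.g., first access after a refresh cycle */
            b->open = true;
            b->open_row = row;
            return BANK_MISS;      /* only the requested page must be opened */
        }
        if (b->open_row == row)
            return PAGE_HIT;       /* access falls within the open page */

        b->open_row = row;         /* close the old page, open the new one */
        return PAGE_MISS;
    }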
Either of the two performance monitor resources can be configured to provide a page and bank miss average: the number of read, write, or write buffer transfers that result in a page or bank miss to SDRAM. The performance monitor scores SDRAM page and bank accesses on the basis of an atomic request within the same bus tenure, independent of burst length (i.e., one complete cycle regardless of the amount of data requested during that burst tenure). Therefore, each request is monitored, rather than each DWORD transferred during that request’s tenure. This is because, after an access, the page within each bank of the SDRAM devices remains open, independent of the number of DWORDs requested during the cycle. An Am5x86 CPU, PCI host bridge, or GP bus DMA request of two, three, or four DWORDs that misses within an open page is counted by the performance monitors as only one miss to the page, because the remaining DWORDs of the burst are guaranteed to result in page hits during that same bus tenure. Thus, the first access is counted independently of the amount of data transferred in that single cycle’s bus tenure, rather than unfairly counting one page or bank miss followed by three page hits during a burst of four DWORDs, since the remaining three transfers always result in page hits.
Four independent read or write requests of one DWORD each result in four independent hit or miss events in the performance monitor because each read or write transfer is an individual request. The write buffer always writes single DWORDs; therefore, each write buffer write is counted independently. This is because write-buffer write-backs are single DWORDs and provide a read-around-write function. For example, one DWORD of a cache line written into the write buffer can be written to SDRAM while the remaining DWORDs of the same cache line remain in the write buffer. If a higher priority read request is granted access to SDRAM ahead of the write buffer’s access, the dynamics of the page and bank relationship change, resulting in a different page and bank miss pattern than the same scenario with the write buffer disabled, where the write occurs as a single burst. The performance monitors provide a page-bank-miss/page-bank-hit ratio.
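The counting rule described above can be summarized with a small model: one atomic request contributes exactly one hit or miss event regardless of burst length, while independent single-DWORD requests (including write-buffer write-backs) each contribute their own event. The counters and scaling below are illustrative, not the actual performance monitor registers:

    #include <stdint.h>
    #include <stdbool.h>

    /* Running totals for a simple model of the monitor's counting rule
     * (illustrative; not the real performance monitor counters). */
    static uint32_t page_bank_misses;
    static uint32_t page_bank_hits;

    /* One atomic request is scored exactly once, independent of burst length:
     * only the first DWORD can miss, and the remaining DWORDs of the same
     * burst are guaranteed page hits, so they are not scored separately. */
    static void score_request(bool first_dword_hit, unsigned dwords_in_burst)
    {
        (void)dwords_in_burst;     /* burst length does not affect the score */
        if (first_dword_hit)
            page_bank_hits++;
        else
            page_bank_misses++;
    }

    /* Miss-to-hit ratio, scaled by 1000 to avoid floating point. */
    static uint32_t misses_per_1000_hits(void)
    {
        return page_bank_hits ? (page_bank_misses * 1000u) / page_bank_hits : 0u;
    }

Under this model, one four-DWORD burst calls score_request() once, whereas four single-DWORD requests call it four times, matching the behavior described above.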
SDRAM Page and Bank Miss Analysis
The overall performance of SDRAM is directly
impacted by the overhead associated with a page or
bank miss. The more often an access occurs to an
open page (spatially local to that page), the faster data
is returned to the requesting master during a read
access or written to SDRAM during a master write
access. Master accesses that are sequential will hit
within an open SDRAM page and yield higher SDRAM
access performance than master accesses that result
in heavy thrashing of the pages.
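As a rough illustration of this spatial locality, and ignoring bank interleaving for simplicity, two accesses hit the same open page when their addresses fall within the same page-sized region:

    #include <stdint.h>
    #include <stdbool.h>

    /* Two accesses fall within the same SDRAM page when they share the same
     * page-sized region of the address space (bank interleaving ignored).
     * With a 4-KB page, sequential accesses keep hitting the open page until
     * a 4-KB boundary is crossed. */
    static bool same_page(uint32_t addr_a, uint32_t addr_b, uint32_t page_bytes)
    {
        return (addr_a / page_bytes) == (addr_b / page_bytes);
    }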
Many system parameters and configurations contribute to the dynamics of SDRAM page and bank misses. Some of these are program flow, Am5x86 CPU cache enable, Am5x86 CPU cache write-through vs. write-back mode, the number of active GP bus DMA channels, and the number of PCI masters and their burst sizes. For example, read accesses initiated by the Am5x86 CPU’s prefetcher are typically sequential until the program flow changes as a result of a program branch, and therefore tend to utilize an open SDRAM page more frequently. Am5x86 CPU write accesses tend to be directly program-dependent and unpredictable and can result in SDRAM page thrashing. These dynamics change when the Am5x86 CPU’s cache is enabled and depend on whether the cache operates in write-through or write-back mode. PCI read transfers are linear, and those that request a large burst utilize an open page. Even though the dynamics associated with program flow and master accesses heavily dictate the page and bank miss rates, the user has control over some SDRAM parameters that can lessen the impact associated with system dynamics. These parameters are as follows:
- Adjustable page widths by selecting devices with either 8-, 9-, 10-, or 11-bit column addresses