4.8.4. Setup
The setup stage of the pipeline takes the input data associated with each vertex of the line or triangle
primitive and computes the various parameters required for scan conversion. In formatting this data, the
GMCH maintains sub-pixel accuracy. Data is dynamically formatted for each rendered polygon and
output to the proper processing unit. As part of setup, the GMCH removes polygons from further
processing if they are not facing the user’s viewpoint (referred to as “Back Face Culling”).
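As a generic illustration of the culling test (a minimal sketch, not the GMCH's internal implementation), a screen-space triangle can be classified as back-facing from the sign of its signed area, assuming a counter-clockwise winding convention for front faces:

#include <stdbool.h>

/* Minimal back-face culling sketch: a screen-space triangle is rejected
 * when its vertices wind clockwise, i.e. its signed area is negative.
 * The winding convention (CCW = front-facing) is an assumption here. */
typedef struct { float x, y; } Vec2;

static bool is_back_facing(Vec2 v0, Vec2 v1, Vec2 v2)
{
    /* Twice the signed area, via the 2D cross product of two edge vectors. */
    float area2 = (v1.x - v0.x) * (v2.y - v0.y) -
                  (v2.x - v0.x) * (v1.y - v0.y);
    return area2 < 0.0f;
}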
4.8.5. Texturing
The GMCH allows an image, pattern, or video to be placed on the surface of a 3D polygon. Textures
must be located in system memory. Being able to use textures directly from system memory means that
large complex textures can easily be handled without the limitations imposed by the traditional approach
of only using the display cache.
The texture processor receives the texture coordinate information from the setup engine and the texture
blend information from the scan converter. The texture processor performs texture color or chroma-key
matching, texture filtering (anisotropic, bilinear, and trilinear interpolation), and YUV to RGB
conversions.
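For reference, one common (BT.601-style, full-range) form of the YUV-to-RGB conversion is sketched below. The exact coefficients and range conventions used by the GMCH hardware are not specified here, and the helper names are illustrative only.

#include <stdint.h>

/* Illustrative YUV (YCbCr) to RGB conversion using common BT.601-style
 * coefficients; not a description of the GMCH's actual hardware. */
typedef struct { uint8_t r, g, b; } RGB;

static uint8_t clamp_u8(float v)
{
    return (uint8_t)(v < 0.0f ? 0.0f : (v > 255.0f ? 255.0f : v));
}

static RGB yuv_to_rgb(uint8_t y, uint8_t cb, uint8_t cr)
{
    float u = (float)cb - 128.0f;   /* chroma components centered at 128 */
    float v = (float)cr - 128.0f;

    RGB out = {
        clamp_u8((float)y + 1.402f * v),
        clamp_u8((float)y - 0.344f * u - 0.714f * v),
        clamp_u8((float)y + 1.772f * u),
    };
    return out;
}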
The GMCH supports up to 11 Levels-of-Detail (LODs) ranging in size from 1024x1024 to 1x1 texels.
(A texel is defined as a texture map pixel). Textures need not be square. Included in the texture processor
is a small cache that provides efficient mip-mapping.
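The 11-LOD limit follows from the 1024-texel maximum dimension: each successive LOD halves the previous one until a 1x1 texel map is reached. A minimal sketch of that count:

#include <math.h>

/* Sketch: number of LODs in a full mip chain whose largest dimension is
 * max_dim texels, counting down to 1x1. For max_dim = 1024 this gives the
 * 11 levels cited above (1024, 512, ..., 2, 1). */
static int mip_level_count(int max_dim)
{
    return (int)floor(log2((double)max_dim)) + 1;
}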
Nearest. The texel with coordinates nearest to the desired pixel is used. (This is used if only one LOD is present.)
Linear. A weighted average of a 2x2 area of texels surrounding the desired pixel is used. (This is used if only one LOD is present.)
Mip Nearest. This is used if multiple LODs are present. The appropriate LOD is chosen, and the texel with coordinates nearest to the desired pixel is used.
Mip Linear. This is used if multiple LODs are present. The appropriate LOD is chosen, and a weighted average of a 2x2 area of texels surrounding the desired pixel is used. This is also referred to as bilinear mip-mapping.
Trilinear. Trilinear filtering blends two mip maps of the same image to provide a smooth transition between different mips (the floor and ceiling of the calculated LOD). A sketch of bilinear and trilinear sampling follows this list.
Anisotropic. This can be used if multiple LODs are present. This filtering method improves the visual quality of texture-mapped objects when viewed at oblique angles (i.e., with a high degree of perspective foreshortening). The improvement comes from a more accurate (anisotropic) mapping of screen pixels onto texels, where bilinear or trilinear filtering can yield overly blurred results. Situations where anisotropic filtering demonstrates superior quality include text viewed at an angle, lines on roadways, etc.
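The following sketch shows how the bilinear ("Linear") and trilinear cases above are commonly computed. The texel-fetch helper, texture layout, and LOD selection are simplified placeholders and do not describe the GMCH's hardware datapath.

/* Bilinear and trilinear sampling sketch. fetch_texel() is an assumed
 * helper that returns the texel at integer coordinates (x, y) of a given
 * mip level; clamping/wrapping is assumed to be handled elsewhere. */
typedef struct { float r, g, b; } Color;

Color fetch_texel(int level, int x, int y);   /* assumed helper */

static Color lerp(Color a, Color b, float t)
{
    Color c = { a.r + (b.r - a.r) * t,
                a.g + (b.g - a.g) * t,
                a.b + (b.b - a.b) * t };
    return c;
}

/* "Linear": weighted average of the 2x2 texel neighborhood around the
 * sample point (u, v), given here in texel units of the chosen LOD. */
static Color sample_bilinear(int level, float u, float v)
{
    int   x0 = (int)u, y0 = (int)v;
    float fx = u - (float)x0, fy = v - (float)y0;

    Color top    = lerp(fetch_texel(level, x0,     y0),
                        fetch_texel(level, x0 + 1, y0),     fx);
    Color bottom = lerp(fetch_texel(level, x0,     y0 + 1),
                        fetch_texel(level, x0 + 1, y0 + 1), fx);
    return lerp(top, bottom, fy);
}

/* "Trilinear": blend the bilinear results from the two mip levels that
 * bracket the calculated LOD (its floor and ceiling). */
static Color sample_trilinear(float lod, float u, float v)
{
    int   level = (int)lod;
    float frac  = lod - (float)level;
    /* u and v would normally be rescaled per level; omitted for brevity. */
    return lerp(sample_bilinear(level,     u, v),
                sample_bilinear(level + 1, u, v), frac);
}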