X is full of cryptic acronyms and confusing terminology. This page aims to explain the terms, which can help you tell the difference between a real error and just innocuous technobabble.
Please do add to this as you learn confusingly cool new X.org terms!
What is "DRI"? What is "DRM"? "DDX"?
DRI, DRM, and DDX all refer to different "components" that make up the video driver "stack".
- 2D DDX driver: The 2D video "Device Dependent X" (DDX) driver is what most ordinary 2D client applications use. It handles selecting the video mode and resolution, provides 2d and video acceleration, and does the initial setup for DRI. Ex. xserver-xorg-video-radeon.
- DRI driver: The "Direct Rendering Infrastructure" (DRI) driver is responsible for programming the 3D hardware. Usually DRI drivers use the Mesa state machine. In the DRI, the GLX client-side library loads a DRI driver, named radeon_dri.so.
- Kernel DRM driver: The "Direct Rendering Manager" is the kernel-side component of the DRI that allows applications direct access to the graphics hardware. The DRM is responsible for security and handling resource contention. Ex. radeon.ko
See X/Architecture for a deeper discussion of all of this.
("DRM" in X.org terminology has nothing to do with the http://en.wikipedia.org/wiki/Digital_rights_management DRM access control technology)
What is "OpenGL", "GL", "GLU", "GLUT", etc.?
OpenGL is a 3D graphics abstraction library. GLX packages up OpenGL commands into network packets to send across the X11 network pipe, allowing you to run accelerated 3D remotely. GLU works on top of Open"GL" providing "U"tilities for building spheres and other quadric shapes, tessellation, projection math, etc. GLUT is a simple windowing and input toolkit on top of OpenGL, often used for demos and tutorials.
Mesa is an open-source implementation of OpenGL, that provides both 3D software rendering and DRI drivers for 3D hardware rendering.
What are "TTM" and "GEM"? Why do we have both?
The GEM (Graphics Execution Manager) is a modern memory manager specialized for use with UMA (Unified Memory Architecture) graphics chipsets. It manages graphics memory, controls the execution context and manages cache domains in the GPU. Multiple applications can share graphics device resources without the need to store and restore the entire graphics card state between changes. GEM ensures conflict-free sharing of data between applications by managing the memory synchronization. It uses many existing kernel subsystems for its operations and hence has a very modest code size.
GEM was developed by Intel, starting in May 2008, as a UMA-centric, easy-to-use alternative to the Translation Table Maps memory manager developed by Tungsten Graphics. TTM is still in use by drivers for more complex GPU architectures like radeon and nouveau, though with a different API than it started with.
What is "DRM fence"?
A "fence" in the graphics world is a barrier or synchronization point for execution. Generally you wait for a fence to pass before continuing with some rendering or performing certain operations.
What is "Tearing"?
When your graphics card is working faster than your monitor, the graphics card can produce more frames in the frame buffer than the monitor can actually display. When the monitor grabs a new frame from the graphics card's primary buffer, the displayed image may be made up of parts of two or more different frames. This results in the screen image appearing out of alignment, or "torn". This is especially noticeable in games or video where there is rapid movement.
What is "VSync"?
VSync synchronizes the vertical refresh rate of your monitor with the graphics card's drawing speed (FPS). In other words, the graphics card draws screens at an exact rate to match what the monitor needs, eliminating the possibility of tearing (and reducing power consumption of the graphics card).
However, this synchronization can slow down the graphics card since it has to provide whole frames to the monitor at the monitor's whim. This caps the maximum FPS rate - so if your monitor is 60Hz, you get 60FPS max. Usually that's okay, but if there is even a slight mis-timing of the synchronization it can throw things off majorly, causing every other refresh to be missed. See http://www.tweakguides.com/Graphics_9.html for more detailed info.
What is "Framebuffer Compression"?
Framebuffer compression is a feature of Intel GPUs intended to save power. It works by run length encoding (a compression technique) the scanout buffer (i.e. the one you see on your screen) to a compressed buffer. Subsequently, pipes can use the compressed buffer to send data out to the monitor, which saves memory bandwidth and thus power.
What is "Tiling"?
Tiling is a way of addressing graphics data. Rather than simply accessing memory in a linear fashion (i.e. next pixel is always at the next address in memory), tiling allows the GPU to access pixels "nearby" (usually in a small "tile" around the pixel). This reduces TLB pressure by making GTT lookups less frequent for a given operation. It's especially important for performance on Intel chips.
What are "MTRR"s?
MTRR stands for Memory Type Range Register. MTRRs are part of the CPU and control how the CPU will access given ranges of memory (i.e. cached, uncached or write combined).
What is an "ioctl"?
The ioctl system function (short for http://en.wikipedia.org/wiki/Ioctl "Input/Output Control") lets a program make device-specific requests and change hardware and other kernel parameters that ordinary read and write calls cannot express. It is part of the user-to-kernel interface of the operating system.
From X.org's perspective, ioctl errors indicate that X attempted to do a kernel call but it failed, either due to a bug in the kernel (such as an unsupported operation) or incorrect calling syntax by X.
What is "MMIO"?
Memory-Mapped Input-Output (MMIO) is a method of directly reading and writing to hardware and memory, as opposed to ioctls. Applications that interact with devices open a location on the filesystem corresponding to the device, as they would for an ioctl call, but then use memory mapping system calls to tie a portion of their address space to the device's memory, so they can read and write both operations and data directly.
What is the "Ring Buffer"? What does it mean when it's printed in an I830WaitLpRing bug?
The ring buffer is the chunk of memory that contains commands we send down to the GPU. A WaitLpRing bug is generally a GPU hang, which can be caused by sending the GPU a bad instruction or address.
What does "pipe-A underrun" mean?
Pipe underruns occur when a display pipe (the hardware that actually sends data out to your monitor) can't get the data it needs from memory in time to send it out.
What does "EQ" mean in the common "[mi] EQ overflowing" stuck-in-a-loop bugs?
I think EQ stands for Event Queue. When you see this it means the GPU hung but the server rather than driver noticed.
What are "BO backbuffers" and "frontbuffers"?
BOs are Buffer Objects, or the fundamental memory units of GEM and other memory managers. The front buffer is the buffer you see on your screen (also called the scanout buffer or display buffer). The back buffer is generally private to a given application. In OpenGL, the back buffer is where drawing happens. It doesn't show up in the front buffer until a buffer swap occurs. In single-buffered rendering, drawing occurs directly to the front buffer (usually leading to ugly tearing and partially drawn artifacts).
What are "GTT entries"?
The GPU has an IOMMU of its own. The GTT (or Graphics Translation Table) is the set of page table entries for the GPU IOMMU (GTT is often used to refer to both the page tables and the IOMMU). In order for the GPU to perform an operation on memory (e.g. use it as a scanout buffer, render to it) the memory must be pointed to by GTT entries. That means it must be pinned into physical memory (i.e. not swappable to disk) and GTT pointers must be updated to point at it.
What are "SAREA"s?
The SAREA is part of the DRI1 design. It's used for communicating important information like the location of the front, back and depth buffers and other info between the 2D, 3D and kernel drivers. It no longer exists with DRI2, since clients now talk to the display server through the DRI2 protocol when they need to get buffer addresses.
What is a "RAMDAC"?
A RAMDAC is a RAM Digital to Analog Converter. DACs are used to take pixel data from a pipe and turn it into signals appropriate for the various output types, e.g. VGA, DVI or LVDS.
"Error activating XKB configuration." after upgrade
- After upgrading from 6.10 to 7.04, the following error message can appear on reboot or when trying to change keyboard preferences: Error activating XKB configuration. It can happen under various circumstances:
- a bug in libxklavier library
- a bug in X server (xkbcomp, xmodmap utilities)
- X server with incompatible libxkbfile implementation
There are a few causes for this behavior, and several possible solutions to try:
- Disable the following option in /etc/X11/xorg.conf:
# Option "XkbVariant" "qwerty"
Especially if you have a non-US keyboard, double-check the XkbLayout option. Change the value to one that matches your language (for example, "pl" for Polish, "gb" for British, etc.)
Option "XkbLayout" "ie"
Using gconf-editor, clear both the "layouts" and "options" parameters located at /desktop/gnome/peripherals/keyboard/kbd. See http://lists.freebsd.org/pipermail/freebsd-gnome/2005-December/013059.html
intel(0): I830 Vblank Pipe Setup Failed 0
Harmless message that can be safely ignored. (Removed in newer versions of -intel)