Nvidia Linux Display Driver Beta is a proprietary OpenGL video driver that brings bleeding-edge features to graphics cards produced by Nvidia and running under a GNU/Linux operating system. Both 32-bit (x86) and 64-bit (x86_64) architectures are supported at this time.
Fair warning!
Before reading further, please keep in mind that this is a Beta version. While it brings all the latest features and fixes annoying bugs from previous or current stable releases of the driver, it is still an unstable piece of software that may cause unpredictable issues or even damage your hardware. Because of this, we do not recommend installing this Beta driver on production machines. You have been warned!
Installation instructions
For 32-bit systems:
Make sure that the kernel headers of your Linux distribution are installed, switch to a TTY console using the CTRL+ALT+F2 keyboard combination, locate the installer and type sh ./NVIDIA-Linux-x86-xxx.xx.run as root (where xxx.xx is the current version number of the package) to install the driver.
For 64-bit systems:
Make sure that the kernel headers of your Linux distribution are installed, switch to a TTY console using the CTRL+ALT+F2 keyboard combination, locate the installer and type sh ./NVIDIA-Linux-x86_64-xxx.xx.run as root (where xxx.xx is the current version number of the package) to install the driver.
During the installation, users will be asked whether they want to edit the X configuration file manually or let the installer do all the work. Alternatively, after installation, you can run the nvidia-xconfig command from an X11 terminal emulator to set the new driver as the default and generate the configuration file.
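A minimal sketch of a typical 64-bit installation session, assuming sudo is available and the X server or display manager has already been stopped (xxx.xx again stands for the actual version number of the package):
sudo sh ./NVIDIA-Linux-x86_64-xxx.xx.run
sudo nvidia-xconfig
The first command builds the kernel module against your installed headers and copies the driver libraries into place; the second writes an X configuration file that loads the "nvidia" driver by default.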
If you’re looking for the current stable releases of the Nvidia Linux Display Driver, do not hesitate to search our Linux section. Keep in mind, though, that Nvidia provides both short-lived and long-lived branches, and we recommend going with the long-lived ones for extended support.
What is new in this release:
- Added support for the following GPUs:
- GeForce GTX 1080 Ti
- Quadro M520
- TITAN Xp
- Restored support for the following GPU:
- GRID K520
- Improved compatibility with recent kernels.
- Fixed a bug that caused "nvidia-settings --query all" to print many duplicate entries.
- Fixed a bug that caused applications to crash in some situations when calling glXMakeCurrent while OpenGL threaded optimizations were enabled.
- This frequently occurred when Steam was attempting to make a video appear full-screen.
- Fixed a bug that caused VDPAU applications to use the blit presentation queue when a previous VDPAU application didn't shut down cleanly.
- Fixed hangs and crashes that could occur when an OpenGL context is created while the system is out of available memory.
- Fixed a bug that caused corruption when OpenGL windows were moved or resized.
- Fixed a bug that caused X screens that use Option "UseDisplayDevice" "none" to be resized to 640x480 when using "xrandr -s" to change the screen configuration.
- Fixed a kernel crash that occurred when attempting to map large user memory allocations into CUDA.
- Disabled OpenGL threaded optimizations by default, initially enabled in 378.09, due to various reports of instability.
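- For users who still want to experiment with the feature, threaded optimizations can typically be re-enabled per application via the __GL_THREADED_OPTIMIZATIONS environment variable (the application name below is only a placeholder):
- __GL_THREADED_OPTIMIZATIONS=1 ./my_opengl_app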
- Added support for the following Vulkan extensions:
- VK_EXT_acquire_xlib_display
- VK_EXT_display_control
- VK_EXT_display_surface_counter
- VK_EXT_direct_mode_display
- VK_KHX_external_memory
- VK_KHX_external_memory_fd
- VK_KHX_external_semaphore
- VK_KHX_external_semaphore_fd
- These extensions require a Vulkan loader version >= 1.0.42.
- Removed the X driver's logo splash screen and the corresponding NoLogo and LogoPath xorg.conf options.
- Added the "ResamplingMethod" MetaMode option, adding support for bicubic resampling methods when scaling screen transformations are in use. See the README for more details.
- Fixed a bug that left HDMI and DisplayPort audio muted after a framebuffer console mode was restored. For some displays, this caused the display to remain blank.
- Fixed a bug that caused audio over DisplayPort to stop working when the monitor was unplugged and plugged back in or awoken from DPMS power-saving mode.
- Fixed a regression that caused corruption in certain applications, such as window border shadows in Unity, after resuming from suspend.
What is new in version 375.10 Beta:
- Added support for the following GPUs:
- Quadro P6000
- Quadro P5000
- GeForce GTX 1050
- GeForce GTX 1050 Ti
- Added new X configuration options:
- ForceCompositionPipeline
- ForceFullCompositionPipeline
- which override the MetaMode tokens with the same names.
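- As an illustrative sketch, the equivalent MetaMode token can also be applied to the current MetaMode at runtime (the generic nvidia-auto-select mode is used as a placeholder here):
- nvidia-settings --assign CurrentMetaMode="nvidia-auto-select +0+0 {ForceFullCompositionPipeline=On}"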
- Fixed a bug that caused issues with panning and cursor constraining when mixing PRIME-driven displays with natively driven displays.
- Fixed a bug that caused long delays when leaving the VT or disabling a display device while an OpenGL application is running.
- Improved console restore behavior on systems that use the UEFI Graphics Output Protocol, as well as on most vesafb modes.
- Added support for the RandR TILE property added in RandR 1.5.
- Fixed a bug that prevented nvidia-bug-report.sh from finding relevant messages in kernel log files.
- Fixed a bug that allowed nvidia-installer to attempt loading kernel modules that were built against non-running kernels.
What is new in version 370.23 Beta:
- Added the ability to over- and under-clock certain GeForce GPUs in the GeForce GTX 1000 series and later. For GPUs that allow it, an offset can be applied to clock values in some clock domains of all performance levels. This clock manipulation is done at the user's own risk. See the README documentation of the "CoolBits" X configuration option for more details.
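- A rough sketch of how this is typically exercised (the CoolBits value, GPU index, performance-level index, and offsets below are placeholders, and an X server restart is needed after changing CoolBits; see the README for the values that apply to your configuration):
- sudo nvidia-xconfig --cool-bits=8
- nvidia-settings -a "[gpu:0]/GPUGraphicsClockOffset[3]=50"
- nvidia-settings -a "[gpu:0]/GPUMemoryTransferRateOffset[3]=200"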
- Fixed a bug that prevented Vulkan applications from presenting from multiple queues to the same X11 swapchain.
- Added the "PixelShiftMode" MetaMode option, enabling support for 4K and 8K pixel shift displays. See the README for details.
What is new in version 367.18 Beta:
- Fixed a regression that reduced OpenGL performance on headless X server configurations.
- Fixed a memory leak that occurred after destroying a GLXWindow which still has the current context attached to it.
- Fixed a bug which caused EGL pbuffers to be created with both a front and back buffer, instead of a back buffer only, as is required for EGL.
- Added a new kernel module, nvidia-modeset.ko. This new driver component works in conjunction with the nvidia.ko kernel module to program the display engine of the GPU.
- nvidia-modeset.ko does not provide any new user-visible functionality or interfaces to third party applications. However, in a later release, nvidia-modeset.ko will be used as a basis for the modesetting interface provided by the kernel's direct rendering manager (DRM).
- Reduced flickering and delays when transitioning into or out of G-SYNC mode. As part of this change, monitors that have G-SYNC indicators in their on-screen displays will now always report that they are in G-SYNC mode. The OpenGL G-SYNC visual indicator can be enabled in nvidia-settings to determine when G-SYNC is actually being used.
- GLX protocol for the following OpenGL extension from OpenGL 3.0 has been promoted from unofficial to ARB approved official status:
- GL_EXT_draw_buffers2
- GLX protocol for the following OpenGL 3.0 commands:
- BindBufferRangeNV
- BindBufferBaseNV
- BeginTransformFeedbackNV
- EndTransformFeedbackNV
- GetTransformFeedbackVaryingEXT
- TransformFeedbackVaryingsEXT
- which are part of the following extensions:
- GL_NV_transform_feedback
- GL_EXT_transform_feedback
- has been promoted from unofficial to ARB approved official status.
- With the above changes, GLX protocol for OpenGL 3.0 has been promoted from unofficial to ARB approved official status.
- Added a new system memory allocation mechanism for large allocations in the OpenGL driver. This mechanism allows unmapping the allocation from the process when it is not in use, making more virtual address space available to the application. It is enabled by default for 32-bit OpenGL applications running on Linux 3.11+ with glibc 2.19+. Memory allocated this way will consume space in /dev/shm. Setting the environment variable __GL_DevShmPageableAllocations to 2 will disable this feature.
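- For example, the mechanism can be disabled for a single 32-bit application as follows (the application name is a placeholder):
- __GL_DevShmPageableAllocations=2 ./my_32bit_app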
What is new in version 355.06 Beta:
- Fixed a bug that could cause data from one texture level to overwrite data from the next lowest level, when creating a texture view that did not include the higher of the two levels.
- Fixed a bug that could cause the nvidia-settings control panel to crash when updating the display layout.
- Corrected some erroneous reporting of support for GLX extensions: several extensions were being reported as supported for indirect GLX, which were in fact only supported under direct rendering.
- Added support for the following EGL extensions:
- EGL_KHR_swap_buffers_with_damage
- EGL_NV_stream_consumer_gltexture_yuv
- Replaced the build system for the NVIDIA kernel modules and updated the installer package and nvidia-installer to use the new build system and kernel module source code layout. For more information about the new build system and layout, see the README document at:
- ftp://download.nvidia.com/XFree86/packaging/linux/new-kbuild-for-355/
- Added experimental full OpenGL support to EGL.
- Marked the DeleteUnusedDP12Displays option as deprecated.
- Version 1.5.0 of the X Resize and Rotate specification added a note that dynamically-created outputs will not be destroyed, so this option is deprecated and will be removed in a future driver release.
- Added support for VDPAU profiles added in VDPAU 0.9:
- VDP_DECODER_PROFILE_H264_BASELINE
- VDP_DECODER_PROFILE_H264_CONSTRAINED_BASELINE
- VDP_DECODER_PROFILE_H264_EXTENDED
- VDP_DECODER_PROFILE_H264_PROGRESSIVE_HIGH
- VDP_DECODER_PROFILE_H264_CONSTRAINED_HIGH
- Fixed a bug that prevented more than one RandR output from sharing user-added modes.
- Fixed a bug that caused application-specified swap intervals to be ignored on some screens when using Xinerama.
- Fixed a bug that caused user-supplied RandR modes with nonsensical combinations of the +HSync, -HSync, +VSync, and -VSync flags to corrupt the mode list.
- Added support to make an OpenGL 3.0 and above context current without making current to any drawable.
What is new in version 352.09 Beta:
- Added the ability to configure the swapping behavior for quad-buffered stereo visuals. The driver can be configured to independently swap each eye as it becomes ready, to wait for both eyes to complete rendering before swapping, or to allow applications to specify which of these two behaviors is preferred by setting the swap interval. This setting can be adjusted in the nvidia-settings control panel, or via the NV-CONTROL API.
- Fixed a regression which caused the GPU fan status display to disappear from the nvidia-settings control panel.
- Added reporting of ECC error counts to the nvidia-settings control panel.
- Fixed a bug that sometimes prevented OpenGL sampler objects from being properly deallocated when destroying OpenGL contexts.
- Fixed a bug that caused GLX_EXT_framebuffer_sRGB to incorrectly report sRGB support in 30 bit-per-pixel framebuffer configurations.
- Added support for G-SYNC with sync-to-vblank disabled. This allows applications to use G-SYNC to eliminate tearing for frame rates below the monitor's maximum refresh rate but allow tearing above the maximum refresh rate in order to minimize latency.
- When G-SYNC is active and sync-to-vblank is enabled, the frame rate is limited to the monitor's maximum refresh rate.
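- Sync-to-vblank itself can be toggled in nvidia-settings or, as a per-application sketch, via the __GL_SYNC_TO_VBLANK environment variable (the application name is a placeholder):
- __GL_SYNC_TO_VBLANK=0 ./my_game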
- GLSL gl_Fog.scale is now +infinity when gl_Fog.end equals gl_Fog.start. Previously, the value 0 was used, but this broke certain applications such as the game XIII running on Wine (Wine bug #37068).
- Enabled G-SYNC by default when Unified Back Buffer (UBB) is disabled.
- Updated the NVIDIA GPU driver to avoid using video memory already in use by vesafb.
- Fixed a bug causing loss of stereo synchronization in certain Quadro Sync framelock configurations.
- Fixed a rare deadlock condition when running applications that use OpenGL in multiple threads on a Quadro GPU.
- Fixed a bug which caused truncation of the EGLAttribEXT value returned by eglQueryDeviceAttribEXT() on 64-bit systems.
What is new in version 349.16 Beta:
- Added support for G-SYNC monitors when used together with non-G-SYNC monitors. When G-SYNC is enabled, non-G-SYNC monitors will display with tearing.
- Fixed a bug that caused nvidia-settings to crash when assigning an attribute whose value is a display ID on a system with multiple X screens.
- Updated the reporting of in-use video memory in the nvidia-settings control panel to use the same accounting methods used in other tools such as nvidia-smi. nvidia-settings was not taking some allocations into account, e.g. framebuffer memory for the efifb console on UEFI systems, causing discrepancies in the values reported by different tools.
- Removed the "EnableACPIHotkeys" X configuration option. This option has been deprecated and disabled by default since driver version 346.35. On modern Linux systems, display change hotkey events are delivered to the desktop environment as key press events, and the desktop environment handles the display change by issuing requests through the X Resize and Rotate extension (RandR).
- Added support for lossless H.264/AVC video streams to VDPAU.
- Added support for VDPAU Feature Set F to the NVIDIA VDPAU driver. GPUs with VDPAU Feature Set F are capable of hardware-accelerated decoding of H.265/HEVC video streams.
- Fixed a bug that prevented GPU fan speed changes from being reflected in the text box on the Thermal Settings page.
- Added nvidia-settings commandline support to query the current and targeted GPU fan speed.
- Added a checkbox to nvidia-settings to enable a visual indicator that shows when G-SYNC is being used. This is helpful for displays that don't indicate themselves whether they are operating in G-SYNC mode or normal mode. This setting can also be enabled from the command line: nvidia-settings -a ShowGSYNCVisualIndicator=1
- Added support for the X.Org X server's "-background none" option. When enabled, the NVIDIA driver will try to copy the framebuffer console's contents out of /dev/fb0. If that cannot be done, then the screen is cleared to black.
- Added support for YUV 4:2:0 compression to enable HDMI 2.0 4K@60Hz modes when either the display or GPU is incapable of driving these modes in RGB 4:4:4. See NoEdidHDMI2Check in the README for details.
- Fixed a bug that could cause multi-threaded applications to crash when multiple threads used the EGL driver at the same time.
- Fixed a bug that caused Sync to VBlank to not work correctly with XVideo applications in certain configurations.
- Fixed a bug that prevented the X driver from correctly interpreting some X configuration options when a display device name was given with a GPU UUID qualifier.
What is new in version 346.22 Beta:
- Added support for X.Org xserver ABI 19 (xorg-server 1.17).
- Improved compatibility with recent Linux kernels.
- Fixed a bug that prevented internal 4K panels on some laptops from being driven at a sufficient bandwidth to support their native resolutions.
- Fixed a regression that prevented the NVIDIA kernel module from loading in some virtualized environments such as Amazon Web Services.
- Fixed a regression that caused displays to be detected incorrectly on some notebook systems.
- Fixed a bug that could cause X to freeze when using Base Mosaic.
- Fixed a regression that prevented the NVIDIA X driver from recognizing Base Mosaic layouts generated by the nvidia-settings control panel.
What is new in version 346.16 Beta:
- Added support for the following GPUs:
- GeForce GTX 970M
- GeForce GTX 980M
- Fixed a bug that caused a blank screen when setting a mode requiring YUV 4:2:0 compression. These modes are not currently supported.
- Fixed a bug that caused an incorrect DisplayPort link configuration to be displayed after a hotplug or unplug.
- Added support for decoding VP8 video streams using the NVCUVID API on GPUs with VP8 hardware decode support.
- Added support for the following EGL extensions:
- EGL_EXT_device_base
- EGL_EXT_platform_device
- EGL_EXT_output_base
- Added the ability to increase the operating voltage on certain GeForce GPUs in the GeForce GTX 400 series and later. Voltage adjustments are done at the user's own risk. See the documentation on the "CoolBits" X configuration option in the README for details.
- Added support for NVENC on GeForce GPUs. For more details on the NVENC SDK, see:
- https://developer.nvidia.com/nvidia-video-codec-sdk.
- Removed a sanity check in nvidia-installer that tested the availability of POSIX shared memory. The NVIDIA GPU driver has not required POSIX shared memory since release 270.xx.
- Added accelerated support for r8g8b8a8, r8g8b8x8, b8g8r8a8 and b8g8r8x8 RENDER formats.
- Updated nvidia-settings to take advantage of GTK+ 3, when available. This is implemented by building the nvidia-settings user interface into separate shared libraries (libnvidia-gtk2.so, libnvidia-gtk3.so), and loading the correct one at run-time.
- Added the nvidia-settings option --gtk-library to allow specifying the path of the directory containing the user interface library or the path and filename of the specific library to use.
- Added support in nvidia-settings for a GTK+ 3 user interface on x86 and x86_64.
- Added the nvidia-settings option --use-gtk2 to force the use of the GTK+ 2 UI library.
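- For example (the library path below is purely illustrative; the actual installation location depends on your distribution):
- nvidia-settings --use-gtk2
- nvidia-settings --gtk-library=/usr/lib/nvidia/libnvidia-gtk3.so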
- Updated nvidia-installer to install a file in the system's xorg.conf.d directory, when a sufficiently new X server is detected, to cause the X server to load the "nvidia" X driver automatically if it is started after the NVIDIA kernel module is loaded.
- This feature is supported in X.Org xserver 1.16 and higher when running on Linux 3.9 or higher with CONFIG_DRM enabled.
- Improved the performance of nvidia-installer by enabling the use of parallel make when building the NVIDIA kernel modules. The concurrency level can be set with the --concurrency-level option, and defaults to the number of detected CPUs.
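- For example, to build the kernel modules with four parallel jobs (xxx.xx is again the package version placeholder):
- sudo sh ./NVIDIA-Linux-x86_64-xxx.xx.run --concurrency-level=4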
- Updated nvidia-installer to determine default installation locations for libraries based on the presence of known paths in the ldconfig(8) cache and the filesystem, rather than hardcoded distro-specific paths.
- Fixed a GLSL compiler bug that would produce corruption when running games such as Far Cry 3 in Wine.
- Fixed the EGL_KHR_stream_cross_process_fd extension.
- Fixed rendering corruption that sometimes happened when calling
- DrawElementsInstancedBaseVertexBaseInstance(),
- DrawElementsInstancedBaseInstance(),
- or DrawArraysInstancedBaseInstance().
- Dramatically improved OpenGL Framebuffer Object creation performance.
- Removed the limit on the maximum number of OpenGL Framebuffer Objects.
- Updated the NVIDIA OpenGL driver to prefer $XDG_CACHE_HOME over $HOME as the default location for storing the GL shader disk cache.
What is new in version 343.13 Beta:
- Fixed a bug that caused disabled displays to be implicitly included in the target selection for some queries and assignments on the nvidia-settings command line interface, in the absence of any explicit target selection.
- Added a new attribute to the NV-CONTROL API to query the current utilization of the video decode engine.
- Fixed a bug where the Exchange Stereo Eyes setting in nvidia-settings didn't work in certain stereo configurations.
- Worked around a Unigine Heaven 3.0 shader bug which could cause corruption when tessellation is enabled by implementing an application profile that uses the "GLIgnoreGLSLExtReqs" setting. See the documentation for the __GL_IGNORE_GLSL_EXT_REQS environment variable for more details.
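- As a sketch, the same workaround can be applied manually to an arbitrary application through that environment variable (treated here as a boolean toggle; the application name is a placeholder):
- __GL_IGNORE_GLSL_EXT_REQS=1 ./my_app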
- Fixed a memory leak when destroying EGL surfaces.
- Added support for multiple simultaneous EGL displays.
- Removed support for G8x, G9x, and GT2xx GPUs, and motherboard chipsets based on them. Ongoing support for new Linux kernels and X servers, as well as fixes for critical bugs, will be included in 340.* legacy releases through the end of 2019.
- Fixed a bug that could cause nvidia-installer to unsuccessfully attempt to delete the directory containing precompiled kernel module interfaces, on packages prepared with --add-this-kernel.
- Updated nvidia-installer to log uninstallation to a separate file from the installation log, and to attempt uninstalling previous driver installations using the installer program from the previous installation, when available.
What is new in version 340.17 Beta:
- Made various improvements and corrections to the information reported to GL applications via the KHR_debug and ARB_debug_output extensions.
- Fixed a bug that caused GLX applications which simultaneously create drawables on multiple X servers to crash when swapping buffers.
- Updated nvidia-settings to report all valid names for each target when querying target types, e.g. `nvidia-settings -q gpus`.
- Added support for controlling the availability of Fast Approximate Antialiasing (FXAA) on a per-application basis via the new __GL_ALLOW_FXAA_USAGE environment variable and the corresponding GLAllowFXAAUsage application profile key. See the README for details.
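- As a sketch, FXAA availability might be withheld from a single application like this (the application name is a placeholder, and the exact value semantics are described in the README):
- __GL_ALLOW_FXAA_USAGE=0 ./my_app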
- Fixed a bug where indirect rendering could become corrupted on system configurations that disallow writing to executable memory.
- Updated the nvidia-settings Makefiles to allow nvidia-settings to be dynamically linked against the host system's libjansson. This option can be enabled by setting the NV_USE_BUNDLED_LIBJANSSON Makefile variable to 0. Please note that nvidia-settings requires libjansson version 2.2 or later.
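- For example, when building from the nvidia-settings source directory:
- make NV_USE_BUNDLED_LIBJANSSON=0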
- Added initial support for G-SYNC monitors. Additional details and system requirements can be found at: http://www.geforce.com/hardware/technology/g-sync
- Fixed an X driver bug that caused gamma ramp updates of the green channel at depth 15, on some recent GPUs, to be ignored.
What is new in version 337.19 Beta:
- Fixed a bug causing mode validation to fail for 4K resolutions over HDMI in certain situations.
- Added nvidia-settings command line controls for over- and under-clocking attributes. Please see the nvidia-settings(1) manual page for more details.
- Fixed several cosmetic issues in the clock control user interface of nvidia-settings.
- Added support for the GLX_EXT_stereo_tree extension. For more details, see the extension specification:
- http://www.opengl.org/registry/specs/EXT/glx_stereo_tree.txt
- Enabled support for using Unified Back Buffer (UBB) and 3D Stereo with the composite extension on Quadro cards. Using stereo with a composite manager requires a stereo-aware composite manager. Otherwise, only the left eye of stereo applications will be displayed. See the GLX_EXT_stereo_tree extension specification for more details.
What is new in version 337.12 Beta:
- Added support for the following GPUs:
- GeForce 830M
- GeForce 840M
- GeForce 845M
- GeForce GTX 850M
- GeForce GTX 860M
- GeForce GTX 870M
- GeForce GTX 880M
- GeForce GT 705
- GeForce GT 720
- Fixed a bug that could cause OpenGL programs to freeze under some low memory conditions.
- Updated the display configuration page in nvidia-settings to uniquely identify DisplayPort 1.2 monitors by displaying the monitor GUIDs.
- Fixed a bug that could cause ECC settings to be displayed incorrectly in nvidia-settings when changing ECC settings on a multi-GPU system.
- Removed the "OnDemandVBlankInterrupts" X configuration option: this option has been enabled by default since version 177.68 of the NVIDIA Unix driver, and the documentation had not been updated to reflect the new default value.
- Fixed a bug that caused GPU errors when hotplugging daisy-chained DisplayPort 1.2 displays.
- Updated the color correction settings page in the nvidia-settings control panel to reflect gamma changes made by other RandR clients while the control panel was already running.
- Fixed a bug that prevented the use of multiple simultaneous X servers on UEFI systems.
- Updated the nvidia-settings source package to build libXNVCtrl when building nvidia-settings, instead of relying on a pre-built library.
- Added the ability to over- and under-clock certain GeForce GPUs in the GeForce GTX 400 series and later. For GPUs that allow it, an offset can be applied to clock values in some clock domains of some performance levels. This clock manipulation is done at the user's own risk. See the README documentation of the "CoolBits" X configuration option for more details.
- Updated the minimum required version of GTK+ from 2.2 to 2.4 for nvidia-settings.
- Renamed the RandR output property _GUID to GUID now that it is an official property documented in randrproto.txt:
- http://cgit.freedesktop.org/xorg/proto/randrproto/commit/?id=19fc4c5a72eb9919d720ad66734029d9f8e313b1
- Reduced CPU utilization and GPU memory utilization of the NVIDIA EGL driver.
- Added support for the following EGL extensions:
- EGL_EXT_buffer_age
- EGL_EXT_client_extensions
- EGL_EXT_platform_base
- EGL_EXT_platform_x11
- Renamed the "Clone" setting of the "MetaModeOrientation" X configuration option to "SamePositionAs", to make clear that this setting applies to the position only, and not to the resolution of modes in the MetaMode.
- Added NV-CONTROL attribute NV_CTRL_VIDEO_ENCODER_UTILIZATION to query utilization percentage of the video encoder engine.
- Added support for the GLX_NV_delay_before_swap extension. For more details, see the extension specification:
- http://www.opengl.org/registry/specs/NV/glx_delay_before_swap.txt
- Report correct buffer sizes for RGB GLX visuals, GLXFBConfigs, and EGLConfigs. Previously, RGB10 and RGB8 formats were reported as having 32 bits, and RGB5 formats were reported as having 16 bits. Now they are correctly reported as 30, 24, and 15 bit formats respectively as required by the GLX and EGL specifications.
What is new in version 334.16 Beta:
- Fixed a bug that could cause nvidia-settings to compute incorrect gamma ramps when adjusting the color correction sliders.
- Updated the nvidia-settings control panel to allow the selection of display devices using RandR and target ID names when making queries targeted towards specific display devices.
- Fixed a bug that prevented some dropdown menus in the nvidia-settings control panel from working correctly on older versions of GTK+ (e.g. 2.10.x).
- Updated the nvidia-settings control panel to provide help text for application profile keys and suggestions for valid key names when configuring application profiles.
- Updated the nvidia-settings control panel to populate the dropdown menu of stereo modes with only those modes which are available.
- Fixed a bug that could cause applications using the OpenGL extension ARB_query_buffer_object to crash under Xinerama.
- Fixed a bug that caused high pixelclock HDMI modes (e.g. as used with 4K resolutions) to be erroneously reported as dual-link in the nvidia-settings control panel.
- Fixed a bug that prevented some DisplayPort 1.2 displays from being properly restored after a VT switch.
- Renamed the per-GPU proc directories in /proc/driver/nvidia/gpus/ to use the GPU's bus location, represented in "domain:bus:device.function" format.
- Added 64-bit EGL and OpenGL ES libraries to 64-bit driver packages.
- Changed format of "Bus Location" field reported in the /proc/driver/nvidia/gpus/0..N/information files from "domain:bus.device.function" to "domain:bus:device.function" to match the lspci format.
- Fixed a bug in the GLX_EXT_buffer_age extension where incorrect ages would be returned unless triple buffering was enabled.
- Changed the driver's default behavior to stop deleting RandR 1.2 outputs corresponding to unused DisplayPort 1.2 devices. Deleting these outputs can confuse some applications. Added a new option, DeleteUnusedDP12Displays, which can be used to turn this behavior back on. This option can be enabled by running sudo nvidia-xconfig --delete-unused-dp12-displays
- Improved support for the __GL_SYNC_DISPLAY_DEVICE and VDPAU_NVIDIA_SYNC_DISPLAY_DEVICE environment variables in certain configurations. Both environment variables will now recognize all supported display device names. See "Appendix C. Display Device Names" and "Appendix G. VDPAU Support" in the README for more details.
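- For example (the display name DP-1 and the application names are placeholders; see the README appendices for the device names valid on your system):
- __GL_SYNC_DISPLAY_DEVICE=DP-1 ./my_opengl_app
- VDPAU_NVIDIA_SYNC_DISPLAY_DEVICE=DP-1 ./my_vdpau_player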
- Improved performance of the X driver when handling large numbers of surface allocations.
- Fixed a bug that caused PBO downloads of cube map faces to retrieve incorrect data.
- Added experimental support for ARGB GLX visuals when Xinerama and Composite are enabled at the same time on X.Org xserver 1.15.
What is new in version 331.17 Beta:
- Fixed a bug that prevented configuration files containing application profiles from being loaded when directories were present in the application profile configuration search path.
- Deferred initialization of libselinux in the NVIDIA OpenGL driver, in order to avoid a problem where libselinux might not be ready when the NVIDIA libGL shared library is first loaded.
- Fixed a bug that could lead to memory exhaustion in OpenGL applications running on 32-bit systems.
- Added nvidia-uvm.ko, the NVIDIA Unified Memory kernel module, to the NVIDIA Linux driver package. This kernel module provides support for the new Unified Memory feature in an upcoming CUDA release.
What is new in version 331.13 Beta:
- Fixed a bug that caused the X server to fail to initialize when DisplayPort 1.2 monitors were assigned to separate X screens on the same GPU.
- Fixed a bug that could cause a deadlock when forking from OpenGL programs which use some malloc implementations, such as TCMalloc.
- Fixed a bug that prevented Warp & Blend settings from being retained across display configuration changes.
- Fixed a bug that prevented some settings changes made via the nvidia-settings command line interface from being reflected in the nvidia-settings graphical user interface.
- Changed the clipping behavior of the NVIDIA X driver on Trapezoids and Triangles for some RENDER operations to match the behavior in newer versions of Pixman:
- http://lists.freedesktop.org/archives/pixman/2013-April/002755.html
- Fixed a bug in MetaMode tracking that could cause spurious error messages to be printed when attempting to add or delete MetaModes via NV-CONTROL.
- Fixed a bug that caused the NVIDIA X driver to attempt to load the X11 "shadow" module unconditionally, even in situations where the driver had no need to use the module. This could result in the printing of spurious error messages, on X servers where the module was not present.
- Fixed a bug that prevented display configuration changes made with xvidtune(1) from working correctly.
- Fixed a bug that occasionally caused display corruption in GLX applications while changing the display configuration.
- Fixed a bug that prevented glReadPixels from working correctly when reading from Pixel Buffer Objects over indirect rendering, when the image width is not a multiple of 4.
- Added a new NV-CONTROL attribute, NV_CTRL_BACKLIGHT_BRIGHTNESS, for controlling backlight brightness.
- Fixed a bug that prevented nvidia-settings from creating display device configuration pages for newly connected DisplayPort 1.2 Multi Stream Transport downstream devices.
- Added GPU utilization reporting to the nvidia-settings control panel.
- Fixed a bug in the nvidia-settings control panel that prevented users from configuring stereo, when stereo was not already configured.
- Added support for reporting the tachometer-measured fan speed on capable graphics boards via nvidia-settings and the NV-CONTROL API. The preexisting mechanism for reporting fan speed reports the speed of the fan as programmed by the driver. For example, `nvidia-settings --query=[fan:0]/GPUCurrentFanSpeedRPM`.
- Fixed a regression that caused GPUs that do not support graphics to not appear in nvidia-settings.
- Fixed a bug that caused DisplayPort 1.2 multi-stream devices to stop working if they were unplugged and plugged back in while they were active in the current MetaMode.
- Added support for multiple NVIDIA kernel modules. This feature allows users to assign different GPUs in the system to different NVIDIA kernel modules, potentially reducing the software overhead of coordinating access to multiple GPUs.
- Added support for the EGL API on 32-bit platforms. Currently, the supported client APIs are OpenGL ES 1.1, 2.0 and 3.0, and the only supported window system backend is X11.
- Added a new option, AllowEmptyInitialConfiguration, which allows the X server to start even if no connected display devices are detected at startup. This option can be enabled by running "sudo nvidia-xconfig --allow-empty-initial-configuration".
- This option is useful in RandR 1.4 display offload configurations where no display devices are connected to the NVIDIA GPU when the X server is started, but might be connected later.
- Updated nvidia-installer to provide a scrollable text area for displaying messages from the /usr/lib/nvidia/alternate-install-present and /usr/lib/nvidia/alternate-install-available distro hook files. This allows for longer messages to be provided in these files.
- Updated nvidia-installer to avoid recursing into the per-kernel "build" and "source" directories when searching for conflicting kernel modules in /lib/modules.
- Added a system memory cache to improve the performance of certain X rendering operations that use software rendering fallbacks. The X configuration option "SoftwareRenderCacheSize" may be used to configure the size of the cache.
- Removed the "DynamicTwinView" X configuration option: dynamic reconfiguration of displays is always possible, and can no longer be disabled.
- Fixed a bug that caused nvidia-settings to display incorrect information in its display configuration page when all displays on an X screen were turned off.
- Updated nvidia-installer to only install the libraries libvdpau and libvdpau_trace if an existing installation of libvdpau is not detected on the system. This behavior can be overridden with the --install-vdpau-wrapper and --no-install-vdpau-wrapper options.
- Future NVIDIA Linux installer packages will no longer include copies of libvdpau or libvdpau_trace: VDPAU users are recommended to install these libraries via other means, e.g. from packages provided by their distributors, or by building them from the sources available at:
- http://people.freedesktop.org/~aplattner/vdpau/