GPU processing with Spatial Analyst

Available with Spatial Analyst license.

Available with 3D Analyst license.

ArcGIS Pro offers enhanced performance with the use of graphics processing unit (GPU) processing for some tools. This technology takes advantage of the computing power of the graphics card in modern computers to improve the performance of certain operations.

The following tools currently support GPU processing:

GPU processing

A GPU is a hardware component in a computer designed to accelerate the rendering of graphics on the screen display. More recently, the processing power of GPUs has also been applied to general computing tasks.

For tools that are GPU accelerated, the raster processing task is directed to the GPU instead of the central processing unit (CPU). Certain types of operations benefit from this approach. In those cases, the software divides the processing task into many small portions, which are sent to the GPU for computation. The GPU performs the calculations for all of the small portions in parallel, typically much faster than the CPU could process them sequentially. The resulting data is sent back, and the software reassembles the individual pieces into the final complete product.
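
The following simplified Python sketch illustrates this split, compute, and reassemble pattern. It is purely illustrative: it uses NumPy and CPU threads in place of a GPU and is not the implementation used by the tools.

    # Illustrative only: mimic the split/compute/reassemble pattern described
    # above with NumPy and CPU threads; the real tools dispatch tiles to the GPU.
    import numpy as np
    from concurrent.futures import ThreadPoolExecutor

    def process_tile(tile):
        # Placeholder per-cell computation.
        return np.sqrt(np.abs(tile))

    def process_raster(raster, tile_rows=256):
        # Split the raster into strips, compute them in parallel, reassemble in order.
        tiles = [raster[r:r + tile_rows, :] for r in range(0, raster.shape[0], tile_rows)]
        with ThreadPoolExecutor() as pool:
            results = list(pool.map(process_tile, tiles))
        return np.vstack(results)

    raster = np.random.rand(1024, 1024).astype(np.float32)
    output = process_raster(raster)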

Supported GPU cards and drivers

Various GPU solutions are available. Currently, only NVIDIA GPUs with a CUDA compute capability of 5.2 or higher are supported by the tool. Your system must have an appropriate card installed to access this capability.

To check the type of graphics card on a Windows computer, open the Device Manager of your system and expand Display adapters. The brand names and types of graphics cards are listed there. If no NVIDIA brand graphics card is listed, you cannot access this capability and the tool will use the CPU.

If an NVIDIA graphics card is listed, review the type of GPU installed on your system using the NVIDIA Control Panel:

  1. Right-click an empty area on your desktop.
  2. In the context menu, click NVIDIA Control Panel.
  3. In the control panel window, go to the Help menu and click System Information. All the NVIDIA graphics cards, their driver versions, and other properties are displayed.

Once you determine the type of NVIDIA GPU card, look up its CUDA compute capability on the NVIDIA help page for CUDA GPUs. In the relevant section, locate the specific GPU card and note the compute capability value listed for it. It must be 5.2 or higher to be supported by the tool.
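
As an alternative to the NVIDIA Control Panel, the following Python sketch calls the nvidia-smi utility to list the installed NVIDIA GPUs and their compute capability. The compute_cap query field is only available with relatively recent drivers; with older drivers, look up the value on the NVIDIA CUDA GPUs page as described above.

    # Sketch: list NVIDIA GPUs and their compute capability through nvidia-smi.
    # The compute_cap field requires a recent driver version.
    import subprocess

    result = subprocess.run(
        ["nvidia-smi", "--query-gpu=index,name,compute_cap", "--format=csv,noheader"],
        capture_output=True, text=True, check=True,
    )
    for line in result.stdout.strip().splitlines():
        index, name, compute_cap = [field.strip() for field in line.split(",")]
        supported = float(compute_cap) >= 5.2
        print(f"GPU {index}: {name}, compute capability {compute_cap}, supported: {supported}")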

When a GPU card is installed on a machine, it comes with a default driver. Before running an analysis tool that uses a GPU, you must update the GPU card drivers to the latest available version on the NVIDIA driver update page.

GPU configuration

The tool uses one GPU for computation. However, if your computer has only one GPU, it will be used both for display and for computation. In this case, a warning message will be reported when the tool is run, indicating that the display may appear unresponsive. For spatial analysis, it is recommended that you use two GPUs: one for display and the other for computation.

In the case of multiple GPUs in a system, two system environment variables determine which GPU will be used: CUDA_DEVICE_ORDER and CUDA_VISIBLE_DEVICES.

By default, the CUDA_DEVICE_ORDER system environment variable is set to FASTEST_FIRST. This means that multiple GPUs in a machine will be numbered from fastest to slowest, starting from 0. To number the GPUs based on how they are installed in a machine, modify the CUDA_DEVICE_ORDER environment setting to PCI_BUS_ID.
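
Note that the nvidia-smi utility generally enumerates devices in PCI bus order, so its index values correspond to the PCI_BUS_ID ordering and may differ from the default FASTEST_FIRST numbering used by the CUDA runtime. The following sketch lists the devices with their PCI bus IDs for comparison.

    # Sketch: list GPUs with their PCI bus IDs; nvidia-smi indices follow PCI bus
    # order, which may differ from the CUDA runtime's FASTEST_FIRST ordering.
    import subprocess

    subprocess.run(
        ["nvidia-smi", "--query-gpu=index,name,pci.bus_id", "--format=csv"],
        check=True,
    )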

A GPU in the Tesla Compute Cluster (TCC) driver mode is considered to be faster than a GPU in the default Windows Display Driver Model (WDDM) mode. Consequently, it will be listed first (index 0) and will be used to run the tool by default. If no GPU is available in the TCC driver mode and the CUDA_DEVICE_ORDER environment variable is left at its default setting, the fastest GPU (index 0) will be used unless you specify otherwise.

To specify a particular GPU or to disable GPU use, set the CUDA_DEVICE_ORDER and CUDA_VISIBLE_DEVICES system environment variables as follows (a script-based sketch follows the list):

  • To use a different GPU, specify it through the CUDA_VISIBLE_DEVICES system environment variable. First create this environment variable if it doesn't already exist on your system. Then set its value to the index (0 for the first device, 1 for the second, and so on) of the GPU device you want to use, and restart the application. The index depends on the order determined by the CUDA_DEVICE_ORDER system environment variable, which you can modify by following the same steps as for CUDA_VISIBLE_DEVICES.
  • If you do not want the analysis to use any of the GPU devices installed in your system, set the CUDA_VISIBLE_DEVICES system environment variable to -1 and restart the application. The tool will run using the CPU only.
  • To enable a tool to use a GPU device again, either delete the CUDA_VISIBLE_DEVICES system environment variable or set its value to the index value of the GPU device you want to use, and restart the application.
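
The steps above create persistent system environment variables. For a standalone Python script, a minimal sketch such as the following can instead apply the settings to that process only, provided the variables are set before the first GPU tool call, because the CUDA runtime reads them when it initializes. The tool call itself is left as a placeholder.

    # Sketch: control which GPU a standalone script uses. These assignments
    # affect only this process; to change the setting machine-wide, create the
    # system environment variables as described above and restart the application.
    import os

    os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"    # number GPUs by PCI slot order
    os.environ["CUDA_VISIBLE_DEVICES"] = "1"          # use the second GPU only
    # os.environ["CUDA_VISIBLE_DEVICES"] = "-1"       # or disable GPU use entirely

    # ... run the GPU-enabled geoprocessing tool here, for example through arcpy ...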

For more information on the CUDA_VISIBLE_DEVICES and CUDA_DEVICE_ORDER system environment variables, see the CUDA Toolkit Programming Guide.

The following subsections describe the recommended configuration steps for achieving optimal processing when using the GPU capability.

Set the TCC driver mode

For NVIDIA GPUs, set the GPU that is used for computation to the TCC driver mode, rather than the default WDDM driver mode. TCC mode dedicates the GPU to computation, which avoids the overhead and timeout restrictions of the Windows display driver and allows the GPU to operate more efficiently.

To enable the TCC driver mode, use the NVIDIA System Management Interface (nvidia-smi) control program, typically found at C:\Program Files\NVIDIA Corporation\NVSMI\nvidia-smi.exe. For example, the 'nvidia-smi -dm 1 -i 2' command switches the card with device ID 2 to driver model 1 (TCC).
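
The following sketch checks the current and pending driver model of each GPU and then switches an example device to TCC. It assumes an elevated (administrator) prompt and a driver recent enough to support the driver_model query fields; the change does not take effect until the machine is rebooted.

    # Sketch: query the driver model of each GPU, then switch device 2 to TCC.
    # Requires administrator rights; reboot for the change to take effect.
    import subprocess

    subprocess.run(
        ["nvidia-smi", "--query-gpu=index,name,driver_model.current,driver_model.pending",
         "--format=csv"],
        check=True,
    )
    subprocess.run(["nvidia-smi", "-dm", "1", "-i", "2"], check=True)  # 1 = TCC, 0 = WDDM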

Note:

If you are running ArcGIS Server, the GPU used for computation must be in the TCC driver mode.

Disable the ECC mode

Disable the Error Correcting Code (ECC) mode for the GPU used for computation, since ECC mode reduces the memory available on the GPU.

To disable the ECC mode, use the NVIDIA System Management Interface (nvidia-smi) control program typically found at C:\Program Files\NVIDIA Corporation\NVSMI\nvidia-smi.exe. For example, the 'nvidia-smi -e 0 -i 1' command disables the ECC mode for the GPU with device ID 1.
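
The following sketch disables ECC for an example device and then queries the ECC status to verify the pending change. It assumes an elevated (administrator) prompt; the new setting takes effect after the machine is rebooted.

    # Sketch: disable ECC on device 1 and display its ECC status.
    # Requires administrator rights; reboot for the change to take effect.
    import subprocess

    subprocess.run(["nvidia-smi", "-e", "0", "-i", "1"], check=True)          # 0 = disable ECC
    subprocess.run(["nvidia-smi", "-q", "-i", "1", "-d", "ECC"], check=True)  # show ECC state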

Increase the TDR setting

If the GPU used for computation is in the WDDM driver mode, the Windows display device driver can reset the GPU if any computation takes longer than a couple of seconds. This is known as the Windows timeout detection and recovery (TDR) condition. If this happens, the tool will fail to complete, and a GPU error will be returned.

You can modify the TdrDelay registry key to avoid this situation. Setting it to an appropriate value (for example, 60 seconds) allows time for a lengthy operation to complete before the TDR condition is triggered. On most Windows systems, the path to the TdrDelay key in the registry is HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\GraphicsDrivers. If the TdrDelay key does not exist, you must create it in this path. Make a backup of the registry before you create or change this value. You must reboot your machine for the change to take effect. For more information, see Timeout detection and recovery (TDR) in the Microsoft developer documentation.
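
The following sketch shows one way to create or update the TdrDelay value with Python's winreg module. It must be run as an administrator; back up the registry first and reboot afterward, as noted above. The 60-second delay is an example value.

    # Sketch: set the TdrDelay value (in seconds) under the GraphicsDrivers key.
    # Run as administrator, back up the registry first, and reboot afterward.
    import winreg

    key_path = r"System\CurrentControlSet\Control\GraphicsDrivers"
    with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, key_path, 0, winreg.KEY_SET_VALUE) as key:
        winreg.SetValueEx(key, "TdrDelay", 0, winreg.REG_DWORD, 60)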

Caution:

Esri is not responsible for any system problem that may occur if the registry is improperly modified. Ensure that you have a valid registry backup to revert to if problems are encountered, or have a qualified systems analyst perform the change.
