Performance tips

Users expect tools to run as quickly as possible, so your web tool and geoprocessing service must be efficient. Since ArcGIS Server can accommodate multiple clients at once, inefficient services can overload server resources. The more efficient the services, the more clients can be served with the same computing resources.

Use layers for project data

When running a tool before sharing it as a web tool, use layers as input rather than paths to datasets on disk. A layer references a dataset on disk and caches the dataset's properties. Using a layer instead of a path to the dataset gives a performance advantage: when the service starts, it creates the layer from the dataset, caches basic dataset properties, and keeps the dataset open. When the service runs, the dataset properties are immediately available, and the dataset is already open and ready to be acted upon. The advantage is especially noticeable for network dataset layers and raster layers.

For example, an ArcGIS Network Analyst extension service that creates drive-time polygons benefits from using a network dataset layer. Depending on the size of the dataset, this can save one to two seconds or more per service run.
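A minimal sketch of this pattern, assuming arcpy and hypothetical dataset paths and tool choices:

```python
import arcpy

# Hypothetical path; substitute your own project data.
parcels_fc = r"C:\data\project.gdb\parcels"

# Create a layer once, up front, so dataset properties are cached
# and the dataset stays open between runs.
parcels_lyr = arcpy.management.MakeFeatureLayer(parcels_fc, "parcels_lyr")

# Run tools against the layer rather than the path on disk.
arcpy.analysis.Buffer(parcels_lyr, r"memory\parcel_buffers", "100 Feet")
```

When this script is published as a web tool, the layer is created when the service starts, so each subsequent run skips the cost of opening the dataset and reading its properties.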

Use data local to ArcGIS Server

The project data required by the web tool must be local to ArcGIS Server. Accessing data over a network share (UNC path) is slower than accessing data on the same machine. Performance numbers vary widely, but it's common for reading and writing data across a LAN to take twice as long as on a local disk.

Write intermediate data to memory

Write intermediate data to the memory workspace. Writing data to memory is faster than writing data to disk.

Note:

Output data can be written to memory as long as the option to view outputs as a map image layer is not set.

Learn more about writing data to memory
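A short sketch of the pattern, assuming arcpy and hypothetical paths: intermediate output is written to the memory workspace, and only the final result goes to disk.

```python
import arcpy

# Intermediate data goes to the memory workspace instead of disk.
clipped = arcpy.analysis.Clip(
    r"C:\data\project.gdb\roads",       # hypothetical input
    r"C:\data\project.gdb\study_area",  # hypothetical clip features
    r"memory\roads_clipped"             # intermediate output in memory
)

# Only the final result is written to disk.
arcpy.analysis.Buffer(clipped, r"C:\data\project.gdb\road_buffers", "50 Meters")
```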

Preprocess data used by tasks

Most web tools are focused workflows that answer specific spatial queries posed by web clients. Since these workflows tend to be specific operations on known data, there is almost always an opportunity to preprocess the data to optimize the operation. For example, adding an attribute or spatial index is a preprocessing step that optimizes spatial or attribute selection operations.

Distances from known locations can be computed using the Near or Generate Near Table tools. For example, suppose the service allows clients to select vacant parcels that are a user-specified distance from the Los Angeles River. You could use the Select Layer By Location tool to perform this selection, but it would be much faster to precompute the distance of every parcel from the Los Angeles River (using the Near tool) and store the computed distance as an attribute of the parcels. You would index this attribute using the Add Attribute Index tool. Now when the client issues a query, the task can perform an attribute selection on the distance attribute rather than a less efficient spatial query.
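The parcel example above can be sketched as follows, assuming arcpy and hypothetical paths and field names. The Near and Add Attribute Index steps run once, ahead of time; only the attribute selection runs per request.

```python
import arcpy

parcels = r"C:\data\project.gdb\parcels"   # hypothetical
river = r"C:\data\project.gdb\la_river"    # hypothetical

# One-time preprocessing: Near stores each parcel's distance to the
# river in a NEAR_DIST field; then index that field.
arcpy.analysis.Near(parcels, river)
arcpy.management.AddIndex(parcels, ["NEAR_DIST"], "near_dist_idx")

# At run time, the task performs a fast attribute selection instead
# of a spatial query. `distance` would come from the client's input.
distance = 500
parcels_lyr = arcpy.management.MakeFeatureLayer(parcels, "parcels_lyr")
arcpy.management.SelectLayerByAttribute(
    parcels_lyr, "NEW_SELECTION", f"NEAR_DIST <= {distance}"
)
```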

Add attribute indexes

If the tool is selecting data using attribute queries, create an attribute index for each attribute used in queries using the Add Attribute Index tool. The index only needs to be created once, and it can be done outside of the model or script.

Add spatial indexes

If the model or script performs spatial queries on shapefiles, create a spatial index for the shapefile using the Add Spatial Index tool. For geodatabase feature classes, spatial indexes are automatically created and maintained. In some circumstances, recalculating a spatial index may improve performance, as described in Spatial indexes in the geodatabase.
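For a shapefile, this is a one-time step before publishing; the path here is hypothetical:

```python
import arcpy

# Shapefiles do not get a spatial index automatically; create one
# so spatial queries against the shapefile are faster.
arcpy.management.AddSpatialIndex(r"C:\data\parcels.shp")
```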

Use synchronous rather than asynchronous for short-running tools

A web tool can be set to run synchronously or asynchronously. When a tool runs asynchronously, the server incurs extra overhead, so running the same task synchronously is always faster. However, for long-running tasks the difference becomes marginal, and a synchronous service cannot provide status and messages while the tool is running, which can result in a poorer experience.

Avoid unnecessary coordinate transformations

If the web tool uses datasets in different coordinate systems, the tools may need to transform the datasets into a single common coordinate system when run. Depending on the size of the datasets, a transformation from one coordinate system to another can add unnecessary overhead. Be aware of the coordinate system of the datasets and whether the tools must perform coordinate transformations.
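One way to check this ahead of time is to compare the spatial references of the datasets and, if they differ, project once before publishing rather than paying the transformation cost on every run. A sketch with hypothetical paths:

```python
import arcpy

# Compare the spatial references of the two input datasets.
sr_a = arcpy.Describe(r"C:\data\project.gdb\parcels").spatialReference
sr_b = arcpy.Describe(r"C:\data\project.gdb\la_river").spatialReference

if sr_a.name != sr_b.name:
    # One-time reprojection into a common coordinate system, so the
    # web tool never transforms on the fly.
    arcpy.management.Project(
        r"C:\data\project.gdb\la_river",
        r"C:\data\project.gdb\la_river_projected",
        sr_a
    )
```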

Reduce data size

Any software that processes data works faster when the dataset is small. The following are ways to reduce the size of geographic data:

  • Remove unnecessary attributes on project data using the Delete Field tool.
  • Line and polygon features have vertices that define their shape. The features may have more vertices than needed, unnecessarily increasing the size of the dataset.
    • The data may contain duplicate vertices or vertices that are so close together that they do not contribute to the definition of the feature.
    • The number of vertices does not fit the scale of analysis. For example, the features contain details that are appropriate at large scales, but the analysis or presentation is at a small scale.
    The Simplify Line, Simplify Polygon, and Generalize tools can be used to remove extraneous vertices from the data to the desired level of detail.
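Both reductions above can be sketched with arcpy; the paths, field names, and tolerance here are hypothetical and should match your data and scale of analysis.

```python
import arcpy

fc = r"C:\data\project.gdb\parcels"  # hypothetical

# Drop attributes the web tool never uses (hypothetical field names).
arcpy.management.DeleteField(fc, ["COMMENTS", "LEGACY_ID"])

# Remove extraneous vertices to match the scale of analysis.
arcpy.cartography.SimplifyPolygon(
    fc,
    r"C:\data\project.gdb\parcels_simplified",
    "POINT_REMOVE",
    "5 Meters"
)
```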