Web tool performance tips

Users expect tools to run as fast as possible, so the web tool must be efficient. Since ArcGIS Server can accommodate multiple clients at once, inefficient services can overload your server. The more efficient the services, the more clients can be served with the same computing resources.

The following tips and techniques can increase the performance of services. They are presented roughly in order of impact: the techniques that offer the largest performance improvements come first. The last few tips save only a few tenths of a second per run, which may still be important in some scenarios.

Use layers for project data

When running a tool prior to sharing it as a web tool, run the tool using layers as input rather than paths to datasets on disk. A layer references a dataset on disk and caches the dataset's properties; this caching is especially beneficial for network dataset layers and raster layers. Using a layer instead of the path to the dataset offers a performance advantage: when the service starts, it creates the layer from the dataset, caches basic dataset properties, and keeps the dataset open. When the service runs, the dataset properties are immediately available and the dataset is already open and ready to be acted upon.

For example, the Viewshed service on the Esri sample server uses layers, as do services that use the ArcGIS Network Analyst extension to create drive-time polygons. Depending on the size of the dataset, this can save one to two seconds or more per service run.
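
As a minimal sketch of this pattern in Python (the geodatabase path, layer name, and buffer distance below are hypothetical), create a layer from the dataset and pass the layer, rather than the path, to subsequent tools:

    import arcpy

    # Create a layer from the dataset; the layer caches basic dataset properties.
    arcpy.management.MakeFeatureLayer(r"C:\data\city.gdb\parcels", "parcels_lyr")

    # Run the tool against the layer instead of the dataset path.
    arcpy.analysis.Buffer("parcels_lyr", r"memory\parcel_buffers", "50 Meters")

When the tool is shared as a web tool, the layer becomes part of the service, so the dataset is opened once at service startup rather than on every run.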

Use data local to ArcGIS Server

The project data required by the web tool must be local to ArcGIS Server. Data accessed over a network share (UNC path) is slower to read and write than data stored on the same machine. Performance numbers vary widely, but it is not uncommon for reading and writing data across a LAN to take twice as long as working against a local disk.

Write intermediate data to memory

Write intermediate (scratch) data to the memory workspace. Writing data to memory is faster than writing data to disk.

Note:

You can also write output data to memory as long as you do not set the option to view outputs as a map image layer.

Learn more about writing data to memory
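
For example, a minimal sketch that writes a scratch result to the memory workspace, assuming hypothetical "roads" and "study_area" input layers:

    import arcpy

    # Write the intermediate result to the memory workspace instead of disk.
    clipped = r"memory\clipped_roads"
    arcpy.analysis.Clip("roads", "study_area", clipped)

    # Use the in-memory intermediate in the next step, then clean it up.
    arcpy.analysis.Buffer(clipped, r"memory\road_buffers", "100 Meters")
    arcpy.management.Delete(clipped)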

Preprocess data used by tasks

Most web tools are focused workflows that answer specific spatial queries posed by web clients. Because these workflows tend to be specific operations on known data, there is almost always an opportunity to preprocess the data and optimize the operation. For example, adding an attribute or spatial index is a preprocessing step that optimizes attribute or spatial selection operations.

  • You can precompute distances from known locations using the Near or Generate Near Table tools. For example, suppose the service allows clients to select vacant parcels that are a user-defined distance from the Los Angeles River. You could use the Select Layer By Location tool to perform this selection, but it would be much faster to precompute the distance of every parcel from the Los Angeles River (using the Near tool) and store the computed distance as an attribute of the parcels. You would index this attribute using the Add Attribute Index tool. Now when the client issues a query, the task can perform an attribute selection on the distance attribute rather than a less efficient spatial query.
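
A minimal sketch of this preprocessing pattern, assuming hypothetical parcels and la_river feature classes and a hypothetical distance value:

    import arcpy

    # One-time preprocessing: compute each parcel's distance to the river
    # (Near adds a NEAR_DIST field) and index that field.
    parcels = r"C:\data\city.gdb\parcels"
    arcpy.analysis.Near(parcels, r"C:\data\city.gdb\la_river")
    arcpy.management.AddIndex(parcels, ["NEAR_DIST"], "near_dist_idx")

    # At run time, the task can use a fast attribute query instead of a
    # slower spatial query.
    arcpy.management.MakeFeatureLayer(parcels, "parcels_lyr")
    arcpy.management.SelectLayerByAttribute(
        "parcels_lyr", "NEW_SELECTION", "NEAR_DIST <= 1000")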

Add attribute indexes

If the tool is selecting data using attribute queries, create an attribute index for each attribute used in queries using the Add Attribute Index tool. You only need to create the index once, and you can do it outside of the model or script.
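
For example, a one-time sketch that indexes two hypothetical query fields on a hypothetical dataset:

    import arcpy

    # One-time step, run outside the model or script.
    parcels = r"C:\data\city.gdb\parcels"
    arcpy.management.AddIndex(parcels, ["ZONING"], "zoning_idx")
    arcpy.management.AddIndex(parcels, ["LANDUSE"], "landuse_idx")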

Add spatial indexes

If the model or script performs spatial queries on shapefiles, create a spatial index for the shapefile using the Add Spatial Index tool. If you are using geodatabase feature classes, spatial indexes are automatically created and maintained. In some circumstances, recalculating a spatial index may improve performance as described in Spatial indexes in the geodatabase.
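
A minimal sketch for a hypothetical shapefile:

    import arcpy

    # Shapefiles do not maintain a spatial index automatically; add one once.
    arcpy.management.AddSpatialIndex(r"C:\data\roads.shp")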

Use synchronous rather than asynchronous

You can set the web tool to run synchronously or asynchronously. When tools are run asynchronously, there is overhead incurred by the server, which means that asynchronous tools rarely run in less than 1 second. Running the same task synchronously is approximately a tenth of a second faster than running it asynchronously.
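
The difference is also visible to clients: a synchronous geoprocessing service exposes an execute operation that returns results directly in the response, while an asynchronous service exposes submitJob and requires the client to poll a job ID. A rough sketch against a hypothetical service URL with a hypothetical Distance parameter:

    import requests

    task = "https://example.com/arcgis/rest/services/MyTool/GPServer/MyTask"

    # Synchronous: execute returns the result in the response.
    sync = requests.get(f"{task}/execute", params={"f": "json", "Distance": 500})

    # Asynchronous: submitJob returns a job ID that the client must poll.
    job = requests.get(f"{task}/submitJob", params={"f": "json", "Distance": 500})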

Avoid unnecessary coordinate transformations

If the web tool uses datasets in different coordinate systems, the tools may need to transform coordinates into a single common coordinate system when run. Depending on the size of the datasets, transforming coordinates from one coordinate system to another can add unnecessary overhead. Be aware of the coordinate system of the datasets and whether the tools must perform coordinate transformations. You may want to transform all datasets used by the tool into a single coordinate system.
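
For example, a minimal sketch that pre-projects a hypothetical dataset into Web Mercator so the tool does not transform coordinates on the fly:

    import arcpy

    # Project the dataset once into the coordinate system the tool works in.
    # 3857 is WGS 1984 Web Mercator (auxiliary sphere); the paths are hypothetical.
    web_mercator = arcpy.SpatialReference(3857)
    arcpy.management.Project(r"C:\data\city.gdb\parcels",
                             r"C:\data\city.gdb\parcels_wm", web_mercator)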

Reduce data size

Any software that processes data works faster when the dataset is small. The following are ways you can reduce the size of geographic data:

  • Remove unnecessary attributes on project data with the Delete Field tool.
  • Line and polygon features have vertices that define their shape. Each vertex is an x,y coordinate. The features may have more vertices than needed, unnecessarily increasing the size of the dataset.
    • If the data comes from an external source, it may contain duplicate vertices or vertices so close together that they do not contribute to the definition of the feature.
    • The number of vertices may not fit the scale of analysis. For example, the features may contain detail that is appropriate at large scales, while the analysis or presentation is at a small scale.
    The Simplify Line, Simplify Polygon, and Generalize tools can remove extraneous vertices until the data matches the desired level of detail, as sketched below.
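
A minimal sketch combining both ideas, with hypothetical dataset, field, and tolerance values:

    import arcpy

    # Drop attributes the web tool never uses.
    parcels = r"C:\data\city.gdb\parcels"
    arcpy.management.DeleteField(parcels, ["COMMENTS", "LEGACY_ID"])

    # Thin extra vertices to match the scale of analysis.
    arcpy.cartography.SimplifyPolygon(parcels, r"C:\data\city.gdb\parcels_simple",
                                      "POINT_REMOVE", "5 Meters")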