Clients want and expect tools to run as fast as possible, so your web tool needs to be fast and efficient. Since ArcGIS Server can accommodate multiple clients at once, inefficient services can overload your server. The more efficient your services, the more clients can be served with the same computing resources.
The following tips and techniques can increase the performance of your services. In general, they are ordered by impact: techniques that offer larger performance boosts come first. The last few tips can shave a few tenths of a second off your execution time, which may still be important in some scenarios.
Use layers for project data
When running a tool prior to sharing it as a web tool, use layers as input rather than paths to datasets on disk. A layer references a dataset on disk and caches properties about that dataset; this is particularly true for network dataset layers and raster layers. Using a layer instead of a path to the dataset gives a performance advantage: when the service starts, it creates the layer from the dataset, caches the dataset's basic properties, and keeps the dataset open. When the service executes, those properties are immediately available and the dataset is already open and ready to be acted upon.
For example, the Viewshed service on the Esri SampleServer and ArcGIS Network Analyst extension services that create drive-time polygons both use layers as input. Depending on the size of the dataset, this can save one to two seconds per service execution.
Use data local to ArcGIS Server
The project data required by your web tool should be local to the ArcGIS Server machine. Reading and writing data over a network share (UNC path) is slower than accessing data on the same machine. Performance numbers vary widely, but it's not uncommon for data access across a LAN to take twice as long as access to a local disk.
Write intermediate data to memory
Write intermediate (scratch) data to the in_memory workspace. Writing data to memory is faster than writing data to disk.
Preprocess data used by your tasks
Most web tools are intended to be focused workflows providing answers to specific spatial queries posed by web clients. Since these workflows tend to be specific operations on known data, there is almost always an opportunity to preprocess data to optimize the operation. For example, adding an attribute or spatial index is a simple preprocess to optimize spatial or attribute selection operations. The following are additional examples:
- The Geoprocessing service example: Watershed tutorial preprocesses hydrologic data by creating flow accumulation and flow direction rasters.
- You can precompute distances from known locations using the Near or Generate Near Table tools. For example, suppose your service allows clients to select vacant parcels that are a user-defined distance from the Los Angeles River. You could use the Select Layer By Location tool to perform this selection, but it would be much faster to precompute the distance of every parcel from the Los Angeles River (using the Near tool) and store the computed distance as an attribute of the parcels. You would index this attribute using the Add Attribute Index tool. Now when the client issues a query, your task can perform a simple and fast attribute selection on the distance attribute rather than a less efficient spatial query.
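The payoff of the Near-tool preprocessing can be sketched in plain Python. The records and field names below are hypothetical; in a real workflow the Near tool computes the distance field once, offline, and the web tool only ever runs the fast attribute test:

```python
import math

# Hypothetical parcel records with a precomputed "dist_to_river" attribute
# (in ArcGIS, the Near tool would write this field once, before publishing).
parcels = [
    {"id": 1, "x": 10.0, "y": 5.0, "dist_to_river": 120.0},
    {"id": 2, "x": 14.0, "y": 9.0, "dist_to_river": 45.0},
    {"id": 3, "x": 2.0,  "y": 1.0, "dist_to_river": 300.0},
]

def select_by_precomputed(parcels, max_dist):
    # Fast path: one attribute comparison per feature, no geometry work.
    return [p["id"] for p in parcels if p["dist_to_river"] <= max_dist]

def select_by_spatial(parcels, river_xy, max_dist):
    # Slow path: recompute every distance on every request.
    rx, ry = river_xy
    return [p["id"] for p in parcels
            if math.hypot(p["x"] - rx, p["y"] - ry) <= max_dist]

print(select_by_precomputed(parcels, 150.0))  # [1, 2]
```

Both functions answer the same question, but the precomputed version does no geometry work at request time, which is exactly the saving the Near-tool preprocessing buys you.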
Add attribute indexes
If your tool is selecting data using attribute queries, create an attribute index for each attribute used in queries. You can use the Add Attribute Index tool. You only need to create the index once, and you can do it outside of your model or script.
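The effect of an attribute index is the same as a database index. A minimal illustration of the principle using the stdlib sqlite3 module (the table and field names are hypothetical, not ArcGIS-specific):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE parcels (id INTEGER, dist_to_river REAL)")
con.executemany("INSERT INTO parcels VALUES (?, ?)",
                [(i, i * 3.5) for i in range(10_000)])

# Create the index once, up front -- analogous to Add Attribute Index.
con.execute("CREATE INDEX idx_dist ON parcels (dist_to_river)")

# The query planner can now search the index instead of scanning every row.
plan = con.execute(
    "EXPLAIN QUERY PLAN SELECT id FROM parcels WHERE dist_to_river < 100"
).fetchall()
print(plan)
```

The query plan reports a search using idx_dist rather than a full table scan; the same shift from scan to index lookup is what speeds up attribute queries in your published task.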
Add spatial indexes
If your model or script does spatial queries on shapefiles, create a spatial index for the shapefile using the Add Spatial Index tool. If you are using geodatabase feature classes, spatial indexes are automatically created and maintained for you. In some circumstances, recalculating a spatial index may improve performance as described in Setting spatial indexes.
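Conceptually, a spatial index bins features so that a query only has to examine nearby features instead of every feature in the dataset. A toy grid-based index in plain Python (real spatial indexes are more sophisticated, but the principle is the same):

```python
from collections import defaultdict

CELL = 10.0  # grid cell size; a real spatial index tunes this automatically

def build_grid_index(points):
    # Assign each point ID to the grid cell containing it.
    grid = defaultdict(list)
    for pid, (x, y) in points.items():
        grid[(int(x // CELL), int(y // CELL))].append(pid)
    return grid

def query_cell(grid, x, y):
    # A spatial query inspects one cell instead of every feature.
    return grid[(int(x // CELL), int(y // CELL))]

pts = {1: (3.0, 4.0), 2: (35.0, 4.0), 3: (7.0, 8.0)}
grid = build_grid_index(pts)
print(sorted(query_cell(grid, 5.0, 5.0)))  # [1, 3]
```

The index is built once; every query afterward touches only the features in the relevant cell, which is why a stale or missing spatial index hurts spatial query performance.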
Use synchronous rather than asynchronous
You can set your web tool to run synchronously or asynchronously. When run asynchronously, there is overhead incurred by the server, which means that asynchronous tools rarely execute in less than 1 second. Executing the same task synchronously is approximately a tenth of a second faster than executing it asynchronously.
Avoid unneeded coordinate transformations
If your web tool uses datasets in different coordinate systems, the tools may need to transform coordinates into a single common coordinate system during execution. Depending on the size of your datasets, transforming coordinates from one coordinate system to another can add unnecessary overhead. Be aware of the coordinate system of your datasets and whether your tools need to perform coordinate transformations. You may want to transform all datasets used by your tool into a single coordinate system.
Reduce data size
Any software that processes data works faster when the dataset is small. The following are a couple of ways you can reduce the size of your geographic data:
- Remove unnecessary attributes from your project data with the Delete Field tool.
- Remove unneeded vertices. Line and polygon features are defined by vertices (x,y coordinates), and features often have more vertices than needed to define their shape, unnecessarily increasing the size of your dataset. The Simplify Line and Simplify Polygon tools can remove such extraneous vertices. Excess vertices typically arise in two ways:
  - Data from an external source may contain duplicate vertices, or vertices so close together that they do not contribute to the definition of the feature.
  - The number of vertices may not fit the scale of analysis. For example, your features contain detail that is appropriate at large scales, but your analysis or presentation is at a small scale.
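The duplicate-vertex case can be sketched in a few lines of plain Python. This is a toy illustration of the idea, not the algorithm the ArcGIS simplification tools use:

```python
def thin_vertices(coords, tol=1e-6):
    """Drop consecutive vertices closer than tol; they add size, not shape."""
    if not coords:
        return []
    kept = [coords[0]]
    for x, y in coords[1:]:
        px, py = kept[-1]
        if abs(x - px) > tol or abs(y - py) > tol:
            kept.append((x, y))
    return kept

# A line with an exact duplicate and a near-duplicate vertex:
line = [(0, 0), (0, 0), (1.0, 2.0), (1.0000000001, 2.0), (3, 4)]
print(thin_vertices(line))  # [(0, 0), (1.0, 2.0), (3, 4)]
```

Fewer vertices means less data to read, transfer, and process on every service execution, which is the point of this entire section.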