Deep learning using the ArcGIS Image Analyst extension

Available with an Image Analyst license.

With the ArcGIS Image Analyst extension, you can perform entire deep learning workflows with imagery in ArcGIS Pro. Use geoprocessing tools to prepare imagery training data, train an object detection, pixel classification, or object classification model, and produce and review results.


This topic describes deep learning for imagery workflows using Image Analyst. For an overview of all the deep learning capabilities in ArcGIS Pro, see Deep learning in ArcGIS Pro.

The workflow is represented in the diagram below.

Deep learning workflow


Deep learning step 1

Create training samples in the Label Objects for Deep Learning pane, and use the Export Training Data For Deep Learning tool to convert the samples into deep learning training data.


The Export Training Data For Deep Learning tool is also supported with the Spatial Analyst extension.

Deep learning step 2

Use the Train Deep Learning Model tool to train a model using PyTorch, or train the model outside of ArcGIS Pro using a supported third-party deep learning framework.

Deep learning step 3

Use the trained model to run the Detect Objects Using Deep Learning tool, the Classify Pixels Using Deep Learning tool, or the Classify Objects Using Deep Learning tool to generate an output.

Review and validate results using the Attributes pane, and compute accuracy using the Compute Accuracy For Object Detection tool.

Features and capabilities

Deep learning tools in ArcGIS Pro allow you to go beyond standard machine learning classification techniques.

  • Use convolutional neural networks or deep learning models to detect objects, classify objects, or classify image pixels.
  • Integrate external deep learning model frameworks, such as TensorFlow, PyTorch, and Keras.
  • Use a model definition file multiple times to detect change over time or detect objects in different areas of interest.
  • Generate a polygon feature class showing the location of detected objects to be used for additional analysis or workflows.
  • Take advantage of GPU processing, as well as CPU for distributed processing.

Get started with deep learning

The creation and export of training samples are done in ArcGIS Pro using the standard training sample generation tools. The deep learning model can be trained with the PyTorch framework using the Train Deep Learning Model tool, or it can be trained outside of ArcGIS Pro using another deep learning framework. Once the model is trained, use an Esri model definition file (.emd) to run geoprocessing tools to detect or classify features in your imagery.

You must install the deep learning framework Python packages; otherwise, an error occurs when you add the Esri model definition file to the deep learning geoprocessing tools. For information about how to install these packages, see Install deep learning frameworks for ArcGIS.

  1. Create and export training samples.
    1. Use the Label Objects for Deep Learning pane or the Training Samples Manager to select or create a classification schema.
    2. Create training site samples for the class categories or features of interest. Save the training sample file.
    3. Run the Export Training Data For Deep Learning geoprocessing tool to convert the source imagery and training samples to deep learning training data. The source imagery can be an image service, a mosaic dataset, a raster dataset, or a folder of rasters. The output of the tool is image chips or samples containing training sites to be used to train the deep learning model. An additional output of the tool is a template .emd file that must be populated.
  2. Train the deep learning model.
    1. Use the Train Deep Learning Model tool to train a deep learning model using the image chips generated in the previous step.
  3. Run the inference geoprocessing tools in ArcGIS Pro.
    1. Use the Detect Objects Using Deep Learning, Classify Pixels Using Deep Learning, or Classify Objects Using Deep Learning geoprocessing tool to process your imagery. If the trained model incorporated custom Python raster functions with additional variables, such as padding or a confidence threshold for fine-tuning the sensitivity, these variables appear on the geoprocessing tool dialog box for user input. The data type, such as string or numeric, is specified in the Python raster function. Ideally, additional inference parameters should be limited to two.

      The Esri model definition parameter value can be an Esri model definition JSON file (.emd) or a JSON string. A JSON string is useful when this tool is used on the server, allowing you to paste the JSON string rather than upload the .emd file.

      The output of the Detect Objects Using Deep Learning tool is a feature class showing the objects detected by the model. The Classify Pixels Using Deep Learning tool outputs a classified raster. The Classify Objects Using Deep Learning tool requires a feature class and imagery as the input datasets, and the result is a feature class in which each input feature is labeled with a class name.
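
Extra inference variables such as padding or a confidence threshold are declared inside the custom Python raster function. The sketch below is a minimal, hypothetical skeleton of such a declaration; the getParameterInfo method name follows Esri's Python raster function pattern, but the parameter names and values here are illustrative assumptions, not taken from any shipped function.

```python
# Minimal, hypothetical skeleton of a custom Python raster function that
# exposes two extra inference parameters (padding and a confidence
# threshold). Method and key names follow Esri's Python raster function
# pattern; treat the details as an assumption, not an exact API contract.
class ObjectDetector:
    def __init__(self):
        self.name = "Object Detector"
        self.description = "Runs a trained model and filters detections."

    def getParameterInfo(self):
        # Each extra variable declared here appears on the geoprocessing
        # tool dialog box; 'dataType' controls how the input is parsed.
        return [
            {
                "name": "padding",
                "dataType": "numeric",
                "value": 0,
                "required": False,
                "displayName": "Padding",
                "description": "Pixels of context added around each tile.",
            },
            {
                "name": "threshold",
                "dataType": "numeric",
                "value": 0.5,
                "required": False,
                "displayName": "Confidence Threshold",
                "description": "Minimum score for a detection to be kept.",
            },
        ]


# Quick sanity check of the declared parameters.
params = ObjectDetector().getParameterInfo()
print([p["name"] for p in params])  # ['padding', 'threshold']
```

Keeping the parameter list to two entries, as recommended above, keeps the tool dialog box simple for users.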

After using a deep learning model, it's important that you review the results and assess the accuracy of the model. Use the Attributes pane to review the results from your object-based inferencing (Classify Objects Using Deep Learning tool or Detect Objects Using Deep Learning tool). You can also use the Compute Accuracy For Object Detection tool to generate a table and report for accuracy assessment.
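
Object detection accuracy is typically assessed by comparing each detection to ground truth with a measure such as intersection over union (IoU). The helper below is an illustrative sketch of that comparison, not the Compute Accuracy For Object Detection implementation.

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes (xmin, ymin, xmax, ymax)."""
    ix = max(0.0, min(box_a[2], box_b[2]) - max(box_a[0], box_b[0]))
    iy = max(0.0, min(box_a[3], box_b[3]) - max(box_a[1], box_b[1]))
    inter = ix * iy
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

# A detection is usually counted as a true positive when its IoU with a
# ground truth box exceeds a chosen threshold (0.5 is a common default).
detected = (0.0, 0.0, 2.0, 2.0)
truth = (1.0, 1.0, 3.0, 3.0)
print(iou(detected, truth))  # 1/7, about 0.143
```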

To learn about the basics of deep learning applications with computer vision, see Introduction to deep learning.

For information about requirements for running the geoprocessing tools, and issues you may encounter, see Deep learning frequently asked questions.

Esri model definition file

The .emd file is a JSON file that describes the trained deep learning model. It contains model definition parameters that are required to run the inference tools, and it should be modified by the data scientist who trained the model. There are required and optional parameters in the file as described in the table below.

Once the .emd file is completed and verified, it can be used in inferencing multiple times, as long as the input imagery is from the same sensor as the original model input, and the classes or objects being detected are the same. For example, an .emd file that was defined with a model to detect oil well pads using Sentinel-2 satellite imagery can be used to detect oil well pads across multiple areas of interest and multiple dates using Sentinel-2 imagery.

Some parameters are used by all the inference tools; these are listed in the table below. Some parameters are only used with specific tools, such as the CropSizeFixed and the BlackenAroundFeature parameters, which are only used by the Classify Objects Using Deep Learning tool.

Model definition file parameters

Framework

  The name of the deep learning framework used to train the model.

  The following deep learning frameworks are supported:

  • TensorFlow
  • Keras
  • PyTorch

  If your model was trained using a deep learning framework that is not listed, a custom inference function (a Python module) is required with the trained model, and you must set InferenceFunction to the Python module path.

ModelConfiguration

  The name of the model configuration.

  The model configuration defines the model inputs and outputs, the inferencing logic, and the assumptions made about the model inputs and outputs. Existing open source deep learning workflows define standard input and output configurations and inferencing logic. ArcGIS supports the following predefined configurations:

  TensorFlow
  • ObjectDetectionAPI
  • DeepLab

  Keras
  • MaskRCNN

  If you used one of the predefined configurations, type the name of the configuration in the .emd file. If you trained your deep learning model using a custom configuration, you must describe the inputs and outputs in full in the .emd file or in the custom Python file.

ModelType

  The type of model.

  • ImageClassification—For classifying pixels
  • ObjectDetection—For detecting objects or features
  • ObjectClassification—For classifying objects and features

ModelFile

  The path to the trained deep learning model file. The file format depends on the model framework; for example, in TensorFlow, the model file is a .pb file.

Description

  Information about the model. Model information can include anything that describes the model you have trained, such as the model number and name, the time of model creation, and performance accuracy.

InferenceFunction

  The path of the inference function.

  An inference function understands the trained model data file and provides the inferencing logic. The following inference functions are supported in the ArcGIS Pro deep learning geoprocessing tools:

  • Detect Objects for TensorFlow
  • Classify Pixels for TensorFlow
  • Detect Objects for Keras
  • Detect Objects for PyTorch
  • Classify Objects for PyTorch

  If you used one of the inference functions above, there is no need to specify it in the .emd file. If your model was trained using a model configuration that is not yet supported, or it requires special inferencing logic, a custom inference function (a Python module) is required with the trained model. In this case, set InferenceFunction to the Python module path. The inference Python module file can be located anywhere ArcGIS Pro can access.

SensorName

  The name of the sensor used to collect the imagery from which the training samples were generated.

RasterCount

  The number of rasters used to generate the training samples.

BandList

  The list of bands used in the source imagery.

ImageHeight

  The number of rows in the image being classified or processed.

ImageWidth

  The number of columns in the image being classified or processed.

ExtractBands

  The band indexes or band names to extract from the input imagery.

Classes

  Information about the output class categories or objects.

DataRange

  The range of data values if scaling or normalization was done in preprocessing.

ModelPadding

  The amount of padding to add to the input imagery for inferencing.

BatchSize

  The number of training samples to be used in each iteration of the model.

PerProcessGPUMemoryFraction

  The fraction of GPU memory to allocate for each iteration in the model. The default is 0.95, or 95 percent.

MetaDataMode

  The format of the metadata labels used for the image chips.

ImageSpaceUsed

  The type of reference system used to train the model.

BandNames

  The names given to each input band, in order of band index. Bands can then be referenced by these names in other tools.

AllTilesStats

  The statistics of each band in the training data.

The following is an example of a model definition file (.emd) that uses a standard model configuration:

    {
        "Framework": "TensorFlow",
        "ModelConfiguration": "ObjectDetectionAPI",
        "Classes": [
            {
                "Value": 0,
                "Name": "Tree",
                "Color": [0, 255, 0]
            }
        ]
    }
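
Because the .emd file is plain JSON, it can be checked programmatically before use. The snippet below is a minimal sketch that parses the example above with Python's standard json module and verifies a few keys; which keys to treat as required is an illustrative choice here, not a rule from this document.

```python
import json

# The example .emd content from above, as a JSON string.
emd_text = """
{
    "Framework": "TensorFlow",
    "ModelConfiguration": "ObjectDetectionAPI",
    "Classes": [
        {"Value": 0, "Name": "Tree", "Color": [0, 255, 0]}
    ]
}
"""

emd = json.loads(emd_text)

# The keys checked here are chosen for illustration; consult the parameter
# descriptions for which entries are required in practice.
for key in ("Framework", "ModelConfiguration", "Classes"):
    if key not in emd:
        raise ValueError(f"missing .emd parameter: {key}")

print(emd["Framework"], "-", emd["Classes"][0]["Name"])  # TensorFlow - Tree
```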

The following is an example of a model definition file (.emd) with more optional parameters in the configuration:

    {
        "Framework": "PyTorch",
        "ModelConfiguration": "FasterRCNN",
        "Description": "This is a river detection model for imagery",
        "DataRange": [0.1, 1.0],
        "MetaDataMode": "PASCAL_VOC_rectangles",
        "ImageSpaceUsed": "MAP_SPACE",
        "Classes": [
            {
                "Value": 1,
                "Name": "River",
                "Color": [0, 255, 0]
            }
        ],
        "InputRastersProps": {
            "RasterCount": 1,
            "SensorName": "Landsat 8",
            "BandNames": ["Red", "Green", "Blue", "NearInfrared"]
        },
        "AllTilesStats": [
            {
                "BandName": "Red",
                "Min": 1,
                "Max": 60419,
                "Mean": 7669.720049855654,
                "StdDev": 1512.7546387966217
            },
            {
                "BandName": "Green",
                "Min": 1,
                "Max": 50452,
                "Mean": 8771.2498195125681,
                "StdDev": 1429.1063589515179
            },
            {
                "BandName": "Blue",
                "Min": 1,
                "Max": 47305,
                "Mean": 9306.0475897744163,
                "StdDev": 1429.380049936676
            },
            {
                "BandName": "NearInfrared",
                "Min": 1,
                "Max": 60185,
                "Mean": 17881.499184561973,
                "StdDev": 5550.4055277121679
            }
        ]
    }
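
Per-band statistics like those in AllTilesStats make preprocessing reproducible at inference time. As an illustrative sketch (not documented tool behavior), a pixel value can be standardized with the stored Mean and StdDev:

```python
# Standardize pixel values with the per-band Mean and StdDev recorded in
# AllTilesStats. Illustrative only; the stats below copy the "Red" band
# entry from the example .emd above.
red_stats = {"Mean": 7669.720049855654, "StdDev": 1512.7546387966217}

def standardize(value, stats):
    """(value - mean) / stddev, the usual z-score normalization."""
    return (value - stats["Mean"]) / stats["StdDev"]

# A pixel at the band mean maps to 0; one standard deviation above maps to 1.
print(round(standardize(red_stats["Mean"], red_stats), 6))  # 0.0
print(round(standardize(red_stats["Mean"] + red_stats["StdDev"], red_stats), 6))  # 1.0
```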

Deep learning model package

A deep learning model package (.dlpk) contains the files and data required to run deep learning inferencing tools for object detection or image classification. The package can be uploaded to your portal as a DLPK item and used as the input to deep learning raster analysis tools.

Deep learning model packages must contain an Esri model definition file (.emd) and a trained model file. The file extension of the trained model depends on the framework you used to train the model. For example, if you trained your model using TensorFlow, the model file will be a .pb file, while a model trained using Keras will generate an .h5 file. Depending on the model framework and the options you used to train your model, you may need to include a Python raster function (.py) or additional files. You can include multiple trained model files in a single deep learning model package.
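
Since a package must bundle an .emd file with at least one trained model file, its contents can be sanity-checked once extracted. The snippet below is a hypothetical sketch: the flat folder layout and the helper name are assumptions for illustration, not part of any .dlpk specification.

```python
import os
import tempfile

# Hypothetical sanity check for an extracted deep learning model package:
# it must contain an Esri model definition (.emd) and at least one trained
# model file (.pb for TensorFlow or .h5 for Keras, per the text above).
MODEL_EXTENSIONS = {".pb", ".h5"}

def check_package_contents(folder):
    names = os.listdir(folder)
    has_emd = any(n.lower().endswith(".emd") for n in names)
    has_model = any(os.path.splitext(n)[1].lower() in MODEL_EXTENSIONS
                    for n in names)
    return has_emd and has_model

# Demonstrate with a throwaway folder standing in for an extracted package.
with tempfile.TemporaryDirectory() as folder:
    open(os.path.join(folder, "tree_model.emd"), "w").close()
    open(os.path.join(folder, "tree_model.pb"), "w").close()
    print(check_package_contents(folder))  # True
```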

Most packages can be opened in any version of ArcGIS Pro. By default, the contents of a package are stored in the <User Documents>\ArcGIS\Packages folder. You can change this location in the download and sharing options. Any package functionality that is not supported by the version of ArcGIS Pro used to consume the package will be unavailable.

To view or edit the properties of a .dlpk package, or to add or remove files from your .dlpk package, right-click the .dlpk package in the Catalog pane and click Properties.

Open deep learning package in Catalog pane

Properties include the following information:

  • Input—The .emd file, trained model file, and any additional files that may be required to run the inferencing tools.
  • Framework—The deep learning framework used to train the model.
  • ModelConfiguration—The type of model training performed (object detection, pixel classification, or object classification).
  • Description—A description of the model. This is optional and editable.
  • Summary—A brief summary of the model. This is optional and editable.
  • Tags—Any tags used to identify the package. This is useful for .dlpk package items stored on your portal.

Deep learning package properties

Any property that is edited in the Properties window is updated when you click OK. If the .dlpk package item is being accessed from your portal in the Catalog pane, the portal item is updated.

For information about how to create a .dlpk package, see Share a deep learning model package.

Developer resources

In addition to the geoprocessing tools and workflows available in ArcGIS Pro, you can also perform deep learning tasks in scripts and notebooks. If you are working with the ArcGIS REST API, use the deep learning tasks available with the Raster Analysis service. These tasks are equivalent to the available geoprocessing tools but allow for distributed processing, depending on your processing configuration.

If you are working in ArcGIS API for Python, there are many additional deep learning tasks available in the arcgis.learn module.

Related topics