Available with an Image Analyst license.
With the ArcGIS Image Analyst extension, you can perform entire deep learning workflows with imagery in ArcGIS Pro. Use geoprocessing tools to prepare imagery training data, train an object detection, pixel classification, or object classification model, and produce and review results.
Note:
This topic describes deep learning for imagery workflows using Image Analyst. For an overview of all the deep learning capabilities in ArcGIS Pro, see Deep learning in ArcGIS Pro.
The workflow is summarized in the table below.
Step | Description |
---|---|
1. Prepare training data | Create training samples in the Label Objects for Deep Learning pane, and use the Export Training Data For Deep Learning tool to convert the samples into deep learning training data. Note: The Export Training Data For Deep Learning tool is also supported with the Spatial Analyst extension. |
2. Train a model | Use the Train Deep Learning Model tool to train a model using PyTorch, or train the model outside of ArcGIS Pro using a supported third-party deep learning framework. |
3. Run inference and review results | Use the trained model to run the Detect Objects Using Deep Learning tool, the Classify Pixels Using Deep Learning tool, or the Classify Objects Using Deep Learning tool to generate an output. Review and validate results using the Attributes pane, and compute accuracy using the Compute Accuracy For Object Detection tool. |
Features and capabilities
Deep learning tools in ArcGIS Pro let you go beyond standard machine learning classification techniques.
- Use convolutional neural networks or deep learning models to detect objects, classify objects, or classify image pixels.
- Integrate external deep learning model frameworks, such as TensorFlow, PyTorch, and Keras.
- Use a model definition file multiple times to detect change over time or detect objects in different areas of interest.
- Generate a polygon feature class showing the location of detected objects to be used for additional analysis or workflows.
- Take advantage of GPU processing, or use the CPU for distributed processing.
Get started with deep learning
The creation and export of training samples are done in ArcGIS Pro using the standard training sample generation tools. The deep learning model can be trained with the PyTorch framework using the Train Deep Learning Model tool, or it can be trained outside of ArcGIS Pro using another deep learning framework. Once the model is trained, use an Esri model definition file (.emd) to run geoprocessing tools to detect or classify features in your imagery.
You must install the deep learning framework Python packages; otherwise, an error occurs when you add the Esri model definition file to the deep learning geoprocessing tools. For information about how to install these packages, see Install deep learning frameworks for ArcGIS.
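Before adding an .emd file to the tools, you can confirm from Python that the framework packages are present in the active environment. The following is a minimal sketch, not part of the ArcGIS installer or its official checks; it only tests whether the listed packages are importable:

```python
from importlib.util import find_spec

def check_frameworks(packages=("torch", "tensorflow", "keras")):
    """Report which deep learning Python packages are importable.

    Returns a dict mapping each package name to True if it can be
    found in the current Python environment, False otherwise.
    """
    return {name: find_spec(name) is not None for name in packages}

# Print a short readiness report before running the geoprocessing tools.
for name, ok in check_frameworks().items():
    print(f"{name}: {'installed' if ok else 'MISSING'}")
```

If any package reports MISSING, install the deep learning frameworks as described in the linked topic before proceeding.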
- Create and export training samples.
- Use the Label Objects for Deep Learning pane or the Training Samples Manager to select or create a classification schema.
- Create training site samples for the class categories or features of interest. Save the training sample file.
- Run the Export Training Data For Deep Learning geoprocessing tool to convert the source imagery and training samples to deep learning training data. The source imagery can be an image service, a mosaic dataset, a raster dataset, or a folder of rasters. The output of the tool is image chips or samples containing training sites to be used to train the deep learning model. An additional output of the tool is a template .emd file that must be populated.
- Train the deep learning model.
- Use the Train Deep Learning Model tool to train a deep learning model using the image chips generated in the previous step.
- Run the inference geoprocessing tools in ArcGIS Pro.
- Use the Detect Objects Using Deep Learning, Classify Pixels Using Deep Learning, or Classify Objects Using Deep Learning geoprocessing tool to process your imagery. If the trained model incorporated custom Python raster functions with additional variables, such as padding or a confidence threshold for fine-tuning the sensitivity, these variables appear on the geoprocessing tool dialog box for user input. The data type, such as string or numeric, is specified in the Python raster function. Ideally, additional inference parameters should be limited to two.
The Esri Model Definition parameter value can be an Esri model definition JSON file (.emd) or a JSON string. A JSON string is useful when this tool is used on the server, so you can paste the JSON string instead of uploading the .emd file.
The output of the Detect Objects Using Deep Learning tool is a feature class showing the objects detected by the model. The Classify Pixels Using Deep Learning tool outputs a classified raster. The Classify Objects Using Deep Learning tool requires a feature class and imagery as the input datasets, and the result is a feature class in which each object within each feature is labeled with a class name.
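Because the Esri Model Definition parameter accepts either a path to an .emd file or the equivalent JSON string, a definition can be assembled and serialized in Python. This is a minimal sketch; the field values mirror the .emd examples later in this topic:

```python
import json

# A minimal model definition, mirroring the .emd examples in this topic.
model_definition = {
    "Framework": "TensorFlow",
    "ModelConfiguration": "ObjectDetectionAPI",
    "ModelFile": "C:\\ModelFolder\\ObjectDetection\\tree_detection.pb",
    "ModelType": "ObjectDetection",
    "ImageHeight": 850,
    "ImageWidth": 850,
    "ExtractBands": [0, 1, 2],
    "Classes": [{"Value": 0, "Name": "Tree", "Color": [0, 255, 0]}],
}

# As a single JSON string, the definition can be pasted into the
# Esri Model Definition parameter instead of uploading an .emd file.
emd_string = json.dumps(model_definition)
print(emd_string)
```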
After using a deep learning model, it's important that you review the results and assess the accuracy of the model. Use the Attributes pane to review the results from your object-based inferencing (Classify Objects Using Deep Learning tool or Detect Objects Using Deep Learning tool). You can also use the Compute Accuracy For Object Detection tool to generate a table and report for accuracy assessment.
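Accuracy assessment for object detection is commonly based on the intersection over union (IoU) between detected and ground reference bounding boxes. The following standalone illustration of the metric is not the Compute Accuracy For Object Detection tool's implementation:

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes (xmin, ymin, xmax, ymax)."""
    ix_min = max(box_a[0], box_b[0])
    iy_min = max(box_a[1], box_b[1])
    ix_max = min(box_a[2], box_b[2])
    iy_max = min(box_a[3], box_b[3])
    inter = max(0.0, ix_max - ix_min) * max(0.0, iy_max - iy_min)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

# A detection is typically counted as a true positive when its IoU with
# a ground reference box exceeds a threshold such as 0.5.
print(iou((0, 0, 2, 2), (1, 1, 3, 3)))  # 1x1 overlap over a union of 7 -> ~0.143
```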
To learn about the basics of deep learning applications with computer vision, see Introduction to deep learning.
For information about requirements for running the geoprocessing tools, and issues you may encounter, see Deep learning frequently asked questions.
Esri model definition file
The .emd file is a JSON file that describes the trained deep learning model. It contains model definition parameters that are required to run the inference tools, and it should be modified by the data scientist who trained the model. There are required and optional parameters in the file as described in the table below.
Once the .emd file is completed and verified, it can be used in inferencing multiple times, as long as the input imagery is from the same sensor as the original model input, and the classes or objects being detected are the same. For example, an .emd file that was defined with a model to detect oil well pads using Sentinel-2 satellite imagery can be used to detect oil well pads across multiple areas of interest and multiple dates using Sentinel-2 imagery.
Some parameters are used by all the inference tools; these are listed in the table below. Some parameters are only used with specific tools, such as the CropSizeFixed and the BlackenAroundFeature parameters, which are only used by the Classify Objects Using Deep Learning tool.
Model definition file parameter | Explanation |
---|---|
Framework | The name of the deep learning framework used to train the model, such as TensorFlow, Keras, or PyTorch. |
ModelConfiguration | The name of the model configuration. The model configuration defines the model inputs and outputs, the inferencing logic, and the assumptions made about the model inputs and outputs. Existing open source deep learning workflows define standard input and output configurations and inferencing logic. ArcGIS supports a set of predefined configurations for the TensorFlow and Keras frameworks, such as the ObjectDetectionAPI configuration used in the first example below. If you used one of the predefined configurations, type the name of the configuration in the .emd file. If you trained your deep learning model using a custom configuration, you must describe the inputs and outputs in full in the .emd file or in the custom Python file. |
ModelType | The type of model, such as ObjectDetection for detecting objects or ImageClassification for classifying pixels. |
ModelFile | The path to a trained deep learning model file. The file format depends on the model framework. For example, in TensorFlow, the model file is a .pb file. |
Description | Provide information about the model. Model information can include anything to describe the model you have trained. Examples include the model number and name, time of model creation, performance accuracy, and more. |
InferenceFunction (Optional) | The path of the inference function. An inference function understands the trained model data file and provides the inferencing logic. Six inference functions are supported in the ArcGIS Pro deep learning geoprocessing tools. |
SensorName (Optional) | The name of the sensor used to collect the imagery from which training samples were generated. |
RasterCount (Optional) | The number of rasters used to generate the training samples. |
BandList (Optional) | The list of bands used in the source imagery. |
ImageHeight (Optional) | The number of rows in the image being classified or processed. |
ImageWidth (Optional) | The number of columns in the image being classified or processed. |
ExtractBands (Optional) | The band indexes or band names to extract from the input imagery. |
Classes (Optional) | Information about the output class categories or objects. |
DataRange (Optional) | The range of data values if scaling or normalization was done in preprocessing. |
ModelPadding (Optional) | The amount of padding to add to the input imagery for inferencing. |
BatchSize (Optional) | The number of training samples to be used in each iteration of the model. |
PerProcessGPUMemoryFraction (Optional) | The fraction of GPU memory to allocate for each iteration in the model. The default is 0.95, or 95 percent. |
MetaDataMode (Optional) | The format of the metadata labels used for the image chips. |
ImageSpaceUsed (Optional) | The type of reference system used to train the model: MAP_SPACE for map space or PIXEL_SPACE for pixel space. |
WellKnownBandNames (Optional) | The names given to each input band, in order of band index. Bands can then be referenced by these names in other tools. |
AllTilesStats | The statistics of each band in the training data. |
The following is an example of a model definition file (.emd) that uses a standard model configuration:
{
"Framework": "TensorFlow",
"ModelConfiguration": "ObjectDetectionAPI",
"ModelFile":"C:\\ModelFolder\\ObjectDetection\\tree_detection.pb",
"ModelType":"ObjectionDetection",
"ImageHeight":850,
"ImageWidth":850,
"ExtractBands":[0,1,2],
"Classes" : [
{
"Value": 0,
"Name": "Tree",
"Color": [0, 255, 0]
}
]
}
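Since the .emd file is plain JSON, it can be sanity-checked in Python before it is handed to the geoprocessing tools. The following sketch is an illustration, not an official Esri validator, and the set of keys it treats as required is taken from the parameter table above:

```python
import json

# Keys treated as required for this check; see the parameter table above.
REQUIRED_KEYS = {"Framework", "ModelConfiguration", "ModelType", "ModelFile"}

def validate_emd(text):
    """Parse .emd JSON text and return (definition, missing required keys)."""
    emd = json.loads(text)
    return emd, sorted(REQUIRED_KEYS - emd.keys())

# Forward slashes are used here because backslashes must be escaped in JSON.
emd, missing = validate_emd("""
{
    "Framework": "TensorFlow",
    "ModelConfiguration": "ObjectDetectionAPI",
    "ModelFile": "C:/ModelFolder/ObjectDetection/tree_detection.pb",
    "ModelType": "ObjectDetection",
    "ImageHeight": 850,
    "ImageWidth": 850
}
""")
if missing:
    raise ValueError(f"Incomplete model definition, missing: {missing}")
```

A malformed file fails at `json.loads`, and a structurally valid file with absent fields is reported before any inferencing is attempted.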
The following is an example of a model definition file (.emd) with more optional parameters in the configuration:
{
"Framework": "PyTorch",
"ModelConfiguration": "FasterRCNN",
"ModelFile":"C:\\ModelFolder\\ObjectDetection\\river_detection.pb",
"ModelType":"ObjectionDetection",
"Description":"This is a river detection model for imagery",
"ImageHeight":448,
"ImageWidth":448,
"ExtractBands":[0,1,2,3],
"DataRange":[0.1, 1.0],
"ModelPadding":64,
"BatchSize":8,
"PerProcessGPUMemoryFraction":0.8,
"MetaDataMode" : "PASCAL_VOC_rectangles",
"ImageSpaceUsed" : "MAP_SPACE",
"Classes" : [
{
"Value": 1,
"Name": "River",
"Color": [0, 255, 0]
}
],
"InputRastersProps" : {
"RasterCount" : 1,
"SensorName" : "Landsat 8",
"BandNames" : [
"Red",
"Green",
"Blue",
"NearInfrared"
]
},
"AllTilesStats" : [
{
"BandName" : "Red",
"Min" : 1,
"Max" : 60419,
"Mean" : 7669.720049855654,
"StdDev" : 1512.7546387966217
},
{
"BandName" : "Green",
"Min" : 1,
"Max" : 50452,
"Mean" : 8771.2498195125681,
"StdDev" : 1429.1063589515179
},
{
"BandName" : "Blue",
"Min" : 1,
"Max" : 47305,
"Mean" : 9306.0475897744163,
"StdDev" : 1429.380049936676
},
{
"BandName" : "NearInfrared",
"Min" : 1,
"Max" : 60185,
"Mean" : 17881.499184561973,
"StdDev" : 5550.4055277121679
}
]
}
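Per-band statistics such as those in the AllTilesStats section are the kind of values a preprocessing step can use to standardize pixel values before inferencing. The following small illustration (not Esri code) uses the Red band's mean and standard deviation from the example above:

```python
# Band statistics as they appear in the AllTilesStats section above.
red_stats = {"Mean": 7669.720049855654, "StdDev": 1512.7546387966217}

def standardize(value, stats):
    """Scale a pixel value to zero mean and unit variance for its band."""
    return (value - stats["Mean"]) / stats["StdDev"]

# A pixel equal to the band mean standardizes to 0.
print(standardize(7669.720049855654, red_stats))  # 0.0
```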
Deep learning model package
A deep learning model package (.dlpk) contains the files and data required to run deep learning inferencing tools for object detection or image classification. The package can be uploaded to your portal as a DLPK item and used as the input to deep learning raster analysis tools.
Deep learning model packages must contain an Esri model definition file (.emd) and a trained model file. The trained model file extension depends on the framework used to train the model. For example, a model trained with TensorFlow produces a .pb file, while a model trained with Keras produces an .h5 file. Depending on the framework and options used to train the model, you may need to include a Python raster function (.py) or additional files. Multiple trained model files can be included in a single deep learning model package.
Most packages can be opened in any version of ArcGIS Pro. By default, package contents are stored in the <User Documents>\ArcGIS\Packages folder. You can change this location in the sharing and download options. Any package functionality that is not supported by the version of ArcGIS Pro used to consume the package is unavailable.
To view or edit the properties of a .dlpk package, or to add or remove files from your .dlpk package, right-click the .dlpk package in the Catalog pane and click Properties.
Properties include the following information:
- Input—The .emd file, trained model file, and any additional files that may be required to run the inferencing tools.
- Framework—The deep learning framework used to train the model.
- ModelConfiguration—The type of model training performed (object detection, pixel classification, or feature classification).
- Description—A description of the model. This is optional and editable.
- Summary—A brief summary of the model. This is optional and editable.
- Tags—Any tags used to identify the package. This is useful for .dlpk package items stored on your portal.
Any property that is edited in the Properties window is updated when you click OK. If the .dlpk package item is being accessed from your portal in the Catalog pane, the portal item is updated.
For information about how to create a .dlpk package, see Share a deep learning model package.
Developer resources
In addition to the geoprocessing tools and workflows available in ArcGIS Pro, you can also perform deep learning tasks in scripts and notebooks. If you are working in ArcGIS REST API, use the deep learning tasks available with the raster analysis service. These tasks are equivalent to the available geoprocessing tools but allow for distributed processing depending on your processing configuration.
If you are working in ArcGIS API for Python, there are many additional deep learning tasks available with the arcgis.learn module.
Related topics
- Introduction to deep learning
- Deep learning in ArcGIS Pro
- Install deep learning frameworks for ArcGIS
- Review results from deep learning
- Train Deep Learning Model
- Classify Pixels Using Deep Learning
- Detect Objects Using Deep Learning
- Non Maximum Suppression
- Export Training Data For Deep Learning