Compute Accuracy For Object Detection (Image Analyst)

Available with Image Analyst license.

Summary

Calculates the accuracy of a deep learning model by comparing the detected objects from the Detect Objects Using Deep Learning tool to ground truth data.

Learn more about how Compute Accuracy For Object Detection works.

Usage

  • This tool generates a table containing information regarding the accuracy of the output from the Detect Objects Using Deep Learning tool.

    The table contains accuracy metrics for each class in the detected data, as well as a row for all classes (overall accuracy). The table contains the following fields:

    • Precision—The ratio of the number of true positives to the total number of predictions.
    • Recall—The ratio of the number of true positives to the total number of ground truth objects (true positives plus false negatives).
    • F1_Score—The harmonic mean of the precision and recall. Values range from 0 to 1, where 1 means highest accuracy.
    • AP—The Average Precision (AP) metric, which is the precision averaged across all recall values between 0 and 1 at a given Intersection over Union (IoU) value.
    • True_Positive—The number of true positives generated by the model.
    • False_Positive—The number of false positives generated by the model.
    • False_Negative—The number of false negatives generated by the model.

    For more information about the metrics provided in the output table and in the accuracy report, see How Compute Accuracy For Object Detection works. A minimal sketch showing how these metrics follow from the true positive, false positive, and false negative counts appears after this list of usage notes.

  • The input ground reference data must contain polygons. If you have point or line data indicating the location of objects, use the Buffer tool to generate a polygon feature class before running this tool.

  • The Intersection over Union (IoU) ratio is used as a threshold for determining whether a predicted outcome is a true positive or a false positive. IoU is the amount of overlap between the bounding box around a predicted object and the bounding box around the ground reference data.

    The IoU ratio is the overlap of the bounding boxes over the union of the bounding boxes for the predicted and ground reference features:

    1. Numerator: the intersecting area of the predicted bounding box and the ground reference bounding box
    2. Denominator: the total area of the predicted bounding box and ground reference bounding box combined
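
The following minimal sketch (plain Python, not part of the tool) illustrates these two ideas: it computes the IoU of a hypothetical predicted box and ground reference box, checks the value against a threshold, and then derives precision, recall, and F1 score from hypothetical true positive, false positive, and false negative counts. The compute_iou helper, the box coordinates, and the counts are illustrative assumptions only.

# Minimal sketch: IoU for two axis-aligned bounding boxes, and the
# precision/recall/F1 metrics derived from TP/FP/FN counts.
# The helper name, boxes, and counts below are hypothetical.

def compute_iou(box_a, box_b):
    """Boxes are (xmin, ymin, xmax, ymax); returns intersection over union."""
    ix_min = max(box_a[0], box_b[0])
    iy_min = max(box_a[1], box_b[1])
    ix_max = min(box_a[2], box_b[2])
    iy_max = min(box_a[3], box_b[3])
    # No overlap yields an intersection area of zero
    inter = max(0.0, ix_max - ix_min) * max(0.0, iy_max - iy_min)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# A detection counts as a true positive when its IoU with a ground
# reference box meets or exceeds the threshold (min_iou).
predicted = (10, 10, 60, 60)   # hypothetical detected bounding box
reference = (15, 15, 65, 65)   # hypothetical ground reference bounding box
min_iou = 0.5
iou = compute_iou(predicted, reference)
print(f"IoU = {iou:.2f}; counted as true positive: {iou >= min_iou}")

# Given per-class counts, the table metrics follow directly.
tp, fp, fn = 80, 20, 10        # hypothetical per-class counts
precision = tp / (tp + fp)     # true positives over all predictions
recall = tp / (tp + fn)        # true positives over all ground truth objects
f1 = 2 * precision * recall / (precision + recall)
print(f"Precision = {precision:.2f}, Recall = {recall:.2f}, F1 = {f1:.2f}")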

Syntax

ComputeAccuracyForObjectDetection(detected_features, ground_truth_features, out_accuracy_table, {out_accuracy_report}, {detected_class_value_field}, {ground_truth_class_value_field}, {min_iou}, {mask_features})
Each parameter is listed below with an explanation and its data type.
detected_features

The polygon feature class containing the objects detected from the Detect Objects Using Deep Learning tool.

Feature Class; Feature Layer
ground_truth_features

The polygon feature class containing ground truth data.

Feature Class; Feature Layer
out_accuracy_table

The output accuracy table.

Table
out_accuracy_report
(Optional)

The name of the output accuracy report. The report is a PDF document containing accuracy metrics and charts.

File
detected_class_value_field
(Optional)

The field in the detected objects feature class that contains the class values or class names.

If a field name is not specified, a Classvalue or Value field will be used. If these fields do not exist, all records will be identified as belonging to one class.

The class values or class names must match those in the ground reference feature class exactly.

Field
ground_truth_class_value_field
(Optional)

The field in the ground truth feature class that contains the class values.

If a field name is not specified, a Classvalue or Value field will be used. If these fields do not exist, all records will be identified as belonging to one class.

The class values or class names must match those in the detected objects feature class exactly.

Field
min_iou
(Optional)

The IoU ratio to use as a threshold to evaluate the accuracy of the object-detection model. The numerator is the area of overlap between the predicted bounding box and the ground reference bounding box. The denominator is the area of union or the area encompassed by both bounding boxes. The IoU ranges from 0 to 1.

Double
mask_features
(Optional)

A polygon feature class that delineates the area or areas where accuracy will be computed. Only the features that intersect the mask will be assessed for accuracy.

Feature Class; Feature Layer

Code sample

ComputeAccuracyForObjectDetection example 1 (Python window)

This example generates an accuracy table and an accuracy report for a specified minimum IoU value.

# Import system modules
import arcpy
from arcpy.ia import *

# Check out the ArcGIS Image Analyst extension license
arcpy.CheckOutExtension("ImageAnalyst")

# Run the Compute Accuracy For Object Detection tool
ComputeAccuracyForObjectDetection(
	"C:/DeepLearning/Data.gdb/detectedFeatures",
	"C:/DeepLearning/Data.gdb/groundTruth",
	"C:/DeepLearning/Data.gdb/accuracyTable",
	"E:/DeepLearning/accuracyReport.pdf", "Class",
	"Class", 0.5, "C:/DeepLearning/Data.gdb/AOI")

ComputeAccuracyForObjectDetection example 2 (stand-alone script)

This example generates an accuracy table and an accuracy report for a specified minimum IoU value using a stand-alone script.

# Import system modules
import arcpy
from arcpy.ia import *

# Check out the ArcGIS Image Analyst extension license
arcpy.CheckOutExtension("ImageAnalyst")

# Set local variables 
detected_features = "C:/DeepLearning/Data.gdb/detectedFeatures" 
ground_truth_features = "C:/DeepLearning/Data.gdb/groundTruth" 
out_accuracy_table = "C:/DeepLearning/Data.gdb/accuracyTable" 
out_accuracy_report = "C:/DeepLearning/accuracyReport.pdf" 
detected_class_value_field = "Class" 
ground_truth_class_value_field = "Class" 
min_iou = 0.5 
mask_features = "C:/DeepLearning/Data.gdb/AOI" 

# Run the Compute Accuracy For Object Detection tool
ComputeAccuracyForObjectDetection(detected_features, 
	ground_truth_features, out_accuracy_table, 
	out_accuracy_report, detected_class_value_field, 
	ground_truth_class_value_field, min_iou, mask_features)
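
Once the tool has run, the per-class metrics can be read back from the output accuracy table, for example with a search cursor. The sketch below is illustrative only: it assumes the field names listed in the usage notes (Precision, Recall, F1_Score, AP, True_Positive, False_Positive, False_Negative); confirm the actual field names on your output table (for example with arcpy.ListFields) before relying on them.

# Read the per-class accuracy metrics back from the output table.
import arcpy

# Path to the table produced by ComputeAccuracyForObjectDetection
out_accuracy_table = "C:/DeepLearning/Data.gdb/accuracyTable"

# Field names assumed from the usage notes; verify on the actual output
fields = ["Precision", "Recall", "F1_Score", "AP",
          "True_Positive", "False_Positive", "False_Negative"]

with arcpy.da.SearchCursor(out_accuracy_table, fields) as cursor:
    for precision, recall, f1, ap, tp, fp, fn in cursor:
        print(f"Precision={precision:.3f} Recall={recall:.3f} F1={f1:.3f} "
              f"AP={ap:.3f} TP={tp} FP={fp} FN={fn}")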

Licensing information

  • Basic: Requires Image Analyst
  • Standard: Requires Image Analyst
  • Advanced: Requires Image Analyst

Related topics