
Accuracy Assessment

Assess the accuracy of your classification

Accuracy Assessment uses a reference dataset to determine the accuracy of your classified result. The class values in your reference dataset need to match your classification schema. Reference data can be in several different formats:

  • A raster dataset that is a classified image
  • A polygon feature class or a shapefile. The format of the feature class attribute table needs to match the training samples. To ensure this, you can open the dataset in the Training Samples Manager and save it back out as the reference dataset.
  • A point feature class or a shapefile. The format needs to match the output of the Create Accuracy Assessment Points tool. If you are using an existing file and want to convert it to the appropriate format, use the Create Accuracy Assessment Points geoprocessing tool.
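If you are scripting the workflow, a quick field check can confirm that an existing point feature class already follows the Create Accuracy Assessment Points schema before you convert anything. The sketch below uses arcpy; the field names Classified and GrndTruth are an assumption based on what the tool typically writes, so verify them against your ArcGIS Pro version.

```python
import arcpy

# Hypothetical path to an existing ground truth point feature class.
points = r"C:\data\assessment.gdb\ground_truth_points"

# Field names the Create Accuracy Assessment Points tool is expected to
# write (assumed; confirm in your ArcGIS Pro version).
expected = {"Classified", "GrndTruth"}

actual = {field.name for field in arcpy.ListFields(points)}
missing = expected - actual

if missing:
    print(f"Missing fields {sorted(missing)}; run the Create Accuracy "
          "Assessment Points tool to convert the dataset.")
else:
    print("Schema matches; the dataset can be used as reference data.")
```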

Number of Random Points

The total number of random points that will be generated. The actual number may exceed, but never fall below, this number, depending on the sampling strategy and the number of classes. The default number of randomly generated points is 500.

Sampling Strategy

Specify a sampling scheme to use:

  • Stratified Random—Create points that are randomly distributed within each class, where each class has a number of points proportional to its relative area. This is the default.
  • Equalized Stratified Random—Create points that are randomly distributed within each class, where each class has the same number of points.
  • Random—Create points that are randomly distributed throughout the image.
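Both the number of points and the sampling strategy map directly to parameters of the Create Accuracy Assessment Points geoprocessing tool, so this step can be scripted. A minimal sketch, assuming a Spatial Analyst license and hypothetical dataset paths:

```python
import arcpy
from arcpy.sa import CreateAccuracyAssessmentPoints

arcpy.CheckOutExtension("Spatial")

# Hypothetical inputs: a classified raster and an output feature class.
classified = r"C:\data\assessment.gdb\classified_landcover"
out_points = r"C:\data\assessment.gdb\accuracy_points"

# Generate 500 stratified random points (the defaults described above):
# each class receives points in proportion to its relative area.
CreateAccuracyAssessmentPoints(
    in_class_data=classified,
    out_points=out_points,
    target_field="CLASSIFIED",
    num_random_points=500,
    sampling="STRATIFIED_RANDOM",
)
```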

Understand your results

Once you run the tool, you will see a graphical representation of your confusion matrix. Hover over a cell to see the Count, User Accuracy, Producer Accuracy, and F-Score. The Kappa score is also displayed at the bottom of the pane, and the output table is added to the Contents pane.
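The matrix shown in the pane corresponds to the table produced by the Compute Confusion Matrix geoprocessing tool, which can also be run directly once the assessment points contain both classified and ground truth values. A sketch with hypothetical paths:

```python
import arcpy
from arcpy.sa import ComputeConfusionMatrix

arcpy.CheckOutExtension("Spatial")

# Hypothetical inputs: assessment points with both classified and ground
# truth values filled in, and an output table for the matrix.
points = r"C:\data\assessment.gdb\accuracy_points"
out_table = r"C:\data\assessment.gdb\confusion_matrix"

# The output table lists counts per class pair along with user's
# accuracy, producer's accuracy, and the kappa statistic.
ComputeConfusionMatrix(points, out_table)
```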

Analyze the diagonal

The cells along the diagonal show where the classified value agrees with the ground truth value, so they represent correctly classified points. Accuracy is represented from 0 to 1, with 1 being 100 percent accuracy. The colors range from light to dark blue, with darker blue meaning higher accuracy.

Unlike the diagonal, the cells that are off the diagonal show error in the form of commission and omission. Errors of commission are false positives, where pixels are incorrectly assigned to a known class when they should have been classified as something else. An example would be where the classified image says a pixel is impervious but the ground truth says it is forest. The impervious class has extra pixels that it should not have according to the ground truth data. Errors of omission are false negatives, where pixels of a known class are classified as something other than that class. An example would be where a pixel is actually impervious but the classified image says it is forest. In this case, the impervious class is missing pixels according to the ground truth data. Note that a single misclassified pixel is a commission error for the class it was assigned to and an omission error for the class it actually belongs to. Commission error is the complement of user's accuracy and is also known as type 1 error; omission error is the complement of producer's accuracy and is also known as type 2 error.
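To make the two error types concrete, the illustrative Python below computes user's and producer's accuracy for one class from a toy confusion matrix; the class names and counts are invented for the example.

```python
# Rows = classified result, columns = ground truth.
# Illustrative classes: 0 = impervious, 1 = forest, 2 = water.
matrix = [
    [50,  4,  1],  # pixels classified as impervious
    [ 6, 40,  2],  # pixels classified as forest
    [ 1,  3, 45],  # pixels classified as water
]

k = 0  # assess the impervious class
row_total = sum(matrix[k])                 # everything labeled impervious
col_total = sum(row[k] for row in matrix)  # everything truly impervious

users_accuracy = matrix[k][k] / row_total      # 1 - commission error
producers_accuracy = matrix[k][k] / col_total  # 1 - omission error

print(f"User's accuracy: {users_accuracy:.2f}")          # 50/55 = 0.91
print(f"Producer's accuracy: {producers_accuracy:.2f}")  # 50/57 = 0.88
```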

Accuracy Assessment result
