Available with a Spatial Analyst license.
Available with an Image Analyst license.
Assess the accuracy of your classification
Accuracy Assessment uses a reference dataset to determine the accuracy of your classified result. The class values of your reference dataset need to match those of your classification schema. Reference data can be in several different formats:
- A raster dataset that is a classified image
- A polygon feature class or a shapefile. The format of the feature class attribute table needs to match the training samples. To ensure this, you can create the reference dataset using the Training Samples Manager to read and write out the dataset.
- A point feature class or a shapefile. The format needs to match the output of the Create Accuracy Assessment Points tool. If you are using an existing file and want to convert it to the appropriate format, use the Create Accuracy Assessment Points geoprocessing tool.
Number of Random Points
The total number of random points that will be generated. The actual number may exceed but never fall below this number, depending on sampling strategy and number of classes. The default number of randomly generated points is 500.
Specify a sampling scheme to use:
- Stratified Random—Create points that are randomly distributed within each class, where each class has a number of points proportional to its relative area. This is the default.
- Equalized Stratified Random—Create points that are randomly distributed within each class, where each class has the same number of points.
- Random—Create points that are randomly distributed throughout the image.
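To illustrate how the sampling schemes differ, here is a minimal Python sketch of per-class point allocation; the `allocate_points` helper and the `areas` dictionary are hypothetical, not the tool's actual implementation. Rounding counts up, plus a one-point minimum per class, is also why the generated total can exceed the requested number but never fall below it:

```python
import math

def allocate_points(class_pixel_counts, total_points, scheme="stratified"):
    """Toy allocation of assessment points per class (illustrative only).

    class_pixel_counts: dict of class name -> pixel count in the classified image.
    """
    if scheme == "equalized":
        # Equalized Stratified Random: every class gets the same count.
        per_class = math.ceil(total_points / len(class_pixel_counts))
        return {c: per_class for c in class_pixel_counts}
    # Stratified Random: counts proportional to each class's relative area,
    # rounded up, with at least one point per class.
    total_pixels = sum(class_pixel_counts.values())
    return {c: max(1, math.ceil(total_points * n / total_pixels))
            for c, n in class_pixel_counts.items()}

areas = {"water": 60000, "forest": 30000, "urban": 10000}
print(allocate_points(areas, 500))               # proportional: 300/150/50
print(allocate_points(areas, 500, "equalized"))  # 167 points per class
```

Under the Random scheme, points are simply drawn uniformly over the image, so each class's expected share of points already matches its relative area.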
Understand your results
Once you run the tool, you will see a graphical representation of your confusion matrix. Hover over a cell to see the Count, User Accuracy, Producer Accuracy, and FScore. The Kappa score is also displayed at the bottom of the pane. The output table will be added to the Contents pane.
Analyze the diagonal
Accuracy is represented from 0 to 1, with 1 being 100 percent accuracy. The colors range from light to dark blue, with darker shades indicating higher accuracy.
Unlike the diagonal, the cells off the diagonal show errors of commission and omission. Errors of commission are false positives, where pixels are incorrectly classified as a known class when they should have been classified as something else. An example would be where the classified image says a pixel is impervious but the ground truth says it is pervious. The impervious class has extra pixels that it should not have according to the ground truth data. Errors of omission are false negatives, where pixels of a known class are classified as something other than that class. An example would be where the classified image says a pixel is forest, but the ground truth says it is impervious. In this case, the impervious class is missing pixels according to the ground truth data. Errors of commission are measured by user's accuracy and are also known as type 1 errors. Errors of omission are measured by producer's accuracy and are also known as type 2 errors.
View the output confusion matrix
If you want to examine the details of the error report, you can load the report into the Contents pane and open it. It is a .dbf file located in your project, or in the output folder you specified. The confusion matrix table lists the user's accuracy (U_Accuracy column) and producer's accuracy (P_Accuracy column) for each class, as well as an overall kappa statistic index of agreement. These accuracy rates range from 0 to 1, where 1 represents 100 percent accuracy. Below is an example of a confusion matrix.
The user's accuracy column shows false positives, or errors of commission, where pixels are incorrectly classified as a known class when they should have been classified as something different. An example would be where the classified image identifies a pixel as asphalt, but the reference identifies it as forest. The asphalt class contains extra pixels that it should not have, according to the reference data. User's accuracy is also referred to as errors of commission, or type 1 errors. The data to compute this error rate is read from the rows of the table. The Total row shows the number of points that should have been identified as a given class, according to the reference data.
The producer's accuracy column shows false negatives, or errors of omission, where pixels of a known class are classified as something other than that class. Producer's accuracy is also referred to as errors of omission, or type 2 errors. The data to compute this error rate is read down the columns of the table. The Total column shows the number of points that were identified as a given class, according to the classified map.
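As a sketch of how these two rates fall out of the table, assuming rows hold the classified map and columns the reference data (the `accuracy_per_class` helper and the sample matrix are illustrative, not the tool's output format):

```python
def accuracy_per_class(matrix, labels):
    """User's and producer's accuracy from a confusion matrix.

    matrix[i][j] = number of points classified as labels[i] whose
    reference class is labels[j] (rows = classified, columns = reference).
    """
    n = len(labels)
    users, producers = {}, {}
    for i, lab in enumerate(labels):
        row_total = sum(matrix[i])                       # classified as lab
        col_total = sum(matrix[k][i] for k in range(n))  # reference says lab
        users[lab] = matrix[i][i] / row_total if row_total else 0.0
        producers[lab] = matrix[i][i] / col_total if col_total else 0.0
    return users, producers

m = [[50, 5],    # classified impervious: 50 correct, 5 actually pervious
     [10, 35]]   # classified pervious: 10 actually impervious, 35 correct
u, p = accuracy_per_class(m, ["impervious", "pervious"])
# u["impervious"] = 50/55 (commission); p["impervious"] = 50/60 (omission)
```

Note how the same diagonal cell is divided by its row total for user's accuracy and by its column total for producer's accuracy.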
The kappa statistic of agreement gives an overall assessment of the accuracy of the classification, accounting for the agreement that would be expected by chance.
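A minimal sketch of the computation, assuming a square confusion matrix with rows as the classified map and columns as the reference data (the `kappa` helper is illustrative): kappa compares the observed agreement on the diagonal with the agreement expected by chance from the row and column totals.

```python
def kappa(matrix):
    """Cohen's kappa for a square confusion matrix; 1 is perfect
    agreement, 0 is no better than chance."""
    n = len(matrix)
    total = sum(sum(row) for row in matrix)
    observed = sum(matrix[i][i] for i in range(n)) / total
    # Agreement expected by chance, from the row and column marginals.
    expected = sum(sum(matrix[i]) * sum(matrix[k][i] for k in range(n))
                   for i in range(n)) / total ** 2
    return (observed - expected) / (1 - expected)

print(kappa([[50, 5], [10, 35]]))  # ≈ 0.694
```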