Available with Spatial Analyst license.
Available with Image Analyst license.
Assess the accuracy of your classification
Accuracy Assessment uses a reference dataset to determine the accuracy of your classified result. The class values in your reference dataset need to match the schema of your classified output. Reference data can be in any of the following formats:
- A raster dataset that is a classified image
- A polygon feature class or a shapefile. The format of the feature class attribute table needs to match that of the training samples. To ensure this, you can create the reference dataset with the Training Samples Manager, using it to read and write out the dataset.
- A point feature class or a shapefile. The format needs to match the output of the Create Accuracy Assessment Points tool. If you are using an existing file and want to convert it to the appropriate format, use the Create Accuracy Assessment Points geoprocessing tool, as sketched below.
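The following is a minimal scripting sketch of that conversion step, assuming the arcpy site package with a Spatial Analyst (or Image Analyst) license; the dataset paths are hypothetical:

```python
import arcpy

# Assumes a Spatial Analyst license; the tool is also available
# as arcpy.ia.CreateAccuracyAssessmentPoints with Image Analyst.
arcpy.CheckOutExtension("Spatial")

# Hypothetical inputs: a reference dataset (ground truth polygons
# or a classified raster) and an output point feature class.
reference_data = r"C:\data\ground_truth_polygons.shp"
out_points = r"C:\data\assessment_points.shp"

# GROUND_TRUTH tells the tool to treat the input as reference data
# and write its class values to the ground truth field of the points.
arcpy.sa.CreateAccuracyAssessmentPoints(
    reference_data, out_points, "GROUND_TRUTH")
```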
Number of Random Points
The total number of random points that will be generated. Depending on the sampling strategy and the number of classes, the actual number of points may exceed, but will never fall below, this number. The default is 500 randomly generated points.
Sampling Strategy
Specify one of the following sampling schemes; a scripting sketch follows the list:
- Stratified Random—Create points that are randomly distributed within each class, where each class has a number of points proportional to its relative area. This is the default.
- Equalized Stratified Random—Create points that are randomly distributed within each class, where each class has the same number of points.
- Random—Create points that are randomly distributed throughout the image.
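As a minimal sketch of how these two parameters can be supplied from Python (hypothetical paths; the sampling keywords correspond to the options above, and STRATIFIED_RANDOM is the default):

```python
import arcpy

arcpy.CheckOutExtension("Spatial")  # or use arcpy.ia with Image Analyst

classified = r"C:\data\classified.tif"     # hypothetical path
points = r"C:\data\assessment_points.shp"  # hypothetical path

# Create 500 points (the default count), distributed so that every
# class receives the same number of points rather than a number
# proportional to its area.
arcpy.sa.CreateAccuracyAssessmentPoints(
    classified,
    points,
    "CLASSIFIED",                   # read class values from the classified result
    500,                            # Number of Random Points
    "EQUALIZED_STRATIFIED_RANDOM")  # Sampling Strategy
```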
Understand your results
Once you run the tool, you will see a graphical representation of your confusion matrix. Hover over a cell to see the count, user accuracy, producer accuracy, and F score. The kappa score is also displayed at the bottom of the pane, and the output table is added to the Contents pane.
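If you are scripting the workflow, the confusion matrix comes from the Compute Confusion Matrix geoprocessing tool. A minimal sketch, assuming assessment points that already carry both the classified and ground truth values (paths are hypothetical):

```python
import arcpy

arcpy.CheckOutExtension("Spatial")

# Points holding both the classified value and the ground truth value
# for each sampled location (hypothetical path).
assessment_points = r"C:\data\assessment_points.shp"
confusion_matrix = r"C:\data\confusion_matrix.dbf"

# Cross-tabulates classified values against ground truth values and
# writes the confusion matrix, including user's accuracy, producer's
# accuracy, and the kappa statistic.
arcpy.sa.ComputeConfusionMatrix(assessment_points, confusion_matrix)
```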
Analyze the diagonal
Accuracy is represented from 0 to 1, with 1 being 100 percent accuracy. The producer's and user's accuracy for all of the classes are indicated along the diagonal axis. The color along the diagonal ranges from light to dark blue, with darker blue indicating higher accuracy. Hover the cursor over each cell to see the accuracy values and the F score for that class.
Unlike the diagonal, the colored cells off the diagonal indicate confusion: the number of points assigned to one class by the classification but to another class by the reference data. Hover the cursor over these cells to see the confusion matrix results for each pairing of classes.
View the output confusion matrix
If you want to examine the details of the error report, you can load the report into the Contents pane and open it. It is a .dbf file located in your project, or in the output folder you specified. The confusion matrix table lists the user's accuracy (U_Accuracy column) and producer's accuracy (P_Accuracy column) for each class, as well as an overall kappa statistic index of agreement. These accuracy rates range from 0 to 1, where 1 represents 100 percent accuracy. Below is an example of a confusion matrix.
| ClassValue | c_1 | c_2 | c_3 | Total | U_Accuracy | Kappa |
|---|---|---|---|---|---|---|
| c_1 | 49 | 4 | 4 | 57 | 0.8596 | 0 |
| c_2 | 2 | 40 | 2 | 44 | 0.9091 | 0 |
| c_3 | 3 | 3 | 59 | 65 | 0.9077 | 0 |
| Total | 54 | 47 | 65 | 166 | 0 | 0 |
| P_Accuracy | 0.9074 | 0.8511 | 0.9077 | 0 | 0.8916 | 0 |
| Kappa | 0 | 0 | 0 | 0 | 0 | 0.8357 |
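To read the per-class accuracies out of this table programmatically, a sketch like the following can be used; the field names are assumed to match the column headers in the example above:

```python
import arcpy

confusion_matrix = r"C:\data\confusion_matrix.dbf"  # hypothetical path

# The ClassValue and U_Accuracy field names are assumed from the
# example table; producer's accuracy is stored in the P_Accuracy row.
with arcpy.da.SearchCursor(
        confusion_matrix, ["ClassValue", "U_Accuracy"]) as cursor:
    for class_value, u_accuracy in cursor:
        # Skip the Total, P_Accuracy, and Kappa summary rows.
        if class_value not in ("Total", "P_Accuracy", "Kappa"):
            print(f"{class_value}: user's accuracy = {u_accuracy:.4f}")
```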
The user's accuracy column shows false positives, or errors of commission, where pixels are incorrectly classified as a known class when they should have been classified as something else. Errors of commission are also referred to as type 1 errors. The data to compute this error rate is read from the rows of the table. The user's accuracy is calculated by dividing the total number of classified points that agree with the reference data by the total number of points classified as that class (the row total). The Total row shows the number of points that should have been identified as a given class, according to the reference data. An example would be where the classified image identifies a pixel as asphalt, but the reference data identifies it as forest: the asphalt class contains extra pixels that it should not have, according to the reference data. A worked example of the arithmetic for both rates follows the next paragraph.
The producer's accuracy column shows false negatives, or errors of omission: pixels that belong to a class in the reference data but were classified as something else. Errors of omission are also referred to as type 2 errors. The producer's accuracy indicates how well the reference pixels of each class were classified. The data to compute this error rate is read from the columns of the table. The producer's accuracy is calculated by dividing the total number of classified points that agree with the reference data by the total number of reference points for that class (the column total). The Total column shows the number of points that were identified as a given class, according to the classified map. An example would be where the reference data identifies a pixel as asphalt, but the classified image identifies it as forest: the asphalt class is missing pixels that it should have, according to the reference data.
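Using the example table above, both rates reduce to simple divisions; the following lines spell out the arithmetic for class c_1:

```python
# User's accuracy for c_1: correctly classified c_1 points divided by
# all points classified as c_1 (the c_1 row total).
user_accuracy_c1 = 49 / 57      # ≈ 0.8596

# Producer's accuracy for c_1: correctly classified c_1 points divided
# by all reference points for c_1 (the c_1 column total).
producer_accuracy_c1 = 49 / 54  # ≈ 0.9074
```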
The kappa statistic of agreement gives an overall assessment of the accuracy of the classification, accounting for the agreement that would be expected by chance.
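For reference, here is the worked computation for the example table, using the standard Cohen's kappa formula with the row and column totals shown above:

```python
total = 166

# Observed agreement: the proportion of points on the diagonal.
p_observed = (49 + 40 + 59) / total                   # ≈ 0.8916

# Chance agreement: for each class, the product of its row total and
# column total, summed and normalized by the squared grand total.
p_expected = (57*54 + 44*47 + 65*65) / total**2       # ≈ 0.3401

kappa = (p_observed - p_expected) / (1 - p_expected)  # ≈ 0.8357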