Digital Image Classification for Land Use Land Cover (LULC) Assessment - Part 2
13. Land Use vs Land Cover
Land cover refers to the observable physical elements present on the Earth's surface, such as vegetation, water bodies, bare soil, and built structures. In contrast, land use pertains to how humans utilize these areas—for example, agriculture, urban development, recreation, or conservation. While land cover can typically be identified directly from satellite or aerial imagery based on spectral and spatial patterns, determining land use often requires additional contextual information. This may involve incorporating supporting GIS datasets, such as zoning maps, infrastructure records, or land management plans, to accurately interpret the intent and function of each area. Because land use is tied to human activities and decisions, it cannot always be reliably inferred from surface appearance alone, necessitating the integration of ancillary data for comprehensive analysis.
14. Classification Schemes
Classification schemes establish multi-level frameworks for systematically organizing and labeling land cover types. Examples include the USGS Land Use and Land Cover (LULC) system, the International Geosphere-Biosphere Programme (IGBP) classification, and the National Wetlands Inventory. These schemes ensure consistency and comparability of land cover categories across projects and regions by providing clear definitions and hierarchical groupings for each class.
Example of Land Use and Land Cover Classification Using Supervised Classification
Example: waterfowl management unit
Given: two cover types, cattail (CT) marsh and smartweed (SW) moist soil, in a single band
Find: Use a maximum likelihood classifier to classify the following hypothetical image:

Formula for the normal distribution (likelihood values):

$$ P(x) = \frac{1}{\sigma\sqrt{2\pi}} \, e^{-\frac{(x-\mu)^2}{2\sigma^2}} $$

where μ = mean, σ = standard deviation, and x = spectral value.
Solution:
1. Calculate spectral statistics (mean and standard deviation) from the cattail and smartweed training fields; the cattail likelihoods below correspond to μ = 30 and σ = 5

2. Compute the likelihood values for cattail using the normal distribution formula
| Spectral Value | Likelihood |
|---|---|
| 10 | 0.00003 |
| 15 | 0.0009 |
| 20 | 0.011 |
| 22 | 0.022 |
| 24 | 0.039 |
| 26 | 0.058 |
| 30 | 0.080 |
| 32 | 0.074 |
| 34 | 0.058 |
| 36 | 0.039 |
| 38 | 0.022 |
| 40 | 0.011 |
| 45 | 0.0009 |
| 50 | 0.00003 |
3. Compute the same values for smartweed
4. Plot the two sets of likelihood values to create likelihood curves

5. Assign each candidate pixel to the cover class with the highest likelihood
Example:
A pixel with value 20 would have a smartweed likelihood of 0.080 and a cattail likelihood of 0.011
Decision: the pixel would be classified as smartweed
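The decision rule above can be sketched in Python. The class statistics are assumptions read off the likelihood tables (cattail: μ = 30, σ = 5; smartweed: μ = 20, σ = 5), and the function names are illustrative, not part of any library.

```python
import math

# Gaussian (normal) probability density, matching the likelihood formula above.
def gaussian_likelihood(x, mu, sigma):
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

# Assumed class statistics consistent with the example's likelihood values:
# cattail peaks at 30, smartweed at 20; sigma = 5 for both.
classes = {"cattail": (30, 5), "smartweed": (20, 5)}

def classify(pixel_value):
    # Assign the pixel to whichever class gives the highest likelihood.
    return max(classes, key=lambda c: gaussian_likelihood(pixel_value, *classes[c]))

print(classify(20))  # a pixel value of 20 sits on the smartweed peak -> "smartweed"
```

For a value of 20, the smartweed likelihood (0.080) exceeds the cattail likelihood (0.011), so the pixel is labeled smartweed, matching the decision above.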

15. Accuracy Assessment
Accuracy assessment involves systematically comparing the classified map to trusted reference data to determine how well the classification process has performed. This evaluation uses several key metrics: overall accuracy, which measures the percentage of correctly classified pixels across the entire map; producer’s accuracy, which indicates the likelihood that a reference site is correctly mapped (and highlights omission errors, where features are missed); and user’s accuracy, which reflects the probability that a pixel labeled as a particular class actually represents that class on the ground (addressing commission errors, where features are incorrectly included in a class). By analyzing these accuracy measures, one can better understand both the strengths and limitations of the classification results and identify areas needing improvement.
15.1 Error (Confusion) Matrix
An error matrix, also known as a confusion matrix, systematically matches each pixel’s assigned classification with its actual ground truth category. By doing so, it reveals not only the frequency and types of misclassification errors but also highlights which land cover classes are most commonly mistaken for one another. This comprehensive comparison provides a statistical basis for evaluating the reliability of the entire map, pinpointing specific weaknesses and helping guide improvements in future classification efforts.
15.1.1 Key Statistical Terms
Before we dive into the sample confusion matrix, we need to understand the main statistical terms involved in the process:
Overall Accuracy: Proportion of all correctly classified samples out of the total samples. Overall Accuracy tells you how well your classifier performed across all classes.
Producer’s Accuracy (Recall): For a given class, the proportion of actual samples correctly classified. Reflects omission errors. Producer’s Accuracy shows how well each class was detected (sensitivity).
User’s Accuracy (Precision): For a given class, the proportion of predicted samples that are actually correct. Reflects commission errors. User’s Accuracy shows how reliable each class label is (precision).
Omission Error: Occurs when an item that truly belongs to a class is left out by the classifier. High omission error indicates many missed true instances (false negatives). If the omission error for Buildings is 0.08 (8%), it means 8% of all true buildings were missed.
Commission Error: Occurs when an item is incorrectly included in a class. High commission error indicates many false positives. If the commission error for Buildings is 0.11 (11%), it means 11% of points labeled as buildings are not buildings.
15.2 Step-by-Step Example: Classification Accuracy Calculation
15.2.1 Confusion Matrix Setup
Suppose you classified an image into three classes: Buildings, Water, and Vegetation. After comparing your predictions to ground truth, you get the following confusion matrix:
| Actual \ Predicted | Buildings | Water | Vegetation |
|---|---|---|---|
| Buildings | 92 | 3 | 5 |
| Water | 4 | 88 | 8 |
| Vegetation | 7 | 6 | 87 |
- Rows: Actual class (ground truth)
- Columns: Predicted class (by classifier)
- Diagonal cells: Correctly classified samples
- Off-diagonal cells: Misclassifications
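The matrix setup can be expressed as a minimal Python sketch using plain nested lists (no external libraries); the vegetation row uses the cell values 7, 6, 87, consistent with the totals computed in this example.

```python
# Confusion matrix from the example: rows = actual class, columns = predicted class.
# Class order: Buildings, Water, Vegetation.
matrix = [
    [92, 3, 5],   # actual Buildings
    [4, 88, 8],   # actual Water
    [7, 6, 87],   # actual Vegetation
]

correct = sum(matrix[i][i] for i in range(3))  # diagonal: 92 + 88 + 87 = 267
total = sum(sum(row) for row in matrix)        # all cells: 300
overall_accuracy = correct / total
print(overall_accuracy)  # 0.89
```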
15.2.2 Calculate Overall Accuracy (OA)
Formula:

$$ \text{OA} = \frac{\text{correct predictions}}{\text{total samples}} $$

- Correct predictions = sum of diagonal cells = 92 (Buildings) + 88 (Water) + 87 (Vegetation) = 267
- Total samples = sum of all cells = 92 + 3 + 5 + 4 + 88 + 8 + 7 + 6 + 87 = 300

Calculation:

$$ \text{OA} = \frac{267}{300} = 0.89 \ (89\%) $$
15.2.3 Calculate Producer’s Accuracy (Recall) for Each Class
Formula:

$$ \text{PA} = \frac{\text{correctly classified samples of the class}}{\text{row total (actual samples of the class)}} $$

- Buildings: 92 / (92 + 3 + 5) = 92 / 100 = 0.92
- Water: 88 / (4 + 88 + 8) = 88 / 100 = 0.88
- Vegetation: 87 / (7 + 6 + 87) = 87 / 100 = 0.87
15.2.4 Calculate User’s Accuracy (Precision) for Each Class
Formula:

$$ \text{UA} = \frac{\text{correctly classified samples of the class}}{\text{column total (samples predicted as the class)}} $$

- Buildings: 92 / (92 + 4 + 7) = 92 / 103 ≈ 0.893
- Water: 88 / (3 + 88 + 6) = 88 / 97 ≈ 0.907
- Vegetation: 87 / (5 + 8 + 87) = 87 / 100 = 0.87
15.2.5 Calculate Omission and Commission Errors
- Omission Error (for Buildings): 1 - Producer's Accuracy = 1 - 0.92 = 0.08
(8% of true buildings were missed)
- Commission Error (for Buildings): 1 - User's Accuracy = 1 - 0.893 = 0.107
(10.7% of points labeled as buildings are not buildings)
15.2.6 Summary Table of Metrics
| Class | Producer’s Accuracy | User’s Accuracy | Omission Error | Commission Error |
|---|---|---|---|---|
| Buildings | 0.92 | 0.893 | 0.08 | 0.107 |
| Water | 0.88 | 0.907 | 0.12 | 0.093 |
| Vegetation | 0.87 | 0.87 | 0.13 | 0.13 |
Overall Accuracy = 0.89 or 89%
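The per-class metrics in the summary table can be reproduced with a short Python sketch over the same confusion matrix (rows = actual, columns = predicted); the variable names are illustrative.

```python
# Confusion matrix from the example: rows = actual class, columns = predicted class.
matrix = [
    [92, 3, 5],   # Buildings
    [4, 88, 8],   # Water
    [7, 6, 87],   # Vegetation
]
names = ["Buildings", "Water", "Vegetation"]

for i, name in enumerate(names):
    row_total = sum(matrix[i])                       # actual samples of this class
    col_total = sum(matrix[r][i] for r in range(3))  # samples predicted as this class
    producers = matrix[i][i] / row_total             # producer's accuracy = 1 - omission
    users = matrix[i][i] / col_total                 # user's accuracy = 1 - commission
    print(f"{name}: PA={producers:.3f} UA={users:.3f} "
          f"omission={1 - producers:.3f} commission={1 - users:.3f}")
```

Running this reproduces the tabulated values, e.g. Buildings with a producer's accuracy of 0.92 and a user's accuracy of about 0.893.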
15.2.7 Interpretation of Results
- Overall Accuracy tells you how well your classifier performed across all classes.
- Producer’s Accuracy shows how well each class was detected (sensitivity).
- User’s Accuracy shows how reliable each class label is (precision).
- Omission/Commission Errors help identify where your classifier is missing or mislabeling classes.
For the results of our example, buildings are classified with high recall and precision, indicating few missed buildings and few false positives. Water has the highest precision, meaning most predicted water points are correct. Vegetation is slightly lower but still strong in both metrics.
15.3 Example 2 on Accuracy Calculation
You assessed classification accuracy and tabulated your values in the following matrix:

From that error matrix, one can summarize the classification accuracy as:

In interpreting the above table, while the producer may claim that 94% of the time an area that was lawn on the ground was identified as such on the map, the user finds that only 88% of the time will an area the map labels as lawn actually be lawn on the ground.
To Read
To Do
- Submit materials for Digital Image Classification