Lesson 8: Geospatial Data Quality, Accuracy, and Mapping Standards
Lesson 8 Introduction
Introduction
Evaluating the quality and accuracy of geospatial data is one of the most important topics among geospatial data users. Geospatial data are used for diverse applications, including engineering and infrastructure positioning applications. Knowing how accurate the measurements derived from geospatial data are can be a matter of life or death in some applications, such as when an excavation team relies on an inaccurately located gas pipeline. In this lesson, you will be introduced to various statistical concepts that are related to determining geospatial data accuracy. You will also learn about the latest map accuracy standards designed for digital geospatial data published by the American Society for Photogrammetry and Remote Sensing (ASPRS).
Learning Objectives
At the successful completion of this lesson, you should be able to:
- understand basic statistical terms used to express product accuracy.
- understand errors in geospatial data.
- understand different types of accuracy.
- differentiate between different errors in geospatial data.
- describe factors affecting geospatial product accuracy.
- practice accuracy computations.
- understand the ASPRS positional accuracy standards.
Lesson Readings
Google Drive (Open Access)
- ASPRS Positional Accuracy Standards for Digital Geospatial Data, Edition 2, version 2 (2024)
- ASPRS Highlight Article “Best Practices in Evaluating Geospatial Mapping Accuracy according to the New ASPRS Accuracy Standards”
- ASPRS Highlight Article “Overview of the ASPRS Positional Accuracy Standards for Digital Geospatial Data EDITION 2, VERSION 2 (2024)”
Lesson Activities
- Study lesson 8 materials on CANVAS/Drupal and the textbook chapters assigned to the lesson
- Complete quiz 8
- Submit your COA Application
- Complete your discussions for the assignment on "FAA Roadmap."
- Complete your discussions for the assignment on "Differences Between Rules and Regulations."
- Attend the weekly call on Thursday evening at 8:00 pm ET
- Practice computing product accuracy for each of the three data processing exercises.
Geospatial Data Accuracy and Quality and Mapping Standards
Metrics in Geospatial Production Process:
For any geospatial data product, collecting metrics about a dataset revolves around the following questions:
- How well does the map fit a national or a global coordinate system and datum?
- How well does the geometric and radiometric quality meet or depart from the client’s expectations or specifications?
- How well do these metrics fit a “standard” or what is considered standard within the geospatial industry?
Why are we concerned about accuracy?
Errors exist in any product we produce, no matter how accurate the instrument or the process we utilize, because no measuring instrument is perfect, not even laser instruments. Figure 1 illustrates the common instruments used in surveying and mapping practice, which we may be tempted to think of as perfect measurement devices.

Errors in Measurements
There are two types of errors that concern us the most in geospatial data generation: random error and systematic error. A third type, which we call blunders, is not considered an error, but we need to understand it and deal with it appropriately.
Random error (or accidental error) is the type of error that happens randomly in nature due to our, or the instrument's, inability to realize the true value. The true value in any measurement process is elusive to us and is beyond our metaphysical power; in a measuring process, we are only estimating the true value. Random error can be reduced by training, experience, and improved quality, but it cannot be eliminated.
Systematic error is error that has a repeated, constant value and follows a mathematical logic. It can be reduced through calibration.
Blunders: A blunder is not an error; it is a mistake, resulting from carelessness or negligence, that resembles error. Common causes of blunders in surveying and mapping are:
- Measurements taken incorrectly
- Values misread from the measuring device (i.e., the screen)
- Numbers transposed as they are recorded (696 vs. 969)
- Miscounting grid ticks
- Handwriting that is hard to read
- Values entered incorrectly into the computer
- Using the wrong datum and/or coordinate system
- Using the wrong units (meter versus US survey or international foot)
- Rounding numbers when recording the data
Facts on Error and Normal Distribution:
- Errors are unavoidable, but controllable;
- Any mapping process will have some variation of errors built in;
- No combination of machine and human can produce a product that is exactly the same each time;
- Biases should be removed prior to analysis;
- Small errors are more common than large errors;
- Errors are just as likely to be positive as negative;
- Large errors seldom occur and can only be so big. Blunders can be large.
Accuracy Defined
Accuracy: The closeness of results of observations, computations, or estimates of graphic map features to their true value or position on the ground.
Precision (repeatability): The closeness with which measurements agree with each other.
Facts about Accuracy:
- True value is the theoretically correct or exact value of a quantity. The true value is elusive to us, and it cannot be reached given our human limitations; true value is a matter related to metaphysics.
- Accuracy is part of the map metrics that need to be included in the metadata of any geospatial dataset.
To illustrate the concepts of accuracy and precision in a practical fashion, let us consider evaluating the results of the four shooting sessions of Figure 2 that the sharp dart shooter completed at different times. In session A, the shooter’s shots seem to be scattered around the bullseye. He/she managed to get the shots around the targeted spot, or the bullseye, but failed to land them close to each other, i.e. they are scattered apart. To evaluate such a session, we say the shooter was accurate as he/she stayed close to the bullseye, but not precise, as the shots were not close to each other. In session B, we would say the shooter managed to cluster all shots in one spot, so he/she was precise but far away from the bullseye, so he/she was not accurate. Accordingly, in session C, he/she was accurate and precise, while in session D the shooter was neither accurate nor precise. To illustrate the concept of biases in measurements, let us analyze sessions B and C. Assuming the two sessions were shot by the same shooter, it is obvious that the shooter performed perfect shots in both sessions but that his/her shots in session B were biased due to mechanical misalignment of the bow or the gun, if a gun was used. Such misalignment of the bow, the gun barrel, or the sight scope caused the shots to be systematically directed to the wrong position instead of the bullseye, causing a bias in the shots. Once proper calibration is made to these mechanical defects, the bias is then removed and all the shots will perfectly fall around the bullseye, like in session C.

To evaluate the shooter results using probability and density distribution terms, the results of session B are equivalent to the random distribution 3 of Figure 3, precise but not accurate, assuming the most probable value of the bullseye is represented by p on the x-axis. The results of session A, however, resemble the distribution 2 of Figure 3, accurate but not precise. For more information on the subject, please watch this NGS video.

The Ever-confusing Statistical Terms
To illustrate the different statistical terms we usually run into when we discuss data accuracy, let us consider the five error values (3 in., 2 in., 1 in., 5 in., and 4 in.) that were calculated on a population of data.
- Mean (average) = (3 + 2 + 1 + 5 + 4) / 5 = 3 in.
- Range = the distance between the largest error and the smallest error, i.e., Smallest = 1 in., Largest = 5 in., so Range = 5 − 1 = 4 in.
- Variance = measure of spread or dispersion around the mean. It is the mean square of the deviations from the mean: Variance = Σ(xi − mean)² / n = [(3 − 3)² + (2 − 3)² + (1 − 3)² + (5 − 3)² + (4 − 3)²] / 5 = 2 in.²
Here, in.² is a meaningless unit, and a better statistical term to use is the standard deviation.
- Standard deviation, also called one-sigma (σ), is the square root of the variance: σ = √2 ≈ 1.41 in.
- Root Mean Square Error (RMSE)
- RMSE is not standard deviation or sigma; they are different.
Root Mean Square Error (RMSE) is computed as follows:
RMSE = √[ Σ(Z − Zi)² / n ]
Where,
Z = measured value from the data
Zi = control value (field surveyed)
n = number of measurements
For the five error values above, RMSE = √[(9 + 4 + 1 + 25 + 16) / 5] = √11 ≈ 3.32 in.
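The worked example above can be reproduced with a short script (a sketch using only the five error values given in the text):

```python
import math

errors = [3, 2, 1, 5, 4]  # error values in inches
n = len(errors)

mean = sum(errors) / n                                 # 3.0 in.
error_range = max(errors) - min(errors)                # 5 - 1 = 4 in.
variance = sum((e - mean) ** 2 for e in errors) / n    # 2.0 in.^2
std_dev = math.sqrt(variance)                          # ~1.41 in. (one-sigma)

# RMSE uses the raw differences (measured - control), not deviations from the mean
rmse = math.sqrt(sum(e ** 2 for e in errors) / n)      # ~3.32 in.

print(mean, error_range, variance, round(std_dev, 2), round(rmse, 2))
```

Note that the standard deviation and the RMSE differ because the errors have a nonzero mean; the two quantities coincide only when the data are bias-free.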
Relationship Between Standard Deviation and Root Mean Square Error (RMSE)
Facts about RMSE:
- Includes random and systematic errors
- More useful to use as it reveals biases (systematic error)
- It tells us how accurate the data is
Facts about Standard Deviation:
- Includes only random error
- Reflects only how precise the data is
- It does not tell us how accurate the data is in the presence of biases. It only tells us how precise the data is.
Table 1 illustrates the difference between the standard deviation and the RMSE in revealing the presence of biases in measurements. The table represents a vertical accuracy evaluation for a point cloud derived from UAS imagery, performed by comparing it to a higher-accuracy elevation model derived from a mobile lidar mapping system. The UAS-derived elevation model needed to meet 5 cm (0.164 ft) accuracy. If we used the standard deviation alone, the data would meet the specifications with a value of 0.076 ft. However, looking at the high value of 0.246 ft (7.5 cm) for the mean, it is obvious this data set contains a bias, and the only way to catch it is by either evaluating the value of the mean or using the RMSE as the accuracy measure. The high value of the RMSE, 0.257 ft (7.83 cm), will flag the data as not meeting specifications. The far-right column contains the error values after removing the bias of 0.246 ft (7.5 cm) from the measurements. Once we remove the bias, the values for the RMSE and the standard deviation are equal, and they both meet the project accuracy specifications. Removing a bias from elevation data can be as simple as shifting the entire dataset up or down by the magnitude of the bias itself; such a practice is called a "z-bump."
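The behavior described above can be sketched numerically: for a population, RMSE² = mean² + σ², so subtracting the bias (the mean error) makes the RMSE collapse to the standard deviation. The error values below are hypothetical, chosen only so that the bias matches the 0.246 ft figure discussed in the text:

```python
import math

def rmse(values):
    """Root mean square of the raw errors (includes bias)."""
    return math.sqrt(sum(v ** 2 for v in values) / len(values))

def std_dev(values):
    """Population standard deviation (random error only)."""
    m = sum(values) / len(values)
    return math.sqrt(sum((v - m) ** 2 for v in values) / len(values))

# hypothetical elevation errors (ft) containing a constant bias
errors = [0.30, 0.22, 0.18, 0.28, 0.25]
bias = sum(errors) / len(errors)   # mean error = bias estimate (0.246 ft)

print(round(std_dev(errors), 3))   # small: the data look precise
print(round(rmse(errors), 3))      # large: the bias is revealed

# "z-bump": shift the data by the bias, after which RMSE equals the std. dev.
debiased = [e - bias for e in errors]
assert abs(rmse(debiased) - std_dev(errors)) < 1e-12
```

The assertion at the end is the point of the sketch: once the systematic component is removed, the accuracy measure (RMSE) and the precision measure (standard deviation) agree.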

Table 1 Vertical Accuracy Tabulation of Geospatial Product
Normal Distribution Curve
In randomly distributed repeated measurements, measurement values will vary around the mean, or average, with most values being closer to the average. Deviation from such behavior indicates the presence of bias(es), or perhaps blunders, in the measurements. Figure 4 shows a true random distribution of a set of measurements that do not contain biases. For the measurement distribution in Figure 4, we notice that 68.2% of the measured values fall within ±1 RMSE, or ±1 sigma, of the mean value, that is, 34.1% on each side of the mean. We also notice that 95% of the measurements fall within ±2 RMSE, or ±2 sigma, of the mean. Understanding this distribution is essential to understanding the map accuracy standards we are going to discuss in the following sections.
Common Error Estimation Terms
Table 2 lists the most common terms used to estimate errors in surveying and mapping. Probable error describes the confidence level that 50% of the error values fall within, while 95% error represents the confidence level that 95% of the measured error values fall within.
| Error | % Error | Constant wrt σ |
|---|---|---|
| Probable Error | 50 | 0.6745 σ |
| Standard Error | 68.27 | 1.000 σ |
| 90% Error | 90 | 1.6449 σ |
| 95% Error | 95 | 1.9599 σ |
| 3σ Error | 99.73 | 3.0000 σ |
The different confidence levels (50% to 99.73%, or 3 sigma) listed in Table 2 can be used to express the same accuracy level. For example, an accuracy expressed via RMSE and the same accuracy expressed at the 95% confidence level reflect the same underlying quality; they differ only in their statistical confidence assignments.
To clarify these distinctions, consider the following example: In Figure 5, colored balls symbolize errors identified during an accuracy assessment using independent checkpoints. Ball diameters indicate varying error magnitudes for each checkpoint, while the funnel's spout diameter corresponds to the maximum allowable error for each statistical metric, i.e., the 50%, 90%, 95%, and 99.73% confidence levels. For instance, Funnel D's larger spout accommodates the greatest error, representing the 99.73% confidence level.
If users unfamiliar with these statistical terms are presented with various accuracy figures, they would likely select the smallest value, in this case the 6.74 cm associated with the 50% confidence level, as it suggests tighter accuracy. Conversely, producers might prefer the larger value of 30 cm at the 99.73% confidence level, anticipating greater flexibility. However, both selections are based on a misunderstanding: both values reflect the same underlying accuracy, differentiated solely by the proportion of checkpoints required to meet the threshold. Specifically, for the 6.74 cm figure at the 50% confidence level, only half of the checkpoints must meet this criterion, whereas at 30 cm and the 99.73% confidence level, nearly all must comply.
This nuanced distinction often leads to confusion among end users, which prompted the American Society for Photogrammetry and Remote Sensing (ASPRS) to remove the 95% confidence level and rely exclusively on RMSE in the latest version of its accuracy standards, providing a clearer and more consistent metric for accuracy.
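The funnel example can be reproduced directly from the Table 2 constants. Assuming a data set with a one-sigma accuracy of 10 cm, the same data can be quoted anywhere from 6.74 cm (probable error) to 30 cm (3-sigma):

```python
# Multipliers from Table 2 relating each confidence level to sigma
# (valid for a bias-free, normally distributed error population)
CONFIDENCE_FACTORS = {
    "probable (50%)": 0.6745,
    "standard (68.27%)": 1.0000,
    "90%": 1.6449,
    "95%": 1.9599,
    "3-sigma (99.73%)": 3.0000,
}

sigma_cm = 10.0  # assumed one-sigma accuracy of the data set, in cm
for label, k in CONFIDENCE_FACTORS.items():
    print(f"{label}: {k * sigma_cm:.2f} cm")
```

All five printed values describe the same data set; only the stated proportion of errors falling under each threshold changes.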

Positional Errors and Accuracy
According to the ASPRS Positional Accuracy Standards for Digital Geospatial Data, the terms positional error and absolute and relative accuracy are defined as follows:
- Positional error – The difference between data set coordinate values and coordinate values from an independent source of higher accuracy for identical points.
- Positional accuracy – The accuracy of the position of features, including horizontal and vertical positions, with respect to a horizontal and vertical datum.
- Relative accuracy – A measure of variation in point-to-point accuracy in a data set; it characterizes the internal geometric quality of an elevation data set without regard to surveyed ground control.
The New ASPRS Positional Accuracy Standards for Digital Geospatial Data
In November 2014, the American Society for Photogrammetry and Remote Sensing (ASPRS) published Edition 1 of the first-ever mapping accuracy standards designed solely for today's digital geospatial data. Edition 2, version 1 was published on August 23, 2023, followed by version 2 on June 24, 2024, to correct some measures to suit today's technologies and processes and to add six addenda on best practices and guidelines. As of today, the final official version of the ASPRS accuracy standards is Edition 2, version 2 (2024).
Motivation Behind the New Standard:
- Legacy map accuracy standards, such as the ASPRS 1990 standard and the National Map Accuracy Standards (NMAS) of 1947, are outdated (over 30 years since ASPRS 1990 was written).
- Many of the data acquisition and mapping technologies that these standards were based on are no longer used.
- More recent advances in mapping technologies can now produce better quality and higher accuracy geospatial products and maps.
- Legacy map accuracy standards were designed to deal with plotted or drawn maps as the only medium to represent geospatial data.
- Within the past two decades (during the transition period between the hardcopy and softcopy mapping environments), most standard measures for relating GSD and map scale to the final mapping accuracy were inherited from photogrammetric practices using scanned film.
- New mapping processes and methodologies have become much more sophisticated with advances in technology and advances in our knowledge of mapping processes and mathematical modeling.
- Mapping accuracy can no longer be associated with camera geometry and flying altitude alone (focal length, xp, yp, B/H ratio, etc.).
- New map accuracy is influenced by many factors such as:
- the quality of camera calibration parameters;
- quality and size of the Charge-Coupled Device (CCD) elements used in the digital camera's CCD array;
- amount of imagery overlaps;
- quality of parallax determination or photo measurements;
- quality of the GPS signal;
- quality and density of ground controls;
- quality of the aerial triangulation solution;
- capability of the processing software to handle GPS drift and shift;
- capability of the processing software to handle camera self-calibration,
- the digital terrain model used for the production of orthoimagery.
These factors can vary widely from project to project, depending on the sensor used and the specific methodology. For these reasons, existing accuracy measures based on map scale, film scale, GSD, c-factor and scanning resolution no longer apply to current geospatial mapping practices.
- Elevation products from the new technologies and active sensors such as lidar, UAS, and IFSAR are not considered by the legacy mapping standards. New accuracy standards are needed to address elevation products derived from these technologies.
The New Standard Highlights
- Sensor agnostic, data driven: Positional Accuracy Thresholds which are independent of published GSD, map scale or contour interval
- It is All Metric!
- Unlimited Horizontal and vertical Accuracy Classes:
- Added additional Accuracy Measures
- Aerial triangulation accuracy,
- Ground controls accuracy,
- Orthoimagery seam lines accuracy,
- Lidar relative swath-to-swath accuracy,
- Recommended minimum Nominal Pulse Density (NPD)
- Horizontal accuracy of elevation data,
- Delineation of low confidence areas for elevation data
- Required number and spatial distribution of QA/QC check points based on project area
- Introduced a new accuracy type, the three-dimensional accuracy or 3D accuracy.
- Eliminated the use of 95% confidence level as an accuracy measure. RMSE is the only accuracy measure the new standards recognize and use.
- Factoring in the accuracy of the ground control and checkpoints survey when computing products accuracy.
- Added six addenda on best practices and guidelines for:
- General Best Practices and Guidelines
- Field Surveying of Ground Control and Checkpoints
- Mapping with Photogrammetry
- Mapping with Lidar
- Mapping with UAS
- Mapping with Oblique Imagery
Advantage of Specifying the New ASPRS Positional Accuracy Standards for Digital Geospatial Data for a Project
Users of the new standards do not have to specify accuracy details for the intermediate processes in product generation. The user needs to specify the final deliverable product accuracy and the new standards will set up all accuracy specifications for intermediate processes, such as ground survey, aerial triangulation, etc., involved in the production of the final product. Figure 6 illustrates such a concept.

Horizontal Accuracy Standards for Geospatial Data
Some of the highlights of the new ASPRS Horizontal Accuracy Standards for Geospatial Data are the following:
Unlimited horizontal accuracy classes:
The new standard was designed to fit any horizontal accuracy requirement, no matter what technology, current or future, is used. Table 3 represents the new ASPRS horizontal accuracy classes.

Table 3 The new ASPRS horizontal accuracy classes

| Horizontal Accuracy Class | Absolute Accuracy RMSEH (cm) | Orthoimagery Mosaic Seamline Mismatch (cm) |
|---|---|---|
| #-cm | ≤ # | ≤ 2 × # |

where RMSEH = √(RMSEx² + RMSEy²) is the radial (circular) RMSE, i.e., the two-dimensional RMSE of X and Y.
- Aerial triangulation results should be twice as accurate as the generated products:
  - For ortho and planimetric maps ONLY: RMSEH(AT) = ½ × RMSEH(Map) and RMSEV(AT) = RMSEH(Map)
  - For ortho, planimetric maps, and elevation maps: RMSEH(AT) = ½ × RMSEH(Map) and RMSEV(AT) = ½ × RMSEV(DEM)
- Control points for aerial triangulation should be twice as accurate as the generated product:
  - For ortho and planimetric maps ONLY: RMSEH(GCP) = ½ × RMSEH(Map) and RMSEV(GCP) = RMSEH(Map)
  - For ortho/planimetric maps and elevation maps: RMSEH(GCP) = ½ × RMSEH(Map) and RMSEV(GCP) = ½ × RMSEV(DEM)
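The halving relationships above can be wrapped in a small helper that, given a target product accuracy, returns the aerial triangulation and ground control requirements. This is a sketch; the function name and interface are illustrative, not part of the standard:

```python
def requirements_for_map(rmse_h_map_cm, rmse_v_dem_cm=None):
    """Derive AT and GCP accuracy requirements from the target product accuracy.

    For ortho/planimetric-only projects, pass only the horizontal accuracy;
    when elevation products are also delivered, pass the DEM vertical accuracy.
    """
    req = {
        "RMSEH(AT)": 0.5 * rmse_h_map_cm,
        "RMSEH(GCP)": 0.5 * rmse_h_map_cm,
    }
    if rmse_v_dem_cm is None:
        # ortho and planimetric maps only
        req["RMSEV(AT)"] = rmse_h_map_cm
        req["RMSEV(GCP)"] = rmse_h_map_cm
    else:
        # ortho/planimetric maps plus elevation products
        req["RMSEV(AT)"] = 0.5 * rmse_v_dem_cm
        req["RMSEV(GCP)"] = 0.5 * rmse_v_dem_cm
    return req

print(requirements_for_map(10.0))        # ortho-only project, 10-cm class
print(requirements_for_map(10.0, 10.0))  # project that also delivers a 10-cm DEM
```

This mirrors the idea of Figure 6: the user specifies only the final product accuracy, and the intermediate-process specifications follow from the standard.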
Table 4 lists common horizontal accuracy classes for geospatial mapping products.
| Horizontal Accuracy Class RMSEx and RMSEy (cm) | RMSEr (cm) | Orthoimage Mosaic Seamline Maximum Mismatch (cm) |
|---|---|---|
| 0.63 | 0.9 | 1.3 |
| 1.25 | 1.8 | 2.5 |
| 2.50 | 3.5 | 5.0 |
| 5.00 | 7.1 | 10.0 |
| 7.50 | 10.6 | 15.0 |
| 10.00 | 14.1 | 20.0 |
| 12.50 | 17.7 | 25.0 |
| 15.00 | 21.2 | 30.0 |
| 17.50 | 24.7 | 35.0 |
| 20.00 | 28.3 | 40.0 |
| 22.50 | 31.8 | 45.0 |
| 25.00 | 35.4 | 50.0 |
| 27.50 | 38.9 | 55.0 |
| 30.00 | 42.4 | 60.0 |
| 45.00 | 63.6 | 90.0 |
| 60.00 | 84.9 | 120.0 |
| 75.00 | 106.1 | 150.0 |
| 100.00 | 141.4 | 200.0 |
| 150.00 | 212.1 | 300.0 |
| 200.00 | 282.8 | 400.00 |
| 250.00 | 353.6 | 500.0 |
| 300.00 | 424.3 | 600.0 |
| 500.00 | 707.1 | 1000.0 |
| 1000.00 | 1414.2 | 2000.0 |
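Assuming RMSEx = RMSEy, as Table 4 does, the radial RMSE is √2 times the component value and the maximum seamline mismatch is twice the component value (the Table 3 rule). A short sketch that reproduces the table rows:

```python
import math

def horizontal_class_metrics(rmse_xy_cm):
    """Given a class value # (= RMSEx = RMSEy, in cm),
    return (RMSEr, maximum seamline mismatch), both in cm."""
    rmse_r = math.sqrt(2) * rmse_xy_cm   # RMSEr = sqrt(RMSEx^2 + RMSEy^2) with RMSEx = RMSEy
    seamline = 2 * rmse_xy_cm            # seamline mismatch <= 2 x # per Table 3
    return round(rmse_r, 1), round(seamline, 1)

print(horizontal_class_metrics(5.00))    # matches the 5-cm row of Table 4
print(horizontal_class_metrics(30.00))   # matches the 30-cm row
```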
Vertical Accuracy Standards for Geospatial Data
Some of the highlights of the new ASPRS Vertical Accuracy Standards for Geospatial Data are the following:
5. Unlimited vertical accuracy classes:
The new standard was designed to fit any vertical accuracy requirement, no matter what technology, current or future, is used. Table 5 represents the new ASPRS vertical accuracy classes.
| Vertical Accuracy Class | Absolute Accuracy: NVA RMSEv (cm) | Absolute Accuracy: VVA RMSEv (cm) | Data Internal Precision: Within-Swath Smooth Surface Max Diff (cm) | Data Internal Precision: Swath-to-Swath Non-Vegetated RMSDz (cm) | Data Internal Precision: Swath-to-Swath Non-Vegetated Max Diff (cm) |
|---|---|---|---|---|---|
| #-cm | ≤ # | As found | ≤ 0.60 × # | ≤ 0.80 × # | ≤ 1.60 × # |
- Non-vegetated Vertical Accuracy (NVA) for any part of the project that is not covered by vegetation.
- Vegetated Vertical Accuracy (VVA) for the part of the project that is partly or fully covered by vegetation.
6. The standards introduced relative accuracy for elevation data, besides the absolute accuracy.
Table 6 introduces a new accuracy term, relative accuracy, which mainly addresses lidar-derived elevation data. The table also provides vertical accuracy examples and other quality criteria for ten common vertical accuracy classes.
| Vertical Accuracy Class | Absolute Accuracy: NVA RMSEv (cm) | Absolute Accuracy: VVA RMSEv (cm) | Data Internal Precision: Within-Swath Smooth Surface Max Diff (cm) | Data Internal Precision: Swath-to-Swath Non-Vegetated RMSDz (cm) | Data Internal Precision: Swath-to-Swath Non-Vegetated Max Diff (cm) |
|---|---|---|---|---|---|
| 1-cm | ≤ 1.0 | As found | ≤ 0.6 | ≤ 0.8 | ≤ 1.6 |
| 2.5-cm | ≤ 2.5 | As found | ≤ 1.5 | ≤ 2.0 | ≤ 4.0 |
| 5-cm | ≤ 5.0 | As found | ≤ 3.0 | ≤ 4.0 | ≤ 8.0 |
| 10-cm | ≤ 10.0 | As found | ≤ 6.0 | ≤ 8.0 | ≤ 16.0 |
| 15-cm | ≤ 15.0 | As found | ≤ 9.0 | ≤ 12.0 | ≤ 24.0 |
| 20-cm | ≤ 20.0 | As found | ≤ 12.0 | ≤ 16.0 | ≤ 32.0 |
| 33.3-cm | ≤ 33.3 | As found | ≤ 20.0 | ≤ 26.7 | ≤ 53.3 |
| 66.7-cm | ≤ 66.7 | As found | ≤ 40.0 | ≤ 53.3 | ≤ 106.7 |
| 100-cm | ≤ 100.0 | As found | ≤ 60.0 | ≤ 80.0 | ≤ 160.0 |
| 333.3-cm | ≤ 333.3 | As found | ≤ 200.0 | ≤ 266.7 | ≤ 533.3 |
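Every row of Table 6 follows the generic Table 5 pattern: each internal-precision threshold is a fixed multiple (0.60, 0.80, 1.60) of the class value #. A sketch that regenerates any row (the dictionary keys are shortened labels, not the standard's wording):

```python
def vertical_class_row(class_cm):
    """Thresholds for a #-cm vertical accuracy class, per the Table 5 factors."""
    return {
        "NVA RMSEv (cm)": class_cm,               # absolute accuracy <= #
        "VVA RMSEv (cm)": "as found",
        "Within-swath max diff (cm)": round(0.60 * class_cm, 1),
        "Swath-to-swath RMSDz (cm)": round(0.80 * class_cm, 1),
        "Swath-to-swath max diff (cm)": round(1.60 * class_cm, 1),
    }

print(vertical_class_row(10.0))   # matches the 10-cm row of Table 6
```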
7. The standards introduced horizontal accuracy estimation for elevation data
- For Photogrammetric elevation data, the horizontal accuracy equates to the horizontal accuracy class that would apply to planimetric data or digital orthoimagery produced from the same source imagery, using the same aerial triangulation/INS solution.
- For Lidar elevation data: use the following formula:
Table 7 lists some horizontal accuracy values for lidar data based on the previous formula (the GNSS horizontal accuracy is assumed to be 0.10 m, and the IMU error is assumed to be 10.0 arc-seconds for roll and pitch and 15.0 arc-seconds for heading).
| Flying Height (m) | GNSS Error (cm) | IMU Roll/Pitch Error (arc-sec) | IMU Heading Error (arc-sec) | RMSEH (cm) |
|---|---|---|---|---|
| 500 | 10 | 10 | 15 | 10.7 |
| 1,000 | 10 | 10 | 15 | 12.9 |
| 1,500 | 10 | 10 | 15 | 15.8 |
| 2,000 | 10 | 10 | 15 | 19.2 |
| 2,500 | 10 | 10 | 15 | 22.8 |
| 3,000 | 10 | 10 | 15 | 26.5 |
| 3,500 | 10 | 10 | 15 | 30.4 |
| 4,000 | 10 | 10 | 15 | 34.3 |
| 4,500 | 10 | 10 | 15 | 38.2 |
| 5,000 | 10 | 10 | 15 | 42.0 |
8. The Standards Introduced a Formal Accuracy Testing Statement:
For the first time, the new standards provide users with formal data evaluation statements to be used by data users and data producers. The following statements are examples of such accuracy statements:
8.1 Accuracy Reporting by Data User or Consultant
This type of reporting should only be based on a set of independent checkpoints. The positional accuracy of digital orthoimagery, planimetric data, and elevation data products shall be reported in the metadata in one of the manners listed below. For projects with NVA and VVA requirements, two three-dimensional positional accuracy values should be reported based on the use of NVA and VVA, respectively.
8.1.1 Accuracy Testing Meets ASPRS Standard Requirements
If testing is performed using a minimum of thirty (30) checkpoints, accuracy assessment results should be reported in the form of the following statements:
Reporting Horizontal Positional Accuracy
“This data set was tested to meet ASPRS Positional Accuracy Standards for Digital Geospatial Data, Edition 2 (2023) for a __(cm) RMSEH horizontal positional accuracy class. The tested horizontal positional accuracy was found to be RMSEH = __(cm)”.
Reporting Vertical Positional Accuracy
“This data set was tested to meet ASPRS Positional Accuracy Standards for Digital Geospatial Data, Edition 2 (2023) for a __(cm) RMSEV vertical accuracy class. NVA accuracy was found to be RMSEV = __(cm). VVA accuracy was found to be RMSEV = __(cm).”
Reporting Three-Dimensional Positional Accuracy
“This data set was tested to meet ASPRS Positional Accuracy Standards for Digital Geospatial Data, Edition 2 (2023) for a ___ (cm) RMSE3D three-dimensional positional accuracy class. The tested three-dimensional accuracy was found to be RMSE3D = ___(cm).”
8.1.2 Accuracy Testing Does Not Meet ASPRS Standard Requirements
If testing is performed using fewer than thirty (30) checkpoints, accuracy assessment results should be reported in the form of the following statements:
Reporting Horizontal Positional Accuracy
“This data set was tested as required by ASPRS Positional Accuracy Standards for Digital Geospatial Data, Edition 2 (2023). Although the Standards call for a minimum of thirty (30) checkpoints, this test was performed using ONLY __ checkpoints. This data set was produced to meet a ___(cm) RMSEH horizontal positional accuracy class. The tested horizontal positional accuracy was found to be RMSEH = ___(cm) using the reduced number of checkpoints.”
Reporting Vertical Positional Accuracy
“This data set was tested as required by ASPRS Positional Accuracy Standards for Digital Geospatial Data, Edition 2 (2023). Although the Standards call for a minimum of thirty (30) checkpoints, this test was performed using ONLY __ checkpoints. This data set was produced to meet a ___(cm) RMSEV vertical positional accuracy class. The tested vertical positional accuracy was found to be RMSEV = ___(cm) using the reduced number of checkpoints.”
Reporting Three-Dimensional Positional Accuracy
“This data set was tested as required by ASPRS Positional Accuracy Standards for Digital Geospatial Data, Edition 2 (2023). Although the Standards call for a minimum of thirty (30) checkpoints, this test was performed using ONLY __ checkpoints. This data set was produced to meet a ___(cm) RMSE3D three-dimensional positional accuracy class. The tested three-dimensional positional accuracy was found to be RMSE3D = ___(cm) using the reduced number of checkpoints.”
8.2 Accuracy Reporting by Data Producer
In most cases, data producers do not have access to independent checkpoints to assess product accuracy. If rigorous testing is not performed by the data producer due to the absence of independent checkpoints, accuracy statements should specify that the data was “produced to meet” a stated accuracy. This “produced to meet” statement is equivalent to the “compiled to meet” statement used by prior standards when referring to cartographic maps. The “produced to meet” statement is appropriate for data producers who employ mature technologies and who follow best practices and guidelines through established and documented procedures during project design, data processing, and quality control. However, if enough independent checkpoints are available to the data producer to assess product accuracy, it will do no harm to report the accuracy using the statements provided in section 8.1 above.
If not enough checkpoints are available, but the data producer has demonstrated that they are able to produce repeatable, reliable results and thus able to guarantee the produced-to-meet accuracy, they may report product accuracy in the form of the following statements:
Reporting Horizontal Positional Accuracy
“This data set was produced to meet ASPRS Positional Accuracy Standards for Digital Geospatial Data, Edition 2 (2023) for a __(cm) RMSEH horizontal positional accuracy class.”
Reporting Vertical Positional Accuracy
“This data set was produced to meet ASPRS Positional Accuracy Standards for Digital Geospatial Data, Edition 2 (2023) for a __(cm) RMSEV vertical accuracy class.”
Reporting Three-Dimensional Positional Accuracy
“This data set was produced to meet ASPRS Positional Accuracy Standards for Digital Geospatial Data, Edition 2 (2023) for a ___(cm) RMSE3D three-dimensional positional accuracy class.”
9. The Standards introduced a new accuracy term, the Three-Dimensional Positional Accuracy:
The following formula defines the three-dimensional accuracy standard for any three-dimensional digital data as a combination of horizontal and vertical radial error. RMSE3D is derived from the horizontal and vertical components of error as follows:

RMSE3D = √(RMSEH² + RMSEV²)
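A minimal sketch of the 3D accuracy computation, combining the horizontal radial RMSE and the vertical RMSE in quadrature (the function name is illustrative):

```python
import math

def rmse_3d(rmse_h_cm, rmse_v_cm):
    """Three-dimensional positional accuracy: RMSE3D = sqrt(RMSEH^2 + RMSEV^2)."""
    return math.sqrt(rmse_h_cm ** 2 + rmse_v_cm ** 2)

# e.g., a product with 7.1 cm horizontal and 5.0 cm vertical accuracy
print(round(rmse_3d(7.1, 5.0), 1))  # 8.7
```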
10. The Standards introduced a new approach for assessing product accuracy by factoring in the accuracy of the surveyed checkpoints when computing product accuracy:
As we produce more and more accurate products, the errors in the surveying techniques used for the checkpoints that assess product accuracy, although small, can no longer be neglected, and they should be represented in computing the product accuracy. Currently, we quantify product accuracy while ignoring the errors in the surveyed checkpoints. In such a practice, our surveying techniques only approximate the datum, i.e., they produce a pseudo datum, and therefore we are evaluating the closeness of the data to the pseudo datum and not to the true datum. The following figure illustrates the current practices and the new ones proposed in Edition 2 of the ASPRS standards.

Currently, we model the error as the closeness of the product to the checkpoints, treating the checkpoints themselves as errorless:

RMSEproduct = RMSE of the product-minus-checkpoint differences

The proposed method combines the survey accuracy of the checkpoints with the measured differences in quadrature, so the reported value reflects closeness to the true datum rather than to the pseudo datum:

RMSEproduct = √(RMSEdifferences² + RMSEcheckpoints²)
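The quadrature combination of independent error sources described in this section can be sketched in a few lines of Python. The function name and argument layout below are illustrative, not taken from the standard's text; they assume the reading that the checkpoint survey error is added in quadrature to the RMSE measured against the checkpoints:

```python
import math

def reported_rmse(rmse_vs_checkpoints, checkpoint_survey_rmse):
    """Combine the RMSE of the product measured against checkpoints with
    the survey accuracy of the checkpoints themselves (same units),
    adding the two independent error sources in quadrature."""
    return math.sqrt(rmse_vs_checkpoints**2 + checkpoint_survey_rmse**2)
```

For example, a product measuring 3 cm RMSE against checkpoints surveyed to 4 cm accuracy would be reported at 5 cm relative to the true datum.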
Best Practices in Determining Product Accuracy*
- Check data should not be used in calibrating the tested products:
- Totally independent checkpoints
- Check data must be more accurate than the tested data:
- Two times more accurate
- Check data must be well distributed around the project area:
- Check data must be a valid statistical sample:
- Minimum of 30 checkpoints for orthos
- Minimum of 30 checkpoints for elevation data
* according to the ASPRS Positional Accuracy Standards for Digital Geospatial Data, Edition 2, v2 (2024)
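As a concrete illustration of the RMSE arithmetic behind these practices, the sketch below computes horizontal, vertical, and three-dimensional RMSE from checkpoint residuals. The function name and input format are illustrative; note that the standards call for a statistically valid sample (a minimum of 30 checkpoints), far more than the two used in this toy illustration:

```python
import math

def rmse_components(residuals):
    """Compute RMSEh, RMSEv, and RMSE3D from checkpoint residuals.

    residuals: list of (dx, dy, dz) tuples, each being the difference
    between the product coordinate and the surveyed checkpoint
    coordinate, all in the same units.
    """
    n = len(residuals)
    rmse_x = math.sqrt(sum(dx**2 for dx, _, _ in residuals) / n)
    rmse_y = math.sqrt(sum(dy**2 for _, dy, _ in residuals) / n)
    rmse_z = math.sqrt(sum(dz**2 for _, _, dz in residuals) / n)
    # Horizontal accuracy combines the x and y components in quadrature.
    rmse_h = math.sqrt(rmse_x**2 + rmse_y**2)
    # Three-dimensional accuracy combines horizontal and vertical error.
    rmse_3d = math.sqrt(rmse_h**2 + rmse_z**2)
    return {"RMSEh": rmse_h, "RMSEv": rmse_z, "RMSE3D": rmse_3d}
```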
The New ASPRS Standards and Number of Checkpoints
The new standards provide Table 8 with the recommended number of checkpoints required for validating product accuracy. A minimum of 30 checkpoints should be used to assess the vertical or horizontal accuracy of a product. For project areas larger than 10,000 square kilometers, the recommended number remains capped at 120 checkpoints.
| Project Area (Square Kilometers) | Total Number of Checkpoints for NVA |
|---|---|
| ≤1000 | 30 |
| 1001-2000 | 40 |
| 2001-3000 | 50 |
| 3001-4000 | 60 |
| 4001-5000 | 70 |
| 5001-6000 | 80 |
| 6001-7000 | 90 |
| 7001-8000 | 100 |
| 8001-9000 | 110 |
| 9001-10000 | 120 |
| >10000 | 120 |
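The pattern in the table above (30 checkpoints for areas up to 1000 square kilometers, 10 more per additional 1000 square kilometers, capped at 120) can be captured in a small helper. The function name is illustrative, not part of the standard:

```python
import math

def recommended_checkpoints(area_km2):
    """Recommended total NVA checkpoints following the pattern of the
    table above: 30 for areas up to 1000 km2, then 10 more for each
    additional 1000 km2 (or fraction thereof), capped at 120."""
    if area_km2 <= 0:
        raise ValueError("project area must be positive")
    if area_km2 <= 1000:
        return 30
    extra_blocks = math.ceil((area_km2 - 1000) / 1000)
    return min(30 + 10 * extra_blocks, 120)
```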
Elevation Data Quality Versus Positional Accuracy
When modeling terrain with lidar, it is important to be aware of the difference between elevation data quality and positional accuracy. In many instances, users of lidar data focus solely on point cloud accuracy as specified by sensor manufacturers, but an accurate lidar point cloud does not necessarily result in accurate modeling of the terrain, nor will it create accurate volumetric calculations: elevation data must also faithfully represent the terrain detail. Therefore, users should also consider point density as it relates to terrain roughness or smoothness, as this is an equally important aspect of accurate terrain modeling.
Terrain modeling methodologies (e.g., polygon-based Regular Triangulated Networks (RTNs) versus Triangulated Irregular Networks (TINs) versus Voxel-Based Networks) also affect the terrain model quality. Terrain analysis is sensitive to whether the software represents the point cloud as a TIN, a gridded surface, or an RTN. Methods that involve gridding the data are sensitive to grid cell size (post spacing). Note that lidar point density is an important factor when choosing grid cell size.
The figure below illustrates the relationship between terrain roughness and point density. While the point cloud in this example may have a vertical accuracy of RMSEV = 10 cm, TIN interpolation based on surrounding areas of low point density places the vertical position of point A at point A’, resulting in a vertical error of 2 meters in this example. The remedy is to acquire the point cloud at a higher density so that it more accurately represents the terrain detail. Attempting to use a low-density point cloud to represent terrain with high-frequency undulation will result in inaccurate volume estimations, regardless of the software or modeling algorithms used. Smoother terrain may be adequately represented with a lower-density point cloud; very smooth or flat terrain can be accurately modeled using a point cloud with a nominal post spacing (NPS) of a few meters or coarser.
The Nyquist-Shannon sampling theorem, which is well-known and widely used in signal processing, may be used to determine the point density required to accurately represent the project terrain. According to the Nyquist-Shannon sampling theorem, if a signal x(t) contains no frequencies higher than B Hz, then a sampling rate of greater than 2B samples per second (or 2B Hz) will be needed in order to reconstruct the original signal without aliasing.
For example, let us assume that the undulation rate of the terrain represents the highest frequency of the signal to be modeled, and the nominal point spacing represents the sampling rate needed to model the terrain without aliasing. If we want to accurately model rocky terrain where the spikes caused by these rocks appear every 30 cm on average, the nominal point spacing of the lidar data used to model this terrain should be less than 15 cm.
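The 30 cm / 15 cm example above follows directly from the Nyquist criterion: to avoid aliasing, the sampling interval must be less than half the spacing of the finest terrain feature to be captured. A trivial helper (the function name is assumed for illustration) makes the rule explicit:

```python
def max_nominal_point_spacing(terrain_feature_spacing):
    """Nyquist-Shannon sampling applied spatially: to capture terrain
    features recurring every `terrain_feature_spacing` units without
    aliasing, the nominal point spacing (NPS) must be kept below half
    that feature spacing."""
    return terrain_feature_spacing / 2.0
```

For rocky terrain with spikes every 0.30 m on average, `max_nominal_point_spacing(0.30)` gives an NPS ceiling of 0.15 m, matching the worked example.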
Summary and Final Tasks
Summary
Congratulations! You have just completed Lesson 8. You may have noticed from the different sections of the lessons that the UAS market is growing rapidly. There are quite a few manufacturers of civilian UAS, as well as software and sensor producers. User requirements will drive the selection of the UAS and the processing software that are right for the job. The required UAS endurance, range, and payload capacity will differ from one application to another; however, most applications will favor greater endurance, longer range, and heavier payload if the price is right.
In this lesson, you also learned about the value of evaluating data quality and accuracy and how to use the new ASPRS standards to report such quality and accuracy factors.
By now, you should be finishing generation of the orthophoto and digital elevation model products using Pix4D and the sample imagery. Samples of the products must be submitted with your project report and presented next week during your presentation.
Final Tasks
| 1 | Study Lesson 8 materials and the textbook chapters assigned to the lesson |
|---|---|
| 2 | Complete Lesson 8 Quiz |
| 3 | Submit your COA Application |
| 4 | Complete your discussions for the assignment on "FAA Roadmap" |
| 5 | Complete your discussions for the assignment on "Differences Between Rules and Regulations" |
| 6 | Attend the weekly call on Thursday evening at 8:00pm ET |