Lesson 5: Geospatial Mapping and Maps Production
Lesson 5 Introduction
Welcome to Lesson 5! In this lesson, you will become familiar with the photogrammetric process, the processing systems, and data generation from an image-based UAS. Most UAS applications today include some form of camera system (video or still), from which different interpretations and therefore different applications have evolved. You will also develop an understanding of processes such as aerial triangulation and ortho rectification, which are the backbone of any image processing facility. The photogrammetric textbook Elements of Photogrammetry with Applications in GIS will be your companion, beside the lesson notes, in understanding the topic.
Lesson Objectives
At the successful completion of this lesson, you should be able to:
- understand the concept of sensor and product geolocation;
- understand the concept of direct geo-referencing;
- understand the concept of aerial triangulation;
- outline complete UAS data processing workflow;
- distinguish between different products obtainable from different UAS payload sensors.
Lesson Readings
Course Textbooks
- Chapters 1, 11, 16, and 17 of the textbook: Elements of Photogrammetry with Applications in GIS, 4th edition
- Chapters 2 and 9 of the textbook: Fundamentals of capturing and processing drone imagery and data
- Chapter 2 of the textbook: Unmanned vehicle systems for geomatics: towards robotic mapping
Lesson Activities
- Study lesson 5 materials on CANVAS/Drupal and the textbook chapters assigned to the lesson
- Start your first post for the discussion on "Human Elements of UAS."
- Submit your "CONOP and Risk Assessment" assignment report
- Complete quiz 5
- Start UAS Data Processing Using Pix4D for Exercise 2
- Submit final project idea
- Attend the weekly call and Exercise 2 training on Thursday evening at 8:00 pm ET
The Photogrammetric Process
In this section, you will learn about the photogrammetric process and the steps imagery goes through in order to produce an ortho photo or digital elevation model.
Figure 7.1 illustrates the different steps of processing that imagery from a UAS is subject to in order to produce a mapping product such as an ortho photo or digital elevation model.

Figure 7.1 Process flow of the photogrammetric processing
As we learned in Lesson 4, the process starts with mission planning. Once all the parameters and requirements are defined for the mission, a flight plan is developed and aerial imagery is acquired according to the project specifications. The resulting imagery is reviewed to ensure it meets the expected quality. Following the image QC, field work is conducted to survey the necessary ground control points. The ground control survey can be conducted either before or after the imagery acquisition.
Once the imagery acquisition and the ground control survey are completed, work can begin on the process of aerial triangulation. Aerial triangulation, as will be described in section 7.2, is performed to determine the position and the orientation of the camera at the moment of exposure of each image. It includes several processing concepts, such as interior and exterior orientation, relative orientation, and absolute orientation. Aerial triangulation is achieved through processing software built on rigorous least-squares mathematical models. Once the aerial triangulation is completed, the imagery is ready to go through other processing steps such as ortho rectification and digital elevation modeling.
To Read
- Chapter 1 of Elements of Photogrammetry with Applications in GIS, 4th edition.
Imagery Geo-location
In this section, you will learn about the concept of geo-referencing imagery. Without geo-referencing, no further photogrammetric processing of the imagery can take place.
In order to utilize the photogrammetric mathematical model, i.e., the collinearity condition, for the production of any mapping products, the following information needs to be made available:
- The exterior orientation parameters for every image: six parameters consisting of the camera attitude or orientation, represented by the three rotation angles omega, phi, and kappa, and the camera position, represented by the three coordinates Easting, Northing, and Elevation at the moment of image exposure.
- The camera interior geometry parameters: the calibrated lens focal length, the principal point coordinates, and the lens distortion, as discussed in Lesson 6.
- The size of the CCD array: the number of pixels contained in the CCD array along the width and the height of the array.
- The physical size of the CCD pixel (pixel pitch): usually provided in microns, such as 14 µm (1 mm = 1,000 µm).
- Ground Controls: A ground control is a feature in the imagery with known accurately surveyed coordinates. Depending on the required accuracy of the final products, ground controls can be omitted in some situations.
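The CCD dimensions and pixel pitch listed above determine how much ground each pixel covers and the footprint of a single image. A minimal sketch in Python, using the standard relation GSD = flying height * pixel pitch / focal length (the sensor and flight values below are hypothetical, chosen only for illustration):

```python
def ground_sample_distance(pixel_pitch_um, focal_length_mm, flying_height_m):
    """Ground sample distance (m) for a nadir image over flat terrain."""
    pixel_pitch_m = pixel_pitch_um * 1e-6
    focal_length_m = focal_length_mm * 1e-3
    return flying_height_m * pixel_pitch_m / focal_length_m

def footprint(ccd_width_px, ccd_height_px, gsd_m):
    """Ground footprint (width_m, height_m) of a single image."""
    return ccd_width_px * gsd_m, ccd_height_px * gsd_m

# Hypothetical sensor: 4.5 µm pixels, 15 mm lens, flown at 100 m
gsd = ground_sample_distance(pixel_pitch_um=4.5, focal_length_mm=15.0,
                             flying_height_m=100.0)
w, h = footprint(4000, 3000, gsd)
print(round(gsd * 100, 2), "cm GSD;", round(w, 1), "x", round(h, 1), "m footprint")
```

Doubling the flying height doubles the GSD and the footprint dimensions, which is why the mission plan from Lesson 4 fixes the altitude before anything else.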
In this section, we will focus on the process of determining the six exterior orientation parameters. The camera position can be measured accurately using the airborne GPS technique with a GPS antenna on board the UAS. The three camera position coordinates can also be computed through the process of aerial triangulation, as we will discuss soon. There are two methods for determining the camera attitude or orientation: the aerial triangulation process and direct measurement from the IMU, as we discussed in Lesson 6.
Aerial Triangulation and Bundle Block Adjustment
Aerial triangulation is usually performed on a photogrammetric block (Figure 7.2), which consists of all the imagery acquired over the project area. Figure 7.2 illustrates a photogrammetric block consisting of three strips, each with multiple overlapping images, and shows the different types of image overlap. The top and middle strips contain images with 60% forward lap, while the bottom strip contains imagery with 80% forward lap. You may also notice in the figure that the middle and bottom strips overlap by 30%. Such overlap is called side lap.

In the last section (the photogrammetric process), we mentioned a few terms related to aerial triangulation. We will briefly describe these terms in the following sub-sections:
Relative Orientation
Relative Orientation is the process of orienting images relative to one another (i.e., it recreates the “relative” position and attitude of the images at the instants of exposure), as illustrated below. Figure 7.3 shows four images that are connected to each other in space through the aircraft/GPS trajectory but are not necessarily connected to the ground datum (i.e., they are floating in space).

Relative orientation is an important process that must be performed before we scale the imagery to the ground datum through the process of absolute orientation, which will be discussed in the next section. To form a cohesive block, all images in the block should be relatively oriented with respect to each other through the process of relative orientation.
Absolute Orientation
Absolute orientation is the process of leveling and scaling the stereo model (formed from two images) with respect to a reference plane or datum using ground control points, as shown in Figure 7.4. Figure 7.4 represents the same four images as Figure 7.3, but this time the block is tied to the ground datum through the use of seven ground control points (represented by the black stars).

Without performing the absolute orientation process, the generated map would not be associated with a specific location in space. Generating maps with geo-location information, such as a datum and coordinate system, can only happen after absolute orientation is performed following relative orientation.
Exterior Orientation
Exterior orientation of a photograph defines its position and orientation in the object space. There are six elements of exterior orientation: X, Y, and Z of the exposure station position, and the three angles that define the angular orientation: ω, φ, and κ. The six elements of exterior orientation are not known and must be computed through a process called space resection within the aerial triangulation process. Here are the definitions of the three orientation angles illustrated in Figure 7.5:
- Omega (ω): Rotation about the x axis. It is equivalent to the angle Roll of the navigation system.
- Phi (φ): Rotation about the y axis. It is equivalent to the angle Pitch of the navigation system.
- Kappa (κ): Rotation about the z axis. It is equivalent to the angle Yaw of the navigation system.

Knowing the six exterior orientation parameters for an image is necessary for any photogrammetric processing aimed at creating products from such an image. Whether you perform map compilation on a stereo plotter or generate an ortho image, the six exterior orientation parameters need to be computed before you start the production process.
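The three rotation angles can be assembled into the photo orientation matrix that the later computations rely on. A minimal sketch, assuming the omega-phi-kappa sequential rotation order commonly used in photogrammetry (rotation about x, then y, then z):

```python
import math

def rotation_matrix(omega, phi, kappa):
    """Photo orientation matrix M = M_kappa * M_phi * M_omega.
    Angles in radians: omega about x (roll), phi about y (pitch),
    kappa about z (yaw)."""
    so, co = math.sin(omega), math.cos(omega)
    sp, cp = math.sin(phi), math.cos(phi)
    sk, ck = math.sin(kappa), math.cos(kappa)
    return [
        [cp * ck,  co * sk + so * sp * ck,  so * sk - co * sp * ck],
        [-cp * sk, co * ck - so * sp * sk,  so * ck + co * sp * sk],
        [sp,      -so * cp,                 co * cp],
    ]
```

With all three angles zero (a perfectly level, north-aligned exposure), the matrix reduces to the identity; for any angles, the matrix stays orthonormal, which is what makes it a pure rotation.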
Space Resection
Space resection is the process of determining the camera position and orientation from the intersection of rays in space (see Figure 7.6). The method of space resection is a purely numerical method that uses the collinearity equations to simultaneously yield all six elements of exterior orientation (X, Y, Z, omega, phi, and kappa). Once these elements are known, a stereo plotter can measure the photo coordinates (x, y) of any point in a photo, and the ground coordinates can be computed. Ortho rectification software also utilizes space resection for ortho-rectifying an image. Figure 7.6 illustrates six images, each with rays from the ground entering the camera through the lens. The intersection of the rays entering the camera at point "O" represents the photo center location, which is important for determining the exterior orientation parameters described earlier.

Aerial triangulation
Aerial triangulation can be defined as the process of densification of a sparsely distributed horizontal and vertical control network through:
- measurements performed on overlapping aerial photographs,
- known ground control points coordinates on the ground, and
- mathematical modeling and solution.
A conventional (film based) aerial triangulation process consists of the following steps:
- preparation
- point marking (for tie points and pass points marking)
- measurement
- computation
Data Preparation: Using a stereoscope, three points are selected down the center of each photo, approximately 1” from the top and bottom and at the center. These points are also marked on every overlapping photo on which they occur. They are often called “pass points” along strips and “tie points” between strips. See Figure 7.7. Ideally, pass points are selected in flat areas of high contrast that are free of obstructions and shadows.
Figure 7.7 represents three overlapping photos that are used to extract pass points between them. Notice that the three middle points for the middle photo (a, b, c) were located and marked on the same locations in the overlapping right and left image. This process is called point marking.

Point Marking: A good point marking device is characterized by:
- precise optics for stereo viewing;
- variable zoom - 6X to 25X;
- laser beams, hot needles, mechanical or electric drills that will remove emulsion from the diapositive;
- the ability to create a very precise circular mark, typically from 40 to 80 microns in diameter.
One of the earliest commercially successful point marking devices was the P.U.G., manufactured by Wild Heerbrugg Instruments, Inc. See Figure 7.8. Over time, pass points marked on diapositives became known simply as pug points.

Point Measurement: A skilled technician using analytical stereo plotting instruments records the location of each previously marked pass point and tie point on each photograph.
Numerical Computation of Aerial Triangulation: Here is a summary of the steps taken within the processing software:
- Processing numerical observations of individual photographs to build a cohesive block.
- Forming individual photos into strips by successive, relative orientations, using the common primary pass points between overlapping photos.
- Computing horizontal and vertical coordinates for each strip.
- Converting strip coordinates to ground coordinates using the ground control contained within a given strip.
- Applying simultaneous polynomial equations (horizontal and vertical) to produce final adjusted values for all points.
- Calculating exterior orientation elements for each photo to be used as input to a bundle adjustment program.
Unlike aerial triangulation of the past, which was performed on film-based imagery using optical-mechanical instruments, aerial triangulation today is performed on digital imagery using a complete softcopy approach called softcopy aerial triangulation. In softcopy aerial triangulation, the manual work of point marking and measurement is left to the automation of the software. It is more efficient and more accurate.
Mathematical Model for Aerial Triangulation
The backbone of the computational model in photogrammetry is a pair of equations called the collinearity equations, which are based on the collinearity condition: the exposure station, an object point, and its image on the photo all lie on a single straight line. The two collinearity equations are:

x = x0 - f [m11(X - Xc) + m12(Y - Yc) + m13(Z - Zc)] / [m31(X - Xc) + m32(Y - Yc) + m33(Z - Zc)]

y = y0 - f [m21(X - Xc) + m22(Y - Yc) + m23(Z - Zc)] / [m31(X - Xc) + m32(Y - Yc) + m33(Z - Zc)]

Where,
Xc, Yc, Zc = camera perspective center coordinates
X, Y, Z = ground point position
x, y = point position on the image
mij = elements of the photo orientation (rotation) matrix
f = camera lens focal length
x0, y0 = principal point of autocollimation
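With the orientation matrix and exterior orientation known, the collinearity equations can be evaluated directly. A minimal Python sketch; the nadir camera in the example (100 m above the origin, identity orientation, 50 mm lens) is a hypothetical test case:

```python
def collinearity(ground, camera, M, f, x0=0.0, y0=0.0):
    """Project a ground point (X, Y, Z) into photo coordinates (x, y)
    using the collinearity equations. M is the 3x3 photo orientation
    matrix, camera the perspective center (Xc, Yc, Zc), f the focal
    length, and (x0, y0) the principal point."""
    X, Y, Z = ground
    Xc, Yc, Zc = camera
    dX, dY, dZ = X - Xc, Y - Yc, Z - Zc
    num_x = M[0][0] * dX + M[0][1] * dY + M[0][2] * dZ
    num_y = M[1][0] * dX + M[1][1] * dY + M[1][2] * dZ
    den = M[2][0] * dX + M[2][1] * dY + M[2][2] * dZ
    return x0 - f * num_x / den, y0 - f * num_y / den

# Level camera (identity orientation) 100 m above the origin, f = 50 mm
I = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
x, y = collinearity((10.0, 0.0, 0.0), (0.0, 0.0, 100.0), I, f=0.05)
```

A ground point directly below the camera projects to the principal point; the point 10 m east lands 0.5 mm from it on the focal plane, which is the 1:2,000 photo scale (f / H = 0.05 / 100) at work.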
Direct Geo-referencing
In the last two decades, navigation technology has advanced to the point that it enabled manufacturers of Inertial Navigation Systems (INS), traditionally used for missile and submarine navigation, to produce Inertial Measurement Units (IMU) that accurately measure the orientation of airborne sensors such as cameras and LiDAR. The IMU, which we briefly described in Lessons 2 and 6, is used either to replace the process of aerial triangulation or to assist its solution. Most UAS, including small ones, carry on board a GPS unit and an IMU. Unfortunately, most of the miniaturized, low-cost IMUs used on UAS are not accurate enough to replace aerial triangulation. Such low-accuracy IMUs are usually used to navigate the UAS but not to support the aerial triangulation. On the other hand, the GPS antenna on most UAS is of survey-grade quality and can receive signals from both GPS and GLONASS. Some UAS can receive signals from OMNISTAR with real-time corrections.
To Read
- Chapter 11 and 17 of the textbook: Elements of Photogrammetry with Applications in GIS, 4th edition
Ground Control Requirement
In this section, we will discuss a topic important to any photogrammetric work: ground control.
A ground control, which we introduced in the last section, is a target in the project area with known coordinates (X,Y,Z). Accurate, well-placed ground controls are essential elements for any photogrammetric project utilizing aerial triangulation.
There are two standard types of ground control points (Figure 7.9), those are:
- Photo Identifiable (Photo ID): This could be any feature on the ground, such as a manhole or parking stripe (the right two images of Figure 7.9). This type of control does not need to be surveyed before the UAS flies the project, as it can be surveyed later on.
- Pre-marked (Panels): This type is created by marking or painting certain figures or symbols on the ground before the UAS flies the project (the left two images of Figure 7.9). This type of control also does not need to be surveyed before the UAS flies the project, as it can be surveyed later on; however, if temporary markers that can be disturbed or moved are used, they should be surveyed ahead of time.
Many projects make use of one type or the other, or a combination of the two.

The leftmost image in Figure 7.9 represents a pre-marked control point set on black and white fabric, while the image next to it represents a pre-marked control point that is spray-painted on a sidewalk. The rightmost images represent different types of photo identifiable ground control points. On these images, the user can pick any visible ground feature (such as a parking stripe or the edge where the concrete meets the asphalt pavement on a bridge) to use as a control point.
There are two techniques for surveying ground control points. The most common is the RTK GPS technique, as it is the fastest and least expensive. An RTK survey results in a horizontal accuracy of about 2 cm and a vertical accuracy of about 3 cm, and it is widely used for mapping projects. The second technique, which is much more expensive, is differential leveling for height determination combined with static GPS for the horizontal survey. Differential leveling results in around 1 cm vertical accuracy. Here in the United States, surveying a point using RTK GPS usually costs between $150 and $300, depending on the location and terrain. Differential leveling costs around $1,000 to $2,000 per point, again depending on location and terrain. Selecting one surveying technique over another depends on the expected accuracy of the mapping products. Consult the American Society for Photogrammetry and Remote Sensing (ASPRS) Positional Accuracy Standards for Digital Geospatial Data and chapter 9 to determine the ground control accuracy requirements based on product accuracy.
Ground control requirements vary from one project to another depending on the project specifications and its geographic extent. Projects with high geometrical accuracy requirements require more ground controls. Figure 7.10 illustrates typical distribution of ground controls in a rectangular shaped project when the aircraft does not carry on board a GPS antenna, resulting in a non-GPS supported aerial triangulation, or what is usually called “conventional aerial triangulation.”

However, most aerial triangulation today is solved with airborne GPS data. Having GPS data in the aerial triangulation process saves a tremendous number of ground controls. Figure 7.11 illustrates the low density of ground controls required for GPS-based aerial triangulation.

Even with ground controls at the edges of the flight lines as shown in Figure 7.11, adding a few controls along the interior of the block (see Figure 7.12) is a wise strategy, especially when high accuracy is expected from the aerial triangulation. Savings can be made in the control survey by replacing most of the ground control points at the edges of flight lines with imagery taken along a flight line perpendicular to the project flight lines at each end of the block (see Figure 7.13). Such additional flight lines, perpendicular to the normal project flight lines, are called "cross flight lines."

Adding two cross flights (strips), one at each edge of the photogrammetric block, not only saves on the number and cost of ground control points but also strengthens the mathematical model within the bundle block adjustment computations. It helps in modeling and solving GPS and IMU problems.

To summarize the subject of ground control requirements for a block, we start with Figure 7.10, which represents the most control-consuming case: conventional aerial triangulation, where no GPS is used on the camera during imagery acquisition. Then comes the most efficient method of aerial triangulation, GPS-based aerial triangulation. Figures 7.11 through 7.13 represent different distributions of ground controls for GPS-based aerial triangulation. Each case has its strengths and weaknesses; however, the configuration in Figure 7.13 represents the most economical approach to reducing the ground control requirement.
To Read
- Chapter 16 of the textbook: Elements of Photogrammetry with Applications in GIS 4th edition
- Overview of the ASPRS Positional Standards
Products Generation
In this section, we will discuss products generated from image-based UAS. Although imagery collected by UAS can be used in a variety of remote sensing applications, we will focus in this lesson on two main mapping products: the ortho photo and the digital elevation model.
Digital Ortho Photo (Ortho Map)
Digital ortho, ortho photo, orthographic image, and ortho map are different names for the same thing. An ortho photo, the term I will use most of the time, is an image corrected (through the process of ortho-rectification) for the effects of terrain relief and sensor tilt, converting it into a map of uniform scale. Raw images taken over variable terrain have different scales at different locations in the image. A pixel covering the ridge of a mountain covers a smaller ground spot, as the ridge is closer to the sensor (aircraft), than a pixel covering a valley.
Performing the process of ortho-rectification resamples all these pixels so that each pixel covers exactly the same ground resolution, or GSD, regardless of where it falls in the image or which terrain it covers. In other words, ortho-rectification means reprocessing the raw digital image to eliminate the scale variation and image displacement caused by terrain relief and sensor (camera) tilt.
Because ortho photos are geometrically corrected, they can be used as map layers in GIS for overlay, management, update, analysis, or display operations. This is a great advantage of the ortho photo over the raw imagery.
The five primary ingredients for the ortho photo generation are the following:
- digital imagery;
- digital elevation model or topographic dataset;
- exterior orientation parameters from aerial triangulation or IMU;
- camera calibration report;
- photogrammetric processing software that utilizes collinearity equations.
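Conceptually, these ingredients come together in a simple loop: for each output ground cell, read the elevation from the DEM, project the ground point into the source image through the collinearity equations, and copy that pixel. A much-simplified sketch, assuming a nadir image with identity orientation and hypothetical values throughout (production software adds finer resampling, occlusion handling, and color balancing):

```python
def ortho_rectify(image, pixel_size, f, camera, dem, grid_xs, grid_ys):
    """Nearest-neighbor ortho-rectification sketch. For each ground cell
    (X, Y): read Z from the DEM, project through the collinearity
    equations (identity orientation assumed), and sample the image."""
    Xc, Yc, Zc = camera
    rows, cols = len(image), len(image[0])
    pp_col, pp_row = cols // 2, rows // 2   # principal point at image center
    ortho = []
    for Y in grid_ys:
        line = []
        for X in grid_xs:
            Z = dem(X, Y)
            den = Z - Zc
            x = -f * (X - Xc) / den          # photo coordinates, identity M
            y = -f * (Y - Yc) / den
            col = pp_col + int(round(x / pixel_size))
            row = pp_row - int(round(y / pixel_size))  # image rows grow downward
            line.append(image[row][col]
                        if 0 <= row < rows and 0 <= col < cols else None)
        ortho.append(line)
    return ortho

# Toy 11x11 image whose pixel value encodes its (row, col) position
image = [[r * 100 + c for c in range(11)] for r in range(11)]
flat_dem = lambda X, Y: 0.0
ortho = ortho_rectify(image, pixel_size=0.0001, f=0.05,
                      camera=(0.0, 0.0, 100.0), dem=flat_dem,
                      grid_xs=[-1.0, 0.0, 1.0], grid_ys=[-1.0, 0.0, 1.0])
```

Feeding the loop a DEM with a spike, or an extent that does not cover the image, reproduces exactly the smearing and completeness defects discussed below.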
An ortho photo produced using a digital elevation model for the bare earth (no buildings or trees in it) is usually called “ground ortho.” In ground ortho, the building lean is not removed in the process of ortho rectification, and buildings will appear to lean radially away from the center of the image, as you can see in the image of the World Trade Center in Baltimore on the left side of Figure 7.14. On the other hand, "true ortho" is an ortho where the buildings look as if they are erected straight up or as if you are looking at them from right above the roofs, as is illustrated in the right image of Figure 7.14. True ortho is very useful in urban areas, such as downtowns with tall buildings, as it reveals all the information in the streets and pathways surrounding the buildings. True ortho is computationally intensive and needs three-dimensional models of all buildings in the image, which makes it more costly than ground ortho.
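The building lean visible in a ground ortho can be quantified with the standard relief displacement formula for a vertical photo, d = r * h / H. A small sketch with hypothetical values:

```python
def relief_displacement(r, h, H):
    """Relief displacement d = r * h / H on a vertical photo, where r is
    the radial distance of the feature from the photo nadir, h the
    feature height above the datum, and H the flying height above the
    datum. Units of d follow the units of r."""
    return r * h / H

# Hypothetical case: a 50 m building imaged 0.10 m (100 mm) from the
# photo nadir on a flight flown 500 m above the datum.
d = relief_displacement(r=0.10, h=50.0, H=500.0)
```

The displacement grows with radial distance and building height, which is why lean is worst for tall buildings near the image edges, and why true ortho needs 3-D building models to remove it.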

It is very important to evaluate the quality of the ortho-rectification, as the process may introduce defects. Common defects and their root causes are the following:
- Image completeness gaps. Root cause: image not adequately covered by the DEM.
- Image smearing. Root causes: anomalies or spike errors in the DEM; excessive relief.
- Double image on adjacent ortho sheets. Root causes: improper camera orientation; inaccurate DEMs.
- Missing image. Root causes: improper camera orientation; inaccurate DEMs.
- Mismatch of two adjacent orthos. Root causes: inaccurate camera position and orientation; inaccurate DEMs.
Digital Terrain Data
Similar to LiDAR, stereo imagery can be used to generate accurate digital elevation models. Most software used for UAS data processing includes image matching capabilities that produce fine-quality elevation models, usable for the ortho rectification process and other terrain modeling purposes. The main ingredients for digital terrain data generation are:
- digital imagery;
- exterior orientation parameters from aerial triangulation or IMU;
- camera calibration report;
- photogrammetric processing software that utilizes the image matching technique.
Until recently, users did not trust auto-correlated digital terrain data because of its poor quality. However, in the last couple of years, software development companies adopted an algorithm called "Semi-Global Matching," or SGM, that produces fine-quality elevation data that in some ways competes with the elevation models generated by LiDAR. This renewed users' interest in using imagery to develop fine-quality digital elevation data. SGM is an image matching approach that originated in the computer vision community. It aggregates per-pixel matching costs along multiple paths across the image, a technique that was not possible with the older auto-correlation algorithms.
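The core of SGM is a cost-aggregation recurrence that rewards smooth disparity surfaces while allowing jumps at true edges. A single-path version can be sketched in a few lines; real SGM implementations aggregate along 8 or 16 paths and add sub-pixel refinement, so this left-to-right scanline pass only illustrates the recurrence (the penalty values P1 and P2 are hypothetical tuning constants):

```python
def aggregate_scanline(costs, P1=1.0, P2=4.0):
    """Aggregate matching costs along one path (left to right), SGM style.
    costs[x][d] is the matching cost at pixel x for disparity d.
    P1 penalizes small (one-level) disparity changes, P2 larger jumps."""
    n, ndisp = len(costs), len(costs[0])
    L = [list(costs[0])]                      # first pixel has no predecessor
    for x in range(1, n):
        prev = L[-1]
        prev_min = min(prev)
        row = []
        for d in range(ndisp):
            candidates = [prev[d]]            # same disparity: no penalty
            if d > 0:
                candidates.append(prev[d - 1] + P1)
            if d + 1 < ndisp:
                candidates.append(prev[d + 1] + P1)
            candidates.append(prev_min + P2)  # any larger jump
            row.append(costs[x][d] + min(candidates) - prev_min)
        L.append(row)
    return L

def best_disparities(L):
    """Winner-take-all disparity per pixel from aggregated costs."""
    return [min(range(len(row)), key=row.__getitem__) for row in L]
```

On a scanline whose raw costs already favor one disparity, the aggregation simply confirms it; its value shows on noisy costs, where the P1/P2 penalties suppress isolated spurious matches.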
As with ortho photo production, digital elevation data needs to be evaluated to verify its quality.
There are several terms used in the geospatial community to describe digital terrain data:
- Digital Surface Model (DSM): Also called a reflective surface. Such a surface represents the original LiDAR data before any features such as buildings and trees are removed from it. It also represents the elevation model generated from the image auto-correlation process in photogrammetry. Both LiDAR and image auto-correlation collect data on top of natural surfaces such as terrain and trees, and man-made features such as buildings and other structures (Figures 7.15 and 7.16 below).


- Digital Terrain Model (DTM): DTM is a term usually associated with digital elevation models of just the ground (trees and man-made structures are removed). A DTM is sometimes augmented with 3-D modeling of abrupt changes in the terrain using 3-D lines called break lines. A DTM usually contains arbitrarily distributed elevation points (not at equal spacing or on a grid), called mass points, and break lines.
- Digital Elevation Model (DEM): DEM is a term usually associated with a gridded digital terrain model, in which points are distributed at equal intervals (a grid).
- Triangulated Irregular Network (TIN): The term TIN describes the method that most software uses to model digital terrain data and present it on screen. A TIN surface is a set of adjacent, non-overlapping triangles computed from irregularly spaced data points with x, y horizontal coordinates and z vertical elevations (Figure 7.17).
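For a gridded DEM, elevations between grid posts are commonly estimated by bilinear interpolation of the four surrounding posts, which is what ortho-rectification software does when it looks up Z at an arbitrary ground position. A minimal sketch (the grid layout is a simplifying assumption, and the query point must lie strictly inside the grid):

```python
def dem_elevation(grid, cell, x, y):
    """Bilinear interpolation of a gridded DEM. grid[row][col] holds
    elevations; the post at row 0, col 0 sits at (x, y) = (0, 0) and
    'cell' is the grid spacing in both directions."""
    col, row = x / cell, y / cell
    c0, r0 = int(col), int(row)
    fx, fy = col - c0, row - r0               # fractional position in cell
    z00, z01 = grid[r0][c0], grid[r0][c0 + 1]
    z10, z11 = grid[r0 + 1][c0], grid[r0 + 1][c0 + 1]
    top = (1 - fx) * z00 + fx * z01           # blend along x on each edge
    bottom = (1 - fx) * z10 + fx * z11
    return (1 - fy) * top + fy * bottom       # then blend along y
```

At a grid post the interpolation returns the post elevation exactly; at the center of a cell it returns the mean of the four corners, so the interpolated surface is continuous across the whole grid.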

To Read
- Chapter 13 of the textbook: Elements of Photogrammetry with Applications in GIS, 4th Edition
Summary and final tasks
Summary
Congratulations! You have just completed Lesson 5. I hope that you appreciate the value of UAS imagery in producing geospatial data suitable for many applications in our day-to-day life. The ortho photo and the digital elevation model are indispensable tools used in many environmental and engineering projects. Without them, we would have to put many boots on the ground to survey the terrain and provide the necessary data for engineering and planning. Practicing with the processing software Pix4D, which I selected for the course, will help you tremendously in appreciating the quality and value of the digital ortho photo and digital elevation model.
Final Tasks
| 1 | Study Lesson 5 materials and the textbook chapters assigned to the lesson |
|---|---|
| 2 | Start your first post for the "Human Elements of UAS" discussion. Participate in the "Human Elements of UAS" Discussion Forum: post your opinion on the following topic and respond to at least two of your peers' postings. Considering all elements that make a functioning UAS, one may think that the human element is the most important element of a UAS implementation. The human element complements and interacts in one way or another with most other UAS elements, such as the aerial vehicle, command and control, payloads, data and communication links, and launch and recovery. With the rapid pace of advancement in technology, one may expect that the importance of the human element will diminish as UAS technology matures and the UAS becomes more advanced. (3 points or 3%) The due date for this assignment is at the end of Lesson 6. |
| 3 | Submit your "CONOP and Risk Assessment" assignment report |
| 4 | Complete Lesson 5 Quiz |
| 5 | Start UAS Data Processing Using Pix4D for exercise 2 |
| 6 | Submit final project idea |
| 7 | Attend the weekly call and Exercise 2 training on Thursday evening at 8:00pm ET |