Lesson 4: UAS Mission Planning and Control

Lesson 4 Introduction

Welcome to Lesson 4! In this lesson, you will practice planning and designing a UAS mission. We will focus on imaging sensors (digital cameras), as they are widely used for geospatial projects. Successful execution of any mapping project requires a tremendous amount of planning prior to mission execution, and that planning must be done by an experienced person who is familiar with all aspects of mapping. Mission planning includes the following categories:

  1. defining product specifications;
  2. studying area maps;
  3. planning the aerial imagery;
  4. planning the ground controls;
  5. selecting procedures, personnel, and production instruments;
  6. estimating costs;
  7. developing a delivery schedule.

You will understand and become familiar with the main parameters that need to be considered when selecting a UAS for geospatial business activities. You will also learn to recognize the main manufacturers of UAVs, aerial acquisition sensors, and processing software. Not much material in the course textbooks deals directly with these subjects, but some information can be derived from them indirectly. In addition, several research studies on the status of the market and its future have been conducted by private and public groups.

Unmanned Aerial Vehicles (UAVs) are becoming the most dynamic growth sector of the aerospace industry; based on a research study conducted by the Teal Group Corporation, the global UAV market is expected to top US $54 billion within the next decade or so.

Lesson Objectives

At the successful completion of this lesson, you should be able to:

  • understand basic requirements for mission planning;
  • understand sensor internal geometry;
  • describe factors affecting flight plans such as way points, product resolution and accuracy, aircraft speed, etc.;
  • practice flight planning for a UAS mission;
  • understand calibration requirements for imaging sensors and auxiliary systems;
  • understand the major considerations in selecting a UAS for geospatial business;
  • differentiate between the main providers of UAS;
  • discriminate between the main providers of aerial sensors for UAS;
  • recognize the main providers of software for UAS data processing.

Lesson Readings

Course Textbooks

  • Chapters 3, 11, 12, and 18 of the textbook: Elements of Photogrammetry with Applications in GIS, 4th edition
  • Chapters 4 and 8 of the textbook: Fundamentals of capturing and processing drone imagery and data

Google Drive (Open Access)

Lesson Activities

  • Study lesson 4 materials on CANVAS/Drupal and the textbook chapters assigned to the lesson
  • Complete quiz 4
  • Complete your discussions for the assignment on "SWOT Analysis."
  • Continue working on the "CONOP and Risk Assessment" report assignment
  • Practice Mission Planner software
  • Submit your Pix4D processing materials for exercise 1
  • Attend the weekly call and the Mission Planner software training on Thursday evening at 8:00 pm ET

Studying Area Maps

In this section, you will understand the value of studying area maps for a project prior to the development of the flight plan.

Flight planners should acquaint themselves with the project area through two types of maps before proceeding with further design steps: U.S. Topo Quadrangle Maps and Sectional Aeronautical Charts.

U.S. Topo Quadrangle Maps

A U.S. Topo Quadrangle Map is mainly a topographic map showing the contours of the land (terrain elevation); see Figure 4.1. This type of map reveals all the information a planner needs about the topography of the project area. Topography affects flight plan parameters such as flight line spacing and imagery spacing. Quad maps can be downloaded from the USGS. You can also review a sample of such maps for the State College area.

example of US Topo Quadrangle Map
Figure 4.1 Sample of US Topo Quadrangles Map
Source: USGS

Sectional Aeronautical Chart

Sectional Aeronautical Charts, which are also called VFR charts (Figure 4.2), are described by the FAA as “the primary navigational reference medium used by the VFR pilot community. The 1:500,000 scale Sectional Aeronautical Chart Series is designed for visual navigation of slow to medium speed aircraft. The topographic information featured consists of the relief and a judicious selection of visual checkpoints used for flight under visual flight rules. The checkpoints include populated places, drainage patterns, roads, railroads, and other distinctive landmarks. The aeronautical information on Sectional Charts includes visual and radio aids to navigation, airports, controlled airspace, restricted areas, obstructions, and related data. These charts are updated every six months, most Alaska Charts annually.” To better understand these charts, review the FAA “Aeronautical Chart User Guide.” You can also watch this YouTube video on learning how to read sectional charts.

The VFR acronym is adopted from “Visual Flight Rules” where a pilot relies on the visual see-and-avoid rule during flight. To download such charts, visit the FAA site.

example of a sectional aeronautical chart
Figure 4.2 Sample Sectional Aeronautical Chart
Source: FAA

The topographic map and the aeronautical chart provide an overview of the area and the contents of the ground cover (both natural and man-made), restricted airspace such as airport approaches, high towers, etc.

Visualize FAA Online Data and Charts

No less important than visualizing a sectional chart is utilizing the FAA's online services, which allow you to zoom in to your geographic location to check the airspace status and the allowed flight ceiling. Here are a couple of the free services available to the public:

  1. Visualize it: See FAA UAS Data on a Map
  2. B4UFLY

To Read

  1. Chapters 4 and 8 of the textbook: Fundamentals of capturing and processing drone imagery and data
  2. Section 18-10 of Chapter 18 of Elements of Photogrammetry with Applications in GIS, 4th edition

Sensors Characteristics

Focal Plane and CCD Array

The focal plane of an aerial camera is the plane where all incident rays coming from the object are focused. The focal plane is where the film of a film-based camera is placed. With the introduction of digital cameras, the focal plane is occupied by the CCD array, replacing the film.

A digital camera like the ones we use at home is called a “digital frame” camera to distinguish it from other designs of digital cameras, such as “push broom” cameras. Digital frame cameras have the same geometric characteristics as film cameras, which employ film as the recording medium.

A digital frame camera consists of a sensor that is a two-dimensional array of charge-coupled device (CCD) elements (a CCD element is also called a pixel). The sensor is mounted at the focal plane of the camera. When an image is taken, all CCDs of the sensor are exposed simultaneously, thus producing a digital frame. Figure 4.3 (from Wolf, page 75) illustrates how a digital camera captures an area on the ground that falls within the lens's field of view (FOV).

The size of a digital camera is measured by the size of its sensor. The higher the number of CCDs (pixels) in the sensor, the bigger and more expensive the camera. If a camera has a sensor of 4,000 pixels by 4,000 pixels, it is called a 16-megapixel camera, because it has 16,000,000 pixels. UAS imaging productivity, i.e., how many acres the UAS can cover in an hour, depends on the sensor size, battery life, and the lens focal length. The article "DJI Phantom 4 RTK vs. WingtraOne" clearly illustrates the difference in UAS productivity based on sensor and UAS capabilities. In that article, you will also learn about some fundamental capabilities that we usually expect from a mapping drone.

Lens Cone

The lens for a mapping camera usually contains compound lenses put together to form the lens cone. The lens cone also contains the shutter and diaphragm.

Compound Lens

The lens is the most important and most expensive part of a mapping aerial camera. Cameras on board UAS are not of that level of quality, as they were not manufactured to be used as mapping cameras. Mapping cameras are called metric cameras and are built so that the internal geometry of the camera holds its characteristics despite harsh working conditions and changing operational environments. Lenses for cameras on board UAS are smaller, lighter, and less expensive than those of standard mapping cameras. Lenses for mapping cameras should be calibrated to determine the accurate value of the focal length and the lens distortion (imperfection) characteristics.

Shutters

Shutters are used to limit the passage of light to the focal plane. The shutter speed of aerial cameras typically ranges between 1/100 and 1/1000 of a second. Shutters are of two types: focal-plane shutters and between-the-lens shutters; the latter is the most common for aerial cameras. Most digital camera shutters are designed according to one of two mechanisms: the leaf shutter (also called a mechanical, global, or dilating-aperture shutter) or the electronic rolling shutter (curtain or sliding shutter). The leaf shutter exposes the entire sensor array at once, while the rolling shutter exposes one line of pixels at a time. For aerial imaging from a moving platform such as a UAS, a leaf (global) shutter is recommended because it minimizes image blur. To understand the shortcomings of the rolling shutter, watch this video.

It is important to know which shutter your camera uses, as most processing software, including Pix4D, provides a correction for the rolling shutter effect. However, the software does not apply it automatically; you will need to activate that option before you start processing the imagery.

More information on different types of shutter mechanisms can be found on Wikipedia's Shutter (photography) page.

To Read

  1. Chapter 3 of the textbook: Elements of Photogrammetry with Applications in GIS, 4th edition

Geometry of Vertical Image

In order to understand mission flight planning, you need to understand the geometry of the image as it is formed within the camera. The size of the CCD array and lens focal length, coupled with flying altitude (above ground), determine the image scale or the ground resolution of the image. Therefore, it is essential to the work of the flight planner to have all of this information understood and available before starting to design a mission.

In photogrammetry, we usually deal with three types of imagery (photography). They are defined in terms of the angle that the camera optical axis makes with the vertical (nadir). Those are:

  1. true vertical photography: ±0° from nadir;
  2. tilted or near-vertical photography: greater than 0° but less than ±3° off nadir (the most used);
  3. oblique photography: between ±35° and ±55° off nadir.

For the purpose of this course, we will focus only on the first two types, and those are vertical and near-vertical photography.

Figure 4.3 illustrates the basic geometry of a vertical photograph or image. By vertical photograph or image, we mean an image taken with a camera that is looking down at the ground. As the aircraft moves, so does the camera, and this makes it impossible to take a truly vertical image. Therefore, the definition of a vertical image allows a few degrees of deviation from the nadir (the line connecting the lens's frontal point and the point on the ground exactly beneath the aircraft). In summary, a vertical image is an image taken either looking straight down at the ground or looking a few degrees off nadir to either side of the aircraft.

Basic geometry of a vertical image: see above for more details
Figure 4.3 Geometry of vertical image
Source: Elements of Photogrammetry with application in GIS, 4th edition, 2014 McGraw Hill

Scale of Vertical Image

As the sun's rays hit the ground, they reflect back toward the camera, and some actually enter the camera through the lens. This physical phenomenon enables us to express the ground-image relation using trigonometric principles. In Figure 4.3, ground point A is projected at image location a' and ground point B is projected at image location b' on the film. From such geometry, the film's four corners, a', b', c', d', cover an area on the ground represented by the square ABCD. Such relations not only enable us to compute the ground coverage of a photograph (image) but also enable us to compute the scale of such a photograph or image.

The scale of an image is the ratio of the distance on the image to the corresponding distance on the ground. In Figure 4.4, the distance on the ground AB will be projected on the image on line ab; therefore, the image scale can be computed using the following formula:

Equation 1: scale = distance ab / distance AB

Analyzing the two triangles (the small triangle with base ab and the large triangle with base AB) of Figure 4.4, one can also conclude, using the similarity of triangles principle, that the scale is also equal to:

Equation 2: scale = lens focal length (f) / flying height (H)

Scale is expressed either as a unitless ratio such as 1/12,000 (or 1:12,000) or in pronounced units such as 1 in. = 1,000 ft (or 1" = 1,000').

Image scale: see text below for more information
Figure 4.4 Image Scale
Source: Dr. Qassim Abdullah © Penn State University is licensed under CC BY-NC-SA 4.0.

Examples of Scale Computations

The following two examples walk you step by step through the process of computing scale for imagery produced by a film-based camera and by a digital camera. With digital cameras, scale does not define image quality as it does with film-based cameras; instead, we use the Ground Sampling Distance (GSD) to describe the resolution quality of a digital image, while for film-based cameras we use the film scale.

Scale from Film Camera

Aerial photographs were acquired from an altitude of 6,000 ft AMT (Above Mean Terrain) with a film-based aerial camera with a lens focal length of 6 inches. Determine the scale of the resulting photography.

Solution:

From Figure 4.4 and equations 1 & 2,

Scale = lens focal length (f) / flying height (H) = distance ab / distance AB

Therefore,

Scale = 6 in. / (6,000 ft × 12 in./ft) = 6 / 72,000 = 1 / 12,000

OR

Scale = 1:12,000 or 1" = 1,000'

Scale from Digital Camera

Scale is meaningless in digital mapping products, as the scale concept was created to represent measured distances on old-day maps plotted on paper. However, people still use scale, and it will take time before the new generation of mappers embraces the digital representation of the new geospatial products.

Digital camera manufacturers provide information on the sensor used in their cameras. Some express it as, say, 16 megapixels, which could be a square array of 4,000 × 4,000 pixels or a rectangle with any width-to-height ratio, such as 8,000 × 2,000 pixels (a ratio of 4). Some camera manufacturers provide the sensor array size in pixels and in millimeters, and some provide a combination of the number of pixels and the sensor size in inches, leaving you wondering about the physical size of the CCD; see Figure 4.5.

Figure 4.6 illustrates camera information that requires you to dig deep into the provided data to obtain what you want. From Figure 4.6, which represents the information provided for the multi-spectral camera on board the DJI Phantom 4 agricultural UAS, you can derive the sensor dimensions indirectly: the CCD (pixel) size of 3 um is embedded in the focal length information. The sensor dimensions in pixels were not provided directly, so you need to figure them out from the two values given for the optical center. The optical center, the origin of the image coordinates at (0,0), is usually located in the middle of the array; therefore, the total width of the array is 800 pixels × 2 = 1,600 pixels, while the sensor height is 650 pixels × 2 = 1,300 pixels. Knowing the number of pixels in the width direction (1,600) and the pixel size of 3 micrometers, the sensor width is 1,600 × 0.003 = 4.8 mm; similarly, the sensor height is 1,300 × 0.003 = 3.9 mm.
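
To see how little arithmetic is involved, here is a minimal Python sketch of the same derivation. The function name and structure are illustrative (not from any camera SDK); the input values are those from Figure 4.6:

```python
def sensor_dimensions(center_x_px, center_y_px, pixel_um):
    """Derive sensor size from optical-center coordinates and pixel pitch.

    Assumes the optical center sits in the middle of the array, so the
    full array is twice the center coordinate in each direction.
    """
    width_px = 2 * center_x_px
    height_px = 2 * center_y_px
    width_mm = width_px * pixel_um / 1000.0   # um -> mm
    height_mm = height_px * pixel_um / 1000.0
    return width_px, height_px, width_mm, height_mm

# Values from Figure 4.6 (DJI Phantom 4 multi-spectral camera)
print(sensor_dimensions(800, 650, 3.0))  # -> (1600, 1300, 4.8, 3.9)
```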

The following is an example of calculating the scale for digital imagery acquired using a digital camera:

Aerial imagery was acquired with a digital aerial camera with a lens focal length of 100 mm and a CCD size of 0.010 mm (or 10 microns). The resulting imagery had a ground resolution of 30 cm (1 ft). Determine the scale of the resulting imagery.

Solution:

From Figure 4.4 and equation 1, assume that the distance ab represents the physical size of one pixel or CCD, which is 0.010 mm, and the distance AB is the ground coverage of the same pixel, or 30 cm.

Scale = distance ab / distance AB

Therefore,

Scale = 0.010 mm / (30 cm × 10 mm/cm) = 0.010 / 300 = 1 / (300 / 0.010) = 1 / 30,000

OR

Scale = 1:30,000 or 1"=2,500'

Practice Scale Computation Example:

Aerial imagery was acquired with a digital aerial camera with lens focal length of 50 mm and CCD size of 0.020 mm (or 20 microns). The resulting imagery had a ground resolution of 60 cm (2 ft). Determine the scale of the resulting imagery.

Solution:

Scale = 0.020 mm / (60 cm × 10 mm/cm) = 0.020 / 600 = 1 / 30,000

Scale = 1:30,000 or 1"=2,500'
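
Both digital-camera examples reduce to one division once the units match. A short Python sketch of that computation (the function name is illustrative):

```python
def scale_denominator(ccd_mm, gsd_cm):
    """Scale 1:N for a digital image: N = ground pixel size / CCD size.

    The GSD is converted from centimeters to millimeters so both
    distances share the same unit, as in Equations 1 and 2.
    """
    return (gsd_cm * 10.0) / ccd_mm

print(scale_denominator(0.010, 30))  # 30000.0 -> scale 1:30,000
print(scale_denominator(0.020, 60))  # 30000.0 -> scale 1:30,000
```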

Figure 4.5 Camera information for DJI Mavic 2 Pro
Source: DJI

Figure 4.6 Camera information for DJI multi-spectral camera
Data for the table shown in Figure 4.6:

Parameter                    Value      Type   Unit   Comment
Calibrated Focal Length      1919.3333  float  pixel  5.74 [mm] / 3.0 [um/pixel] = 1913.333...
Calibrated Optical Center X  800        float  pixel  X-axis coordinate of the designed position of the optical center
Calibrated Optical Center Y  650        float  pixel  Y-axis coordinate of the designed position of the optical center
Source: Dr. Qassim Abdullah © Penn State University is licensed under CC BY-NC-SA 4.0.

Imagery Overlap

Imagery acquired for photogrammetric processing is flown with two types of overlap: Forward Lap and Side Lap. The following two subsections will describe each type of imagery overlap.

Forward Lap

Forward lap, which is also called end lap, is a term used in photogrammetry to describe the amount of image overlap intentionally introduced between successive photos along a flight line (see Figure 4.7). Figure 4.7 illustrates an aircraft equipped with a mapping aerial camera taking two overlapping photographs. The centers of the two photographs are separated in the air by a distance B, also called the air base. Each photograph in Figure 4.7 covers a distance on the ground equal to G. The overlapping coverage of the two photographs on the ground is what we call forward lap.

This type of overlap is used to form stereo-pairs for stereo viewing and processing. The forward lap is measured as a percentage of the total image coverage. The typical value of forward lap for photogrammetric work is 60%. Because of the light weight of a UAS, we expect substantial aerodynamic disturbance and therefore substantial rotations of the camera (i.e., crab); therefore, I recommend a forward lap of at least 70%.

Imagery forward lap: see text below for more information.
Figure 4.7 Imagery forward lap
Source: Elements of Photogrammetry with application in GIS, 4th edition, 2014 McGraw Hill

Side Lap

Side lap is a term used in photogrammetry to describe the amount of overlap between images from adjacent flight lines (see Figure 4.8). Figure 4.8 illustrates an aircraft taking two overlapping photographs from two adjacent flight lines. The distance in the air between the two flight lines (W) is called line spacing.

This type of overlap is needed to make sure that there are no gaps in the coverage. The side lap is measured as a percentage of the total image coverage. The typical value of side lap for photogrammetric work is 30%. However, because of the light weight of a UAS, we expect substantial aerodynamic disturbance and therefore substantial rotations of the camera (i.e., crab); therefore, I recommend using at least 40% side lap.

Imagery side lap: see text for more information
Figure 4.8 Imagery Side Lap
Source: Elements of Photogrammetry with application in GIS, 4th edition, 2014 McGraw Hill

Image Ground Coverage

Ground coverage of an image is the area on the ground (the square ABCD of Figure 4.3) covered by the four corners a'b'c'd' of the photograph in Figure 4.3. Ground coverage of a photograph is determined by the camera's internal geometry (focal length and the size of the CCD array) and the flying altitude above the ground elevation.

Example on Image Ground Coverage:

A digital camera has an array size of 12,000 pixels by 6,000 pixels (Figure 4.9). If the physical CCD size is 0.010 mm (10 um), how much area in acres will each image cover on the ground if the resulting ground resolution (GSD) of a pixel is 1 foot?

CCD Array: 12,000 pixels by 6,000 pixels
Figure 4.9 CCD Array
Source: Dr. Qassim Abdullah © Penn State University is licensed under CC BY-NC-SA 4.0.

Solution:

Ground coverage across the width (W) of the array = 12,000 pixels × 1 ft/pixel = 12,000 ft

Ground coverage across the height (L) of the array = 6,000 pixels × 1 ft/pixel = 6,000 ft

Covered area per image = W × L = 12,000 ft × 6,000 ft = 72,000,000 ft² = 72,000,000 / 43,560 = 1,652.89 acres
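
The same computation in a short Python sketch (the function name is illustrative; the 43,560 ft² per acre constant is standard):

```python
SQFT_PER_ACRE = 43_560

def image_ground_coverage_acres(width_px, height_px, gsd_ft):
    """Area covered by one frame, in acres, given array size and GSD."""
    w_ft = width_px * gsd_ft   # ground coverage across the array width
    l_ft = height_px * gsd_ft  # ground coverage across the array height
    return w_ft * l_ft / SQFT_PER_ACRE

print(image_ground_coverage_acres(12_000, 6_000, 1.0))  # ~1652.89 acres
```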

To Read

  1. Chapters 6 and 18 of the textbook Elements of Photogrammetry with Applications in GIS, 4th edition

Designing a Flight Route

In this section, we start the practical work for flight planning an imagery mission. By the end of this section, you should be able to develop a flight plan for an aerial imagery mission. Successful execution of any photogrammetric project requires thorough planning prior to the execution of any activity in the project.

The first step in the design is to decide on the scale of imagery or its resolution and the required accuracy. Once those two requirements are known, the following processes follow:

  1. planning the aerial photography (developing the flight plan);
  2. planning the ground controls;
  3. selecting software, instruments, and procedures necessary to produce the final products;
  4. cost estimation and delivery schedule.

For the flight plan, the planner needs to know the following information, some of which he or she ends up calculating:

  1. focal length of the camera lens;
  2. flying height above a stated datum or photograph scale;
  3. size of the CCD;
  4. size of CCD array (how many pixels);
  5. size and shape of the area to be photographed;
  6. the amount of end lap and side lap;
  7. scale of flight map;
  8. ground speed of aircraft;
  9. other quantities as needed.

Geometry of Photogrammetric Block

Figure 4.8 shows three overlapping squares (images) with light rays entering the camera at the lens focal point. Successive overlapping images along a flight line form what we usually call a “strip” or “flight line”; therefore, a photogrammetric strip (Figure 4.8) is formed from multiple overlapping images along a flight line, while a photogrammetric block (Figure 4.9) consists of multiple overlapping strips (or flight lines).

Geometry of Photogrammetric Strip
Figure 4.8 Geometry of photogrammetric strip
Source: Dr. Qassim Abdullah © Penn State University is licensed under CC BY-NC-SA 4.0.
photogrammetric block consists of multiple overlapping strips
Figure 4.9 Geometry of photogrammetric block with two strips
Source: Dr. Qassim Abdullah © Penn State University is licensed under CC BY-NC-SA 4.0.

Flight Plan Design and Layout

Once we compute the ground coverage of the image, as discussed in the "Geometry of Vertical Image" section, we can compute the number of flight lines, the number of images, the aircraft speed, the flying altitude, etc., and draw the flight lines on the project map (Figure 4.10).

example of a project map
Figure 4.10 The project map
Source: Dr. Qassim Abdullah © Penn State University is licensed under CC BY-NC-SA 4.0.

Before we start the computations of the flight lines and image numbers, I would like you to understand the following helpful hints:

  • For a rectangular project area, always lay out your flight lines across the smallest dimension of the project area, i.e., fly parallel to the longest dimension. This results in fewer flight lines and therefore fewer turns between flight lines (Figure 4.11). In Figure 4.11, the red lines with arrowheads represent flight lines or strips, while the black dashed lines represent the project boundary.

    example of correct flight lines drawn
    Figure 4.11 Correct flight lines orientation
    Source: Dr. Qassim Abdullah © Penn State University is licensed under CC BY-NC-SA 4.0.
  • If you have a digital camera with a rectangular CCD array, always orient the largest dimension of the array perpendicular to the flight direction (Figure 4.12). In Figure 4.12, the blue rectangles represent images taken by a camera with a rectangular CCD array; the wider dimension of the array is configured to be perpendicular to the flight direction (the east-west direction in this figure).

    see text above
    Figure 4.12 Correct camera orientation
    Source: Dr. Qassim Abdullah © Penn State University is licensed under CC BY-NC-SA 4.0.

Flight Lines Computations

see text for more details
Figure 4.13 Flight line layouts
Source: Dr. Qassim Abdullah © Penn State University is licensed under CC BY-NC-SA 4.0.

Now, let us figure out how many flight lines we need for the project area illustrated in Figure 4.13. The figure shows a rectangular project boundary (black dashed lines) with length equal to LENGTH and width equal to WIDTH, designed to be flown with 6 flight lines (red lines with arrowheads). To figure out the number of flight lines needed to cover the project area, we go through the following computations:

  1. Compute the coverage on the ground of one image (along the width of the camera CCD array (or W)) as we discussed in section 4.3.
  2. Compute the flight line spacing as follows:
    Line spacing or distance between flight lines (SP) = image coverage (W) × ((100 − amount of side lap) / 100).
  3. Number of flight lines (NFL) = (WIDTH / SP) + 1.
  4. Always round up the number of flight lines, i.e., 6.2 becomes 7.
  5. Start the first flight line at the east or west boundary of the project.

In Figure 4.13, you may have noticed that the flight direction alternates between north-to-south and south-to-north from one flight line to the adjacent one. Flying the project in this manner increases the aircraft's fuel efficiency, so the aircraft can stay in the air longer.

Number of Image Computations

Once we determine the number of flight lines, we need to figure out how many images will cover the project area. To do so, we go through the following computations (the code sketch after Figure 4.14 combines these steps with the flight line computations):

  1. Compute the coverage on the ground of one image (along the height of the camera CCD array (or L)) as we discussed in section 4.3.
  2. Compute the distance between two consecutive images, or what we call the “airbase” (B), as follows: Airbase or distance between two consecutive images (B) = image coverage (L) × ((100 − amount of end lap) / 100).
  3. Number of images per flight line (NIM) = (LENGTH / B) + 1.
  4. Always round up the number of images, i.e., 20.2 becomes 21.
  5. Add two images at the beginning of the flight line before entering the project area and two images upon exiting the project area (Figure 4.14); these are needed to ensure continuous stereo coverage. That makes a total of 4 additional images per flight line, so the number of images per flight line = (LENGTH / B) + 1 + 4.
  6. Total number of images for the project = NFL x NIM.

Figure 4.14 is the same as Figure 4.13 with added blue circles that represent the photo centers of the designed images. The circles are shown for only one flight line; I will leave it to your imagination to fill all the flight lines with such circles.

Refer to text above for details
Figure 4.14 Imagery layout
Source: Dr. Qassim Abdullah © Penn State University is licensed under CC BY-NC-SA 4.0.
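
The flight line and image-count procedures above can be scripted directly. A minimal Python sketch, assuming the image ground coverages W and L are already in the same linear unit as the project WIDTH and LENGTH, and applying the round-up and four-extra-frames rules from the lists above (the function name is illustrative):

```python
import math

def flight_plan(width, length, cov_w, cov_l, side_lap_pct, end_lap_pct):
    """Number of flight lines and images for a rectangular project.

    width, length : project dimensions (flight lines run along `length`)
    cov_w, cov_l  : ground coverage of one image across / along track
    """
    sp = cov_w * (100 - side_lap_pct) / 100   # line spacing (SP)
    b = cov_l * (100 - end_lap_pct) / 100     # airbase (B)
    nfl = math.ceil(width / sp + 1)           # flight lines, rounded up
    nim = math.ceil(length / b + 1) + 4       # +4 entry/exit frames
    return nfl, nim, nfl * nim

# Values from the worked example later in this section (units in feet):
print(flight_plan(13 * 5280, 20 * 5280, 12_000, 7_000, 30, 60))
# -> (10, 43, 430)
```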

Flight Altitude Computations

Flying altitude is the altitude above a certain datum at which the UAS flies during data acquisition. The two main datums used are the average (mean) ground elevation and mean sea level. Figure 4.15 illustrates the relationship between the aircraft and the datum and how the two systems relate to each other. In Figure 4.15, the aircraft is flying at 3,000 feet above the average (mean) ground elevation, which is represented by the blue horizontal line and is itself situated at 600 feet above mean sea level. Therefore, the flying altitude can be expressed in two ways:

  1. if we want to use the terrain as a reference, we will express it as flying altitude = 3,000 feet above mean terrain, or AMT;
  2. if we use the sea level, we will express it as Flying Altitude = 3,600 feet above mean sea level (ASL or AMSL).
Figure illustrating flying altitude above datum
Figure 4.15 Flying Altitude
Source: Dr. Qassim Abdullah © Penn State University is licensed under CC BY-NC-SA 4.0.

We now need to determine at what altitude the project should be flown. To do so, we go back to the camera's internal geometry and scale, as we discussed in section 4.3. Assume that the imagery is to be acquired with a camera with a lens focal length of f and with a CCD size of b. We also know in advance what the imagery ground resolution, or GSD, should be. The flying altitude will be computed as follows:

Scale = lens focal length (f) / flying height (H) = distance ab / distance AB

OR

lens focal length (f) / flying height (H) = b / GSD

From which, H can be determined:

H = (f × GSD) / b

Here, we need to make sure that f and b are expressed in the same linear unit, in which case the resulting altitude will be in the same linear unit as the GSD. If we assume the following values:

f = 50 mm

b = 0.010 mm (or 10 um)

GSD = 0.30 meter

The flying altitude will be:

H = (50 mm × 0.30 meter) / 0.010 mm = 1,500 meters

The flying height is 1,500 meters above ground level.
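
The same relation as a quick Python check (f and b must be in the same unit, here millimeters; the result carries the GSD's unit; function name is illustrative):

```python
def flying_height(focal_mm, ccd_mm, gsd):
    """Flying height above ground: H = f * GSD / b (f and b in mm)."""
    return focal_mm * gsd / ccd_mm

print(flying_height(50, 0.010, 0.30))  # 1500.0 m above ground
print(flying_height(100, 0.010, 1.0))  # 10000.0 ft (worked example below)
```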

Aircraft Speed and Image Collection

Controlling the aircraft speed is important for maintaining the forward (end) lap expected for the imagery. Fly the aircraft too fast, and you end up with less forward lap than anticipated; fly it too slowly, and you get too much overlap between successive images. Both situations are harmful to the anticipated products and/or the project budget: too little overlap reduces the capability to use the imagery for stereo viewing and processing, while too much overlap results in many unnecessary images that may affect the project budget negatively. In the previous subsections, we computed the airbase, or the distance between two successive images along one flight line, that satisfies the amount of end lap necessary for the project. Computing the time between exposures is a simple matter once the airbase is determined and the aircraft speed is decided upon.

Computing the time between two consecutive images

When the camera exposes an image, we need the aircraft to move a distance equal to the airbase before it exposes the next image. If we assume the aircraft speed is v, then the time t between two consecutive images is calculated from the following equation:

Time (t) = airbase (B) / aircraft speed (v)

For example, if we computed the airbase to be 1,000 ft and we used aircraft with a speed of 150 knots, the time between exposures is equal to:

Time (t) = 1,000 ft / 150 knots

= 1,000 ft / (150 knots × 1.15 miles/hour per knot)

= 1,000 ft / 172.5 miles/hour

= 1,000 ft / (172.5 miles/hour × 5,280 ft/mile)

= 0.0010979 hours

= 3.95 sec
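
The unit-by-unit conversion (knots to statute mph to feet per second) can be wrapped in a small function. A sketch using the same 1 knot ≈ 1.15 mph approximation as the text (function name is illustrative):

```python
def time_between_exposures(airbase_ft, speed_knots):
    """Seconds between exposures for a given airbase and ground speed."""
    mph = speed_knots * 1.15          # knots -> statute miles per hour
    ft_per_sec = mph * 5280 / 3600    # mph -> feet per second
    return airbase_ft / ft_per_sec

print(time_between_exposures(1_000, 150))  # ~3.95 s
print(time_between_exposures(2_800, 150))  # ~11.07 s (worked example below)
```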

Waypoints

In the navigation world, waypoints are defined as “sets of coordinates that identify a point in physical space.” Close to this definition is the one used by mapping professionals, and that involves using sets of coordinates to locate the beginning point and the end point of each flight line. Waypoints are important for the pilot and camera operator to execute the flight plan. Waypoints in manned aircraft imagery acquisition are usually located a couple of miles outside the project boundary on both sides of the flight line (i.e., a couple of miles before approaching the project area and a couple of miles after exiting the project area, or for UAS operations, it would be a couple of hundred meters before approaching the project area and a couple of hundred meters after exiting the project area). The pilot uses waypoints to align the aircraft to the flight line before entering the project area. In UAS operation, a "waypoint" marks the beginning or the end of a flight line where the UAS either positions itself before starting to take pictures or ends taking pictures on a certain flight line.

Example of Flight Plan Design and Layout

A project area is 20 miles long in the east-west direction and 13 miles in the north-south direction. The client asked for natural color (3 bands) vertical digital aerial imagery with a pixel resolution or GSD of 1 ft, using a frame-based digital camera with a rectangular CCD array of 12,000 pixels across the flight direction (W) and 7,000 pixels along the flight direction (L) and a lens focal length of 100 mm. The array contains square CCDs with a dimension of 10 microns. The end lap and side lap are to be 60% and 30%, respectively. The imagery should be delivered in TIFF file format with 8 bits (1 byte) per band, or 24 bits per pixel for the three bands (RGB). Calculate:

  1. the number of flight lines necessary to cover the project area if the flight direction was parallel to the east-west boundary of the project. Assume that the first flight line falls right on the southern boundary of the project;
  2. the total number of digital photos (frames);
  3. the ground coverage of each image in acres;
  4. the storage requirements in gigabytes aboard the aircraft required for storing the imagery;
  5. the flying altitude;
  6. the time between two consecutive images if the aircraft speed was 150 knots.

Solution:

Looking into the project size (20 × 13 miles) and the one-foot GSD requirements, a mission planner should realize right away that the image acquisition task for such a project size and specifications can only be achieved using a manned aircraft.

The camera should be oriented so the longer dimension of the CCD array is perpendicular to the flight direction (see Figure 4.12).

  1. Number of flight lines necessary to cover the project area:

    Line spacing or distance between flight lines (SP)

    = image coverage (W) × ((100 − amount of side lap) / 100) = (12,000 pixels × 1 ft/pixel) × ((100 − 30) / 100) = 12,000 ft × 0.70 = 8,400 ft

    Number of flight lines (NFL)

    = (project WIDTH / SP) + 1 (with rounding up)

    = ((13 miles × 5,280 ft/mile) / 8,400 ft) + 1 = 8.171 + 1 = 9.171 → 10 flight lines

  2. Total number of digital photos (frames):

    Airbase or distance between two consecutive images (B)

    = image coverage (L) × ((100 − amount of end lap) / 100) = (7,000 pixels × 1 ft/pixel) × ((100 − 60) / 100) = 7,000 ft × 0.40 = 2,800 ft

    Number of images per flight line

    = (project LENGTH / B) + 1 + 4 = ((20 miles × 5,280 ft/mile) / 2,800 ft) + 1 + 4 = (105,600 ft / 2,800 ft) + 1 + 4 = 37.714 + 1 + 4 = 42.714 → 43

    Total number of images for the project

    = 10 flight lines × 43 images/flight line = 430 images (frames)

  3. Ground coverage of each image in acres:

    Ground coverage of each image

    = W × L = (12,000 pixels × 1 ft) × (7,000 pixels × 1 ft) = 84,000,000 ft² = 84,000,000 ft² / (43,560 ft²/acre) = 1,928.37 acres

  4. The storage requirement for the RGB (color) images:

    Storage requirement for 1 band

    = W × L × 1 byte/pixel = 12,000 pixels × 7,000 pixels × 1 byte/pixel = 84,000,000 bytes = 84 megabytes (MB)

    Each pixel needs one byte per band; therefore, each of the three (R, G, B) bands needs to be accounted for.

    Total Storage requirement

    = number of images × number of bands × 84 MB/image = 430 images × 3 × 84 MB/image = 108,360 MB = 108.36 gigabytes (GB)

  5. Flying Altitude (H):

    H = (f × GSD) / b = (100 mm × 1 ft) / 0.010 mm = 10,000 ft above mean terrain
  6. Time between acquisition images:

    Time (t) = 2,800 ft / 150 knots = 2,800 ft / (150 knots × 1.15 miles/hour per knot) = 2,800 ft / 172.5 miles/hour = 2,800 ft / (172.5 miles/hour × 5,280 ft/mile) = 0.0030742 hours = 11.067 seconds
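
As a check on items 2 and 4, the storage arithmetic can also be scripted. A minimal sketch, assuming one byte per pixel per band as stated in the problem (function name is illustrative):

```python
def storage_gb(n_images, width_px, height_px, n_bands, bytes_per_px=1):
    """Raw storage for a block of frames (1 MB = 10**6 bytes, as in the text)."""
    band_bytes = width_px * height_px * bytes_per_px  # one band of one frame
    return n_images * n_bands * band_bytes / 1e9

print(storage_gb(430, 12_000, 7_000, 3))  # ~108.36 GB
```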

Cost estimation and delivery schedule

Past experience with projects of a similar nature is essential in estimating cost and developing a delivery schedule. In estimating cost, the following main categories of effort and materials are considered:

  • labor
  • materials
  • overhead
  • profit

Once quantities are estimated as illustrated in the above steps, hours for each phase are established. Depending on the project deliverables requirements, the following labor items are considered when estimating costs:

  • aerial photography
  • ground control
  • aerial triangulation
  • stereo-plotting (# of models = # photos -1)
  • map editing
  • ortho production
  • LiDAR data cleaning

The table in Figure 4.16 provides an idea of the going market rates for geospatial products, which can be used as guidelines when pricing a mapping project using manned aircraft operation with a metric digital camera and lidar. The industry needs to come up with a comparable table based on unmanned operations. There is no good pricing model established for UAS operations, as standards and product quality vary widely depending on who offers such services and whether they fall strictly under the "Professional Services" designation.

Figure 4.16: Examples of the going market rates for geospatial data prices

Product   GSD (ft)   Price per sq. mile   Comments
Ortho     0.5        $150-$200            Based on large projects
Ortho     1.0        $80-$100             Based on large projects
Ortho     2.0        $30-$60              Based on large projects
Lidar     3.2        $100-$500            Depends on accuracy, terrain, and required details

Delivery Schedule

After the project hours are estimated, each phase of the project may be scheduled based on the following:

  • number of instruments or workstations available
  • number of trained personnel available
  • amount of other work in progress and its status
  • urgency of the project to the client

The schedule will also consider the constraints on the window of opportunity due to weather conditions. Figure 4.17 illustrates the number of days, per state/region, available annually for aerial imaging campaigns. Areas like the state of Maine have only 30 cloudless days per year that are suitable for aerial imaging activities.

Color-coded U.S. map to show number of cloudless days by region.
Figure 4.17 Annual number of cloudless days by region
Source: Dr. Qassim Abdullah © Penn State University is licensed under CC BY-NC-SA 4.0.

To Read

Chapter 18 of Elements of Photogrammetry with Applications in GIS, 4th edition

To Do

For practice, develop two flight plans for your project, one by using manual computations and formulas as described in this section and one by using "Mission Planner" software. Compare the two.

Sensors Calibration and Boresighting

In this section, we will discuss the topics of camera calibration and sensor boresighting.

Camera Calibration

Most existing UASs dedicated to photogrammetric imaging carry less expensive cameras on board, which we call nonmetric cameras. Nonmetric cameras have variable interior geometry (i.e., unknown focal length) and relatively large lens distortion. In order to conduct photogrammetric mapping from the imagery of such cameras, we need to determine all interior camera parameters, such as the focal length and the coordinates of the principal point, to a known accuracy, and to model the lens distortion.

The principal point of a camera is the point where the lines from opposite corners of the CCD array, or the lines connecting the opposite mid-way points of the CCD array sides, intersect (Figure 4.18). However, when the lens is fitted on the camera body, it is impossible to perfectly align the center of the lens with this principal point, resulting in the offset distances xp and yp illustrated in Figure 4.18. Those two values are determined in the camera calibration process and need to be represented in the photogrammetric mathematical model during computations.

Mapping film camera calibration was usually performed in special laboratories dedicated to this task, such as the USGS calibration lab for film cameras, which was shut down permanently on April 1, 2017 after decades of service to the mapping community. However, with the advancements in the computational analytical models of photogrammetry, we can determine the camera parameters analytically through a process called camera self-calibration from within the aerial triangulation process. Most UAS data processing software, such as the one used in this course, supports camera self-calibration.

Internal Camera Geometry - see text above for details
Figure 4.18 Internal camera geometry
Source: Dr. Qassim Abdullah

Sensors Boresighting

The term “boresighting” usually describes the process of determining the differences between the rotational axes of the sensor (such as a camera) and the rotational axes of the Inertial Measurement Unit (IMU), which is usually bolted to the camera body. The IMU is a device containing gyros and accelerometers, used in photogrammetry and lidar to sense and measure the sensor's rotations and accelerations. In photogrammetry, where the IMU is mounted on an imaging camera, the boresight parameters are determined by flying over a well-controlled site (a site with accurate ground controls) and then conducting aerial triangulation on the resulting imagery.

The aerial triangulation process computes the six exterior orientation parameters (X, Y, Z, omega, phi, kappa), while the IMU measures the three orientation angles roll, pitch, and heading (or yaw). By comparing the two sets of camera orientation angles, as computed by the aerial triangulation and as measured by the IMU, one can establish the differences in the rotations of the camera in reference to the inertial system (from the IMU). These differences (or offset values) are then used to correct all future IMU-derived orientations, converting the rotation angles from the inertial to the photogrammetric system so they can be utilized in the mapping process.
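
Conceptually, the boresight is the small, constant rotation between the IMU axes and the camera axes. Here is a minimal sketch of the idea in Python, under the simplifying assumption that both attitudes are given as 3 × 3 rotation matrices mapping body axes into the same mapping frame; a production workflow would average over many exposures and handle the navigation frames rigorously:

```python
import numpy as np

def boresight(R_cam, R_imu):
    """Constant offset rotation between IMU and camera attitudes.

    R_cam : camera rotation from aerial triangulation (body -> map)
    R_imu : rotation measured by the IMU for the same exposure
    The residual R_cam.T @ R_imu is the fixed boresight rotation.
    """
    return R_cam.T @ R_imu

def corrected_camera_attitude(R_imu_new, R_bs):
    """Predict the camera attitude from a new IMU measurement.

    The inverse of a rotation matrix is its transpose, so
    R_cam = R_imu_new @ R_bs.T recovers the photogrammetric attitude.
    """
    return R_imu_new @ R_bs.T
```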

A similar process is followed to determine the offset values for the IMU used in a lidar system. For lidar offset determination, no aerial triangulation is used, as lidar follows different processing steps. To determine the boresight offset values in lidar, the system has to be flown in a certain configuration over a well-controlled site. Figure 4.19 represents an ideal design for lidar boresight determination: two lines flown in the east-west direction (one flight line flown due east and the other flown in the opposite direction, due west) at a certain altitude, and two flight lines flown in the perpendicular (north-south) direction at an altitude that is nearly double that of the east-west flight lines.

LiDAR boresight determination flight design - See text above for details
Figure 4.19 Lidar boresight determination flight design
Source: Qassim Abdullah

To Read

  1. Sections 3-9, 3-10, 3-11, 3-12 of Chapter 3 and sections 11-12 of Chapter 12 of Elements of Photogrammetry with Applications in GIS, 4th edition
  2. Chapter 3 of the textbook: Fundamentals of capturing and processing drone imagery and data
  3. In-Situ Camera and Boresight Calibration with Lidar Data
  4. USGS/OSU Progress with Digital Camera in Situ Calibration Methods

Basic Considerations for Selecting UAS

In this section, you will learn the requirements for selecting a UAS. Selecting a UAS depends on many factors that are closely related to its intended use. Those use requirements will determine the size and weight of the UAS and its endurance and range of flight, among other factors. In the following sections, we will briefly discuss each of these factors.

Size and weight

Size and weight play a great role in determining payload size and weight and in limiting a UAS's range and endurance. Large UASs can carry a larger and heavier payload, including the power source. The larger the UAS, the more fuel or battery power it can carry on board; and the more power it carries, the better its range and endurance.

Range and Endurance

The range of a UAS is an important performance characteristic. It depends on a number of basic aircraft parameters and on the weight of the payload. Maximum UAS range and endurance are achieved with high propeller efficiency, low fuel consumption, and large onboard fuel (or battery) capacity. A project that requires long hours in the air will need a larger UAS. However, most UASs employed for geospatial mapping purposes nowadays have an endurance of about 90 minutes and a maximum range of around 50 miles.

Stability

In physical mechanics, stability refers to the tendency of an object to stay in its present state of rest or motion despite small disturbances. An aircraft must be stable in order to remain in flight. The forces acting on the aircraft, such as thrust, weight, and aerodynamic forces, have to act in certain directions in order to restore the aircraft to its original equilibrium position after it has been disturbed by wind or other forces. An aircraft has three angular degrees of freedom: rotation around the X-axis (roll), rotation around the Y-axis (pitch), and rotation around the vertical axis (yaw). The aircraft has to remain stable around each of these axes. The most critical rotation is pitch, and stability about it is called longitudinal stability. Some instability can be tolerated in roll and yaw.

Stability is essential for aerial data such as imagery acquisition in order to achieve gap-free imaging results. The use of a gyro-stabilized mount for the camera or the imaging sensor is preferred for mapping missions, as it results in uniform coverage free of gaps.

Cost

UAS cost plays a great role in the acquisition decision. The price of a large UAS sometimes exceeds the price of a typical manned aircraft used for aerial imaging, such as various models of Cessnas. However, the cost of a UAS is justified by the type of jobs expected of it. Small UAS-based aerial imaging jobs are only justified through the use of a small UAS costing under $100,000. It is worth mentioning here that, due to strict FAA regulations on flying UAS, there are no large jobs for UAS at the current time within the geospatial mapping community; no one can commercially utilize UASs for money-making projects, so only smaller UASs are utilized by the mapping community. Once the FAA eases the regulations, we should expect larger demand for medium and large UASs.

Payload Capacity

The maximum weight that a UAS can carry on board also plays an important role in UAS selection. Different applications require different sensors and therefore different payload capacities. Current UASs used by the mapping community can carry payloads varying in weight between a few pounds and 100 lbs. The payload capacity directly affects the cost of the UAS, as it affects the range and endurance of the UAS. UASs with longer range and endurance cost more than those that fly a maximum distance of 35 miles for a period of 60 minutes.

To Read

Read the article "Five Things to Consider when Adopting Drones for Your Business" by Drone Analyst.

To Do

Practice with the use of Pix4D software to process the sample data.

UAS Market Survey

In this section, you will gain an understanding of the different brands and makers of UAVs, payload sensors, and processing software.

Market survey of the Air Vehicle (UAV)

Large UASs, used mainly for defense purposes, have been around for a long time and have sophisticated technologies built into them. Examples of manufacturers of such UASs are AAI Corporation, AeroVironment, Aurora Flight Sciences, BAE Systems, Boeing, Elbit Systems, General Atomics Aeronautical Systems, Inc., Israel Aerospace Industries, Northrop Grumman, Raytheon, Rotax, Sagem, Selex Galileo, and many others. Within the last decade, many startup companies started manufacturing low-cost UASs that are mainly used for civilian purposes. Examples of those manufacturers are Trimble, Altavian, Sensefly Ltd, American Aerospace Advisors, Prioria, Uconsystem, Idetec, and many more.

The following resources contain good information on existing systems and manufacturers:

  1. Unmanned Aerial System (UAS) Survey
  2. GIM International Volume 28 Spring 2014
  3. UAS Suppliers

Market survey of Payload Sensors

The sensors required for UASs utilized for mapping purposes are mainly limited to imaging cameras with a variety of spectral bands: visible (red, green, blue), near-infrared (NIR), and thermal infrared. The second resource listed in the previous section offers a list of manufacturers of sensors used for UAS payloads. There is only one lidar system developed mainly for UAS, the VUX-1 manufactured by Riegl, which was described in Lesson 2. The most obvious providers of digital cameras small enough to fit within UAS payloads (without endorsing any of them) are the following:

  1. Phase One, with their multiple models of aerial cameras;
  2. Imperx, with their latest models of Bobcat cameras;
  3. Nikon, with their multiple models of cameras;
  4. MicaSense, with their multiple models of multi-spectral cameras, or Parrot with their Sequoia camera.

Market survey of Processing Software 

For image-based mapping products generation, users will need efficient photogrammetric processing software. Such software should be capable of performing the following operations, among others:

  • organizing the input imagery, camera calibration reports, GPS-derived camera position, IMU-derived camera orientation angles, and ground controls data in a simple database;
  • having user-friendly graphical user interface (GUI);
  • having good data viewers (e.g., for orthos and DSMs);
  • handling tens of thousands of images per project in TIFF or JPEG formats;
  • image coverage verification through rapid data processing mode;
  • performing automatic aerial triangulation processing using simultaneous bundle block adjustment with viewing and manual editing capability;
  • accepting GPS-derived camera position and IMU-derived camera orientation;
  • camera self-calibration;
  • modeling GPS shift and drift;
  • producing quality control reports;
  • exporting exterior orientation parameters for photogrammetric work station;
  • performing ortho rectification;
  • automatic DSM generation through auto-correlation;
  • performing image mosaic and capability to edit mosaic lines;
  • exporting ortho tiles according to user defined layout in shape file format;
  • performing color balancing and radiometric enhancement;
  • distributed processing (parallel processing) using computing farm;
  • batch processing or scripting.

Among the most obvious data processing software that are optimized for UAS data processing in the market (without endorsing any of them) are the following:

  1. Agisoft Metashape
  2. Pix4DMapper
  3. Menci APS
  4. CORRELATOR3D™ by SimActive
  5. Trimble Inpho UASMaster

Each of these five software packages meets most of the capabilities listed above. However, some of them may be more suitable than others, depending on the situation and the nature of the project.

To Do

  1. Practice more with the use of Pix4D software to process the sample data. Produce an orthophoto and a DSM, and send me screenshots of the products.

Summary and Final Tasks

Summary

Congratulations! You have just finished Lesson 4, UAS Mission Planning and Control. I hope that you appreciate the importance of this lesson's material in relation to the Concept of Operation for any UAS. A UAS project based on poor planning means nothing but guaranteed failure and/or poor-quality derived products. The computations may seem complicated, but I have tried to walk you through the different steps in detail. However, if you feel overwhelmed by the design concepts, please do not hesitate to write to me.

Final Tasks

Activities
1. Study lesson 4 materials and the textbook chapters assigned to the lesson.
2. Complete the Lesson 4 Quiz.
3. Complete your discussions for the assignment on "SWOT Analysis."
4. Continue working on the "CONOP and Risk Assessment" report assignment.
5. Practice Mission Planner software.
6. Submit your Pix4D processing materials for exercise 1.
7. Attend the weekly call and the Mission Planner software training on Thursday evening at 8:00 pm ET.