Lesson 9: Civilian and Commercial Applications of the Unmanned Aerial System
Lesson 9 Introduction
Welcome to Lesson 9! In this lesson, you will become familiar with the different applications the UAS is used for. The list of commercial and civilian applications grows by the day, and it is difficult, if not impossible, to nail down a complete list. The low cost and easy deployment of the UAS have encouraged many people to use unmanned aircraft in place of manned aircraft for their activities. Users are discovering new applications every day; however, in this lesson we will cover only the most obvious ones. We will not cover military applications, but we will, for the purpose of this lesson, treat the security and surveillance uses of UAS as civilian/commercial applications, since some of these services are offered commercially. Much of the commercial and scientific use of UAS that concerns us is in the field of geospatial data acquisition for remote sensing activities. The term “geospatial data” refers to any dataset that is referenced spatially (i.e., geolocated or geo-referenced) with a known coordinate system and datum. In this lesson, I expect you to read chapter 6 of the textbook Introduction to Unmanned Aircraft Systems and several external readings I will point out in the lesson notes.
Lesson Objectives
At the successful completion of this lesson, you should be able to:
- recognize different applications of the UAS for civilian use;
- understand how the UAS data is used for different applications;
- compose a list of additional applications that can be served by UAS.
Lesson Readings
Course Textbooks
- Chapter 4 of the textbook: Introduction to Unmanned Aircraft Systems, 2nd edition
- Chapter 20 of the textbook: Elements of Photogrammetry with Applications in GIS, 4th edition
- Chapters 8 to 19 of the textbook: Fundamentals of capturing and processing drone imagery and data
Web Articles
- Gahran, A. “Fighting fire with data, spacecraft, drones"
Google Drive (Open Access)
- Chao, H., et al., "AggieAir: Towards Low-cost Cooperative Multispectral Remote Sensing Using Small Unmanned Aircraft Systems"
- Read the lecture slides on Digital Image Classification
Lesson Activities
- Study lesson 9 materials on CANVAS/Drupal and the textbook chapters assigned to the lesson
- Submit your Final Project Report and Presentation Slides
- Start your first post for the discussion on "The UAS and Ethics"
- Submit materials for exercise 3 - Digital Image Classification
- Attend the weekly call on Thursday evening at 8:00pm ET
- Watch the webinar: "Tech Talk: Applying Drones to Surveying and Engineering Projects Today."
- Watch the video: "Smart Drones for Large-Scale Surveying | LiDAR & AI Make Construction 10x Faster"
Digital Image Classification for Land Use Land Cover (LULC) Assessment - Part 1
1. Raster Images and Digital Imagery
Raster images, also known as digital images, consist of a matrix of individual pixels arranged in rows and columns. Each pixel contains a digital number (DN), which quantifies the intensity of electromagnetic energy detected by a remote sensing sensor at a specific location. This pixel-based structure is essential for remote sensing applications, as it enables detailed statistical analyses at the pixel level across various spectral bands. By leveraging raster data, analysts can examine and interpret patterns, trends, and characteristics within imagery, facilitating the extraction of valuable information for geographic information systems (GIS) and spatial analysis.
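The pixel-based structure described above can be sketched with NumPy. The band values below are hypothetical, chosen only to illustrate how DNs are stored and queried; any multispectral raster would follow the same (bands, rows, cols) layout.

```python
import numpy as np

# Hypothetical 2-band raster (e.g., red and near-infrared), 3 rows x 4 columns.
# Each entry is a digital number (DN) recording the intensity detected
# by the sensor at that pixel location.
red = np.array([[12, 15, 20, 22],
                [14, 18, 24, 30],
                [11, 16, 25, 28]])
nir = np.array([[40, 42, 60, 65],
                [41, 45, 70, 80],
                [39, 44, 72, 78]])
raster = np.stack([red, nir])           # shape: (bands, rows, cols)

dn = raster[0, 1, 2]                    # DN of band 0 at row 1, col 2 -> 24
pixel_vector = raster[:, 1, 2]          # spectral vector of that pixel: [24, 70]
band_means = raster.mean(axis=(1, 2))   # per-band mean DN across the scene
```

Statistics such as `band_means` computed over subsets of pixels are exactly the kind of per-band, pixel-level summaries that classification builds on.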

2. Digital Image Classification
Image classification involves analyzing each pixel within a raster image and assigning it to a specific land cover category, such as forest, water, agricultural fields, or urban areas. This procedure transforms the raw spectral data collected by remote sensing sensors into practical and interpretable datasets that can be integrated into geographic information systems (GIS) and used for spatial analysis. The overarching aim is to generate thematic maps or information layers that reveal land cover patterns, distribution, and changes, enabling informed decision-making in resource management, urban planning, environmental monitoring, and other geospatial applications.

Suggested additional readings on image classification: https://gisgeography.com/image-classification-techniques-remote-sensing/
The process utilizes one or more of the following recognition types:
- Spectral pattern recognition: decision rules are based on the spectral radiance characteristics of the scene.
- Spatial pattern recognition: decision rules are based on the geometric characteristics of the scene (i.e., shape, size, patterns).
- Temporal pattern recognition: uses time as an aid in feature identification.
- Object-oriented classification: involves the combined use of both spectral and spatial recognition.
3. Pattern Recognition in Classification
Image classification utilizes a variety of pattern recognition methods to accurately categorize land cover types. These methods include analyzing spectral information, such as pixel intensity values; examining spatial characteristics like shapes and textures within the image; evaluating temporal patterns by observing how pixel values change over time; and employing object-based strategies that assemble individual pixels into coherent, meaningful groups or objects. This multi-faceted approach enhances the ability to distinguish and classify diverse features present in digital imagery.
4. Spectral Signatures
Spectral signatures characterize the statistical properties of a land cover type by examining its response in multiple spectral bands. These signatures typically summarize features such as the average (mean) pixel values, the degree of spread (variance), and sometimes the relationships between bands (covariance). By capturing these patterns, spectral signatures provide a foundation for distinguishing different land cover categories within digital imagery, enabling accurate classification and analysis.
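A spectral signature of the kind described above can be computed directly from training pixels. The sample values below are hypothetical; the point is the statistics (per-band mean, variance, and band-to-band covariance), not the numbers.

```python
import numpy as np

# Hypothetical training pixels for one land cover class, in two bands.
# Rows = pixels, columns = bands (e.g., red, NIR).
samples = np.array([
    [20, 60],
    [22, 64],
    [19, 58],
    [21, 62],
], dtype=float)

mean_vec = samples.mean(axis=0)           # per-band mean DN
var_vec = samples.var(axis=0, ddof=1)     # per-band sample variance
cov_mat = np.cov(samples, rowvar=False)   # band-to-band covariance matrix
```

Together, `mean_vec` and `cov_mat` form the statistical signature a classifier such as Maximum Likelihood uses to score unknown pixels.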
5. Informational vs Spectral Classes
Informational classes are categories defined by the user based on specific interests or objectives, such as types of land cover or land use. In contrast, spectral classes are groups of pixels that have been clustered together solely based on their statistical properties in the image data, without regard to their real-world meaning. One of the primary difficulties in digital image classification is establishing a clear correspondence between these statistically determined spectral classes and the user-relevant informational classes, as the relationship between them is not always direct or obvious.
6. Spectral Variability
It is common for a single land cover category to exhibit several distinct spectral subclasses within a raster image. This diversity arises from factors such as varying angles of sunlight (illumination), differences in the density of vegetation cover, the presence of multiple species within the same category, and fluctuations in moisture levels. As a result, pixels representing the same informational class can display a wide range of spectral responses, making it more challenging to accurately assign them to the correct category during the classification process. Figure 2 illustrates an example of a hierarchy tree of spectral subclasses within an informational class.

7. Unsupervised Classification
Unsupervised classification operates by automatically sorting image pixels into distinct groups based on their shared statistical characteristics, without prior knowledge of land cover types (Figure 3). This process relies on clustering algorithms, such as ISODATA, which repeatedly analyze and adjust pixel groupings to improve the internal consistency of each cluster. After the algorithm has established these preliminary clusters, an analyst reviews the results and assigns meaningful land cover labels to each group, linking them to real-world categories. This approach is particularly useful when no reference data is available, but it requires careful interpretation to ensure accurate correspondence between statistical clusters and actual land cover classes.

Pros:
- No extensive prior knowledge of the region required
- Opportunities for human error are minimized
- Unique classes are recognized as distinct units
- Logistically less cumbersome
Cons:
- Natural groupings do not necessarily correspond nicely with the desired informational classes
- No control over the menu of classes and their specific identity
- Spectral properties of informational classes vary over time, and the relationships between informational and spectral classes change, making it difficult to compare unsupervised classes from one image/date to another
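To make the clustering idea concrete, here is a minimal k-means implementation, used as an illustrative stand-in for ISODATA-style unsupervised classification (ISODATA additionally splits and merges clusters, which is omitted here). The pixel values are hypothetical, chosen to form two well-separated spectral groups.

```python
import numpy as np

def kmeans(pixels, k, iters=20, seed=0):
    """Minimal k-means clustering of pixel spectral vectors."""
    rng = np.random.default_rng(seed)
    # Initialize cluster centers from randomly chosen pixels
    centers = pixels[rng.choice(len(pixels), k, replace=False)].astype(float)
    for _ in range(iters):
        # Assign each pixel to its nearest cluster center
        d = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Recompute each center as the mean of its assigned pixels
        for j in range(k):
            if np.any(labels == j):
                centers[j] = pixels[labels == j].mean(axis=0)
    return labels, centers

# Two obvious spectral groupings (e.g., water vs. vegetation in two bands)
pixels = np.array([[10, 12], [11, 13], [9, 11],
                   [80, 90], [82, 88], [79, 91]], dtype=float)
labels, centers = kmeans(pixels, k=2)
```

The returned `labels` are purely statistical spectral classes; as the cons above note, an analyst must still map them onto informational classes.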
8. Supervised Classification
Supervised classification involves utilizing labeled training data from areas of land cover that have been accurately identified on the image. From these reference sites, the classifier computes statistical descriptors—such as means and variances—for each class. These statistics serve as a model to evaluate and assign class membership to every unknown pixel in the image. Among the various supervised classification techniques, the Maximum Likelihood classifier is widely adopted due to its effectiveness at considering both the center and spread of the class distributions when determining the most probable category for each pixel.
Pros:
- An analyst controls the selected menu of informational classes or categories tailored for a specific purpose and geographic region
- Tied to specific areas of known identity
- Can evaluate results with additional training areas
Cons:
- An analyst imposes a classification structure on the data (which may not match the natural spectral clusters that exist)
- Training data are defined based on informational categories and not on spectral properties (there may be important spectral variation within a class such as forest)
- Careful selection of training areas is time and labor-intensive
- Training areas may not encompass, and subsequently represent, special or unique categories that do not fit the informational classes
9. Training Data Requirements
To ensure reliable classification outcomes, training sites must be carefully chosen to be internally consistent (homogeneous), well distributed across the study area, and large enough to include an adequate number of pixels that accurately capture the statistical properties of each land cover class. If the training data are poorly selected—such as being too small, unrepresentative, or clustered in a limited area—the resulting classification will suffer in accuracy and may misrepresent the actual distribution of land cover types in the imagery.
10. Classification Workflow
The standard process for image classification generally begins with the development of spectral signatures, where representative samples are selected to capture the statistical characteristics of each land cover class. Following signature generation, the classification algorithm assigns each pixel in the image to the most likely land cover category based on these statistical models. Once classification is completed, post-classification filtering is applied to smooth out noise, reduce isolated misclassifications, and enhance the spatial coherence of the results. The workflow concludes with an accuracy assessment, where the classified map is systematically compared against reference data to evaluate its performance. It is important to recognize that every stage in this workflow—signature development, classification, post-processing, and assessment—carries the risk of introducing errors, which can accumulate and influence the overall reliability of the final classification outcome.
11. Improving Classification Accuracy
Classification accuracy can be further enhanced through several strategies. Segmenting the image into meaningful regions prior to classification helps reduce within-class spectral variability and improves the coherence of mapped classes. Integrating supplementary GIS information—such as elevation data, soil maps, or land use records—provides valuable context that supports more precise class assignments. Utilizing imagery captured at different times (multitemporal data) allows the detection of seasonal or phenological changes, which aids in distinguishing between land cover types that may appear similar in a single image. Additionally, employing sophisticated classification algorithms—including artificial neural networks and fuzzy logic techniques—enables the modeling of complex, non-linear relationships in the data, thereby increasing the robustness and reliability of the classification results.
12. Spatial Resolution Effects
When the spatial resolution of an image is increased, it becomes possible to distinguish much smaller details and individual features within the landscape. However, this enhanced detail also means that elements like shadows, surface roughness, or minor variations in texture are more likely to be captured within a single land cover class. As a result, pixels that are supposed to represent the same class—such as a forest or an urban area—may exhibit greater differences in their spectral signatures. This added variability within the class can complicate the classification process, making it harder to achieve consistent and accurate grouping of similar land cover types across the image.
Digital Image Classification for Land Use Land Cover (LULC) Assessment - Part 2
13. Land Use vs Land Cover
Land cover refers to the observable physical elements present on the Earth's surface, such as vegetation, water bodies, bare soil, and built structures. In contrast, land use pertains to how humans utilize these areas—for example, agriculture, urban development, recreation, or conservation. While land cover can typically be identified directly from satellite or aerial imagery based on spectral and spatial patterns, determining land use often requires additional contextual information. This may involve incorporating supporting GIS datasets, such as zoning maps, infrastructure records, or land management plans, to accurately interpret the intent and function of each area. Because land use is tied to human activities and decisions, it cannot always be reliably inferred from surface appearance alone, necessitating the integration of ancillary data for comprehensive analysis.
14. Classification Schemes
Classification schemes establish organized, multi-level frameworks for systematically organizing and labeling types of land cover. Examples include the USGS Land Use and Land Cover (LULC) system, the International Geosphere-Biosphere Programme (IGBP) classification, and the National Wetlands Inventory. These schemes ensure consistency and comparability of land cover categories across different projects and regions by providing clear definitions and hierarchical groupings for each class.
Example of Land Use and Land Cover Classification Using Supervised Classification
Example: Waterfowl management unit:
Given: Two cover types, cattail (CT) marsh and smartweed (SW) moist soil, and a single-band image
Find: Use a maximum likelihood classifier to classify the following hypothetical image:

Formula for the normal distribution (likelihood values):

L(x) = (1 / (σ √(2π))) · e^( −(x − μ)² / (2σ²) )

where μ = class mean, σ = class standard deviation, and x = the spectral value of the pixel.
Solution:
1. Calculate spectral values from cattail and smartweed training fields

2. Compute the likelihood values for cattail using the normal distribution formula
| Spectral Value | Likelihood |
|---|---|
| 10 | 0.00003 |
| 15 | 0.0009 |
| 20 | 0.011 |
| 22 | 0.022 |
| 24 | 0.039 |
| 26 | 0.058 |
| 30 | 0.080 |
| 32 | 0.074 |
| 34 | 0.058 |
| 36 | 0.039 |
| 38 | 0.022 |
| 40 | 0.011 |
| 45 | 0.0009 |
| 50 | 0.00003 |
3. Compute the same values for smartweed
4. Plot the two values to create likelihood curves

5. Assign each candidate pixel to the cover class with the highest likelihood
Example:
A pixel with value 20 would have a smartweed likelihood of 0.080 and a cattail likelihood of 0.011
Decision: the pixel would be classified as smartweed
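The decision rule above can be sketched in a few lines of Python. The class statistics are assumptions made for illustration, chosen to be consistent with the likelihood tables in this example (cattail peaking near DN 30, smartweed near DN 20, both with a standard deviation of roughly 5).

```python
import math

def gaussian_likelihood(x, mu, sigma):
    """Normal-distribution likelihood of spectral value x for one class."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

# Assumed class statistics (mean, std dev) from the training fields;
# illustrative values consistent with the likelihood curves above.
classes = {"cattail": (30.0, 5.0), "smartweed": (20.0, 5.0)}

def classify(x):
    # Maximum likelihood decision: pick the class with the highest likelihood
    return max(classes, key=lambda c: gaussian_likelihood(x, *classes[c]))

label = classify(20)  # smartweed likelihood ~0.080 vs. cattail ~0.011
```

Running `classify` over every pixel DN in the hypothetical image reproduces the maximum likelihood classification asked for in this example.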

15. Accuracy Assessment
Accuracy assessment involves systematically comparing the classified map to trusted reference data to determine how well the classification process has performed. This evaluation uses several key metrics: overall accuracy, which measures the percentage of correctly classified pixels across the entire map; producer’s accuracy, which indicates the likelihood that a reference site is correctly mapped (and highlights omission errors, where features are missed); and user’s accuracy, which reflects the probability that a pixel labeled as a particular class actually represents that class on the ground (addressing commission errors, where features are incorrectly included in a class). By analyzing these accuracy measures, one can better understand both the strengths and limitations of the classification results and identify areas needing improvement.
15.1 Error (Confusion) Matrix
An error matrix, also known as a confusion matrix, systematically matches each pixel’s assigned classification with its actual ground truth category. By doing so, it reveals not only the frequency and types of misclassification errors but also highlights which land cover classes are most commonly mistaken for one another. This comprehensive comparison provides a statistical basis for evaluating the reliability of the entire map, pinpointing specific weaknesses and helping guide improvements in future classification efforts.
15.1.1 Key Statistical Terms
Before we dive into the sample confusion matrix, we need to understand the main statistical terms involved in the process:
Overall Accuracy: The proportion of all correctly classified samples out of the total number of samples. It tells you how well the classifier performed across all classes.
Producer’s Accuracy (Recall): For a given class, the proportion of actual samples that were correctly classified. It reflects omission errors and shows how well each class was detected (sensitivity).
User’s Accuracy (Precision): For a given class, the proportion of predicted samples that are actually correct. It reflects commission errors and shows how reliable each class label is.
Omission Errors: Occur when an item that truly belongs to a class is left out by the classifier. A high omission error indicates many missed true instances (false negatives). For example, if the omission error for Buildings is 0.08 (8%), then 8% of all true buildings were missed.
Commission Errors: Occur when an item is incorrectly included in a class. A high commission error indicates many false positives. For example, if the commission error for Buildings is 0.11 (11%), then 11% of the points labeled as buildings are not buildings.
15.2 Step-by-Step Example: Classification Accuracy Calculation
15.2.1 Confusion Matrix Setup
Suppose you classified an image into three classes: Buildings, Water, and Vegetation. After comparing your predictions to ground truth, you get the following confusion matrix:
| Actual \ Predicted | Buildings | Water | Vegetation |
|---|---|---|---|
| Buildings | 92 | 3 | 5 |
| Water | 4 | 88 | 8 |
| Vegetation | 7 | 6 | 87 |
- Rows: Actual class (ground truth)
- Columns: Predicted class (by classifier)
- Diagonal cells: Correctly classified samples
- Off-diagonal cells: Misclassifications
15.2.2 Calculate Overall Accuracy (OA)
Formula: Overall Accuracy = (correctly classified samples) / (total samples)
- Correct predictions = sum of diagonal cells = 92 (Buildings) + 88 (Water) + 87 (Vegetation) = 267
- Total samples = sum of all cells = 92 + 3 + 5 + 4 + 88 + 8 + 7 + 6 + 87 = 300
Calculation: Overall Accuracy = 267 / 300 = 0.89, or 89%
15.2.3 Calculate Producer’s Accuracy (Recall) for Each Class
Formula: Producer’s Accuracy = (correctly classified samples of a class) / (row total, i.e., all actual samples of that class)
- Buildings: 92 / 100 = 0.92
- Water: 88 / 100 = 0.88
- Vegetation: 87 / 100 = 0.87
15.2.4 Calculate User’s Accuracy (Precision) for Each Class
Formula: User’s Accuracy = (correctly classified samples of a class) / (column total, i.e., all samples predicted as that class)
- Buildings: 92 / 103 ≈ 0.893
- Water: 88 / 97 ≈ 0.907
- Vegetation: 87 / 100 = 0.87
15.2.5 Calculate Omission and Commission Errors
- Omission Error (for Buildings): 1 − 0.92 = 0.08 (8% of true buildings were missed)
- Commission Error (for Buildings): 1 − 0.893 ≈ 0.107 (10.7% of points labeled as buildings are not buildings)
15.2.6 Summary Table of Metrics
| Class | Producer’s Accuracy | User’s Accuracy | Omission Error | Commission Error |
|---|---|---|---|---|
| Buildings | 0.92 | 0.893 | 0.08 | 0.107 |
| Water | 0.88 | 0.907 | 0.12 | 0.093 |
| Vegetation | 0.87 | 0.87 | 0.13 | 0.13 |
Overall Accuracy = 0.89 or 89%
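All of the metrics above can be computed in a few lines with NumPy. Note one assumption: the Vegetation row is taken as (7, 6, 87) so that every row totals 100 and the sum is 300, consistent with the totals and accuracies reported in this example.

```python
import numpy as np

# Confusion matrix from the example: rows = actual class, cols = predicted.
# Class order: Buildings, Water, Vegetation.
cm = np.array([[92,  3,  5],
               [ 4, 88,  8],
               [ 7,  6, 87]])

overall = cm.trace() / cm.sum()              # correct predictions / total samples
producers = cm.diagonal() / cm.sum(axis=1)   # recall per class (diagonal / row total)
users = cm.diagonal() / cm.sum(axis=0)       # precision per class (diagonal / column total)
omission = 1 - producers                     # missed true instances per class
commission = 1 - users                       # false positives per class
```

The resulting values match the summary table: overall accuracy 0.89, producer's accuracies (0.92, 0.88, 0.87), and user's accuracies (0.893, 0.907, 0.87).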
15.2.7 Interpretation of Results
- Overall Accuracy tells you how well your classifier performed across all classes.
- Producer’s Accuracy shows how well each class was detected (sensitivity).
- User’s Accuracy shows how reliable each class label is (precision).
- Omission/Commission Errors help identify where your classifier is missing or mislabeling classes.
For the results of our example, buildings are classified with high recall and precision, indicating few missed buildings and few false positives. Water has the highest precision, meaning most predicted water points are correct. Vegetation is slightly lower but still strong in both metrics.
15.3 Example 2 on Accuracy Calculation
You assessed classification accuracy and tabulated your values in the following matrix:

From that error matrix, one can summarize the classification accuracy as:

In interpreting the above table, while the producer may claim that 94% of the time an area that was lawn on the ground was identified as such on the map, the user finds that only 88% of the time will an area the map labels as lawn actually be lawn on the ground.
To Do
- Submit materials for Digital Image Classification
The Different Application of the UAS
In this section, you will become familiar with and understand the different civilian and commercial applications of the UAS as they stand today. The UAS applications that concern us most are remote sensing applications, in which the UAS is replacing manned aircraft as an acquisition platform. Remote sensors such as cameras and LiDAR systems have shrunk in size and weight to make them more suitable for lightweight small UAS, as was mentioned in the Payload section of Lesson 2. Remote sensing applications derived from sensors onboard a UAS are more or less similar to those one can expect from a manned system. Manned aircraft can carry larger and heavier payloads, which opens the door for additional applications that require large sensors such as IFSAR. Reported applications for the UAS include the following:
- Remote Sensing Applications
- Precision Agriculture: Precision agriculture is the most widely used civilian application of the UAS. Farmers and the agricultural community are very optimistic about the prospect of using UAS for their daily activities. The assigned readings should provide you with a fairly decent idea of the topic.
- Rangeland Management
- Landslides Research: Engineers are using UAS for land monitoring and management. This field is also witnessing a promising future with the use of the UAS for their daily repetitive monitoring activities.
- Ocean and Coastal Research
- Contaminant Spills and Pollution
- Landfill Mapping and Monitoring
- Engineering and Surveying
- Corridor Mapping
- Mining site mapping
- Crop and aquaculture farm monitoring
- Mineral exploration
- Spectral and thermal analysis
- Critical infrastructure monitoring, including power facilities, ports, and pipelines
- Commercial photography, aerial mapping and charting, and advertising
- Disaster response, including search and support to rescuers, in situations such as:
- fires,
- floods and hurricanes,
- landslides
- Medical Supplies Delivery
- Traffic monitoring, and
- Other environmental control and monitoring.
- General Applications and Services
- media resources
- security awareness
- communications and broadcast, including news/sporting event coverage
- cargo transport
Details on some of these applications are given in chapter 6 of the textbook and the assigned readings listed below. Try to visit the UAV Applications page on this site, as it has interesting information about different aspects of the UAS and its applications. Another way to explore potential applications of UAS-derived products is to look into the different applications of Geographic Information Systems (GIS), as they are closely related. In this regard, ESRI has published a good educational overview on their website highlighting the different applications of GIS.
To Read
- Chapter 4 of the textbook: Introduction to Unmanned Aircraft Systems, 2nd edition
- Chapter 20 of the textbook: Elements of Photogrammetry with Applications in GIS, 4th edition
- Chapters 8 to 19 of the textbook: Fundamentals of capturing and processing drone imagery and data
To Do
Watch the webinar: "Applying Drones to Surveying and Engineering Projects Today"
UAS for Disaster Response
In this section, you will become familiar with a widely used application of the UAS: disaster response.
One of the most widely utilized applications of the UAS is disaster response. The UAS is particularly useful for tasks that involve one or all of the three Ds: dirty, dangerous, and dull.
Dirty: open to interpretation and to the operational environment, but it can be described as flying over oil, nuclear, or gas installation sites where accidents have occurred, such as the Japanese Fukushima Daiichi nuclear plant, to take air samples or imagery.
Dangerous: refers to situations where a pilot on a similar mission could become a casualty due to hazardous operations.
Dull: when repetitive tasks are required over and over again. An example of a dull mission is border surveillance or maritime patrols that need eyes in the sky for hours at a time.
For a UAS to suitably serve disaster response, it needs more capabilities beyond its suitability for the three Ds. Such capabilities are defined by survivability, durability, and adaptability.
Survivability: The survivability of a UAS in a disaster response scenario relies on an efficient communications system. A UAS search and rescue mission should consider three forms of communications:
- communication between the UAS operator and the UAS;
- communication between the operator and the victims on the ground;
- communication between other rescue ground machines and their teams.
Durability: The system's ability to survive a harsh or unpredictable operational environment, with hazards such as falling debris, changing conditions, and loss of signal. Designers of UAS operations in such environments usually rely on multi-level UASs. An example of this is the use of a High Altitude Long Endurance (HALE) UAS in the operation to carry equipment, provide a backup communication link, and provide a high-altitude overview of the site to plan emergency exit routes.
Adaptability: The ability of a mini-UAS, with its small size, to overcome fallen debris and navigate unpredictably narrow spaces while maintaining its ability to sense changes in an unpredictable and uncertain environment.
As examples of the use of UAS for disaster response, we will single out the UAS use for forest fire disasters.
UAS for Forest Fires:
Remote sensing techniques have proven to be very effective in mapping and monitoring fires and in giving feedback to first responders. Satellite remote sensing has limited capabilities in supporting fire response because most available satellites have limited spatial resolution (limited detail) and only occasionally orbit over the fire site, while fire monitoring needs continuous (24/7) coverage. Satellite imagery can, however, be useful for monitoring fires on a regional or national level, though not at the fire-front micro level. Thermal imagery from the MODIS sensors on board the Terra and Aqua satellites, with a resolution of 1 km, was used by the U.S. Department of Agriculture Forest Service Active Fire Mapping Program to monitor regional fires across the U.S. Besides the coarse resolution of its imagery, MODIS passes over any given location only twice daily, which is too infrequent to track the evolution of a fire and support firefighters in real time.
As an alternative to satellite imagery, aerial imagery from manned and unmanned aircraft is frequently used to provide the needed frequent aerial observations of a fire. Two approaches have been utilized in using the UAS for fire monitoring. The first uses a High Altitude Long Endurance (HALE) UAS, which can fly high and provide imagery with better resolution and better frequency than satellites. However, a HALE UAS is expensive to procure and maintain.
The second approach uses fleets of small UAS working cooperatively to provide more detailed information on the fire and its perimeter. In some cases, both approaches are utilized together with the HALE providing an overview image of the fire while small UASs are used to transmit high definition imagery in real time for the perimeter areas of the fire.
Here in the U.S., several wildfire monitoring programs have been adopted over the years. An example of such programs is the joint cooperation between NASA, General Atomics Aeronautical Systems, Inc., and various government agencies involved in fire research. The project used the General Atomics ALTUS II UAS, which is the civilian version of the Predator. Among the sensors in the ALTUS II payload was a thermal multispectral scanner. Imagery was transmitted to the ground station through INMARSAT geostationary satellites. Once the imagery is received at the ground station, it goes through geo-referencing and ortho-rectification processes, which convert it to a geo-referenced map before it goes into the hands of the field team. NASA published images (Figure 8.3) of the Grass Valley/Slide fire near Lake Arrowhead/Running Springs in the San Bernardino Mountains of Southern California acquired by the thermal-infrared imaging sensors on board NASA's Ikhana unmanned research aircraft. For more information on past NASA collaborative efforts in the field of different applications for UAS, visit UAS Integration in the NAS.

To Read
- CNN article “Fighting fire with data, spacecraft, drones."
- Read the paper "Towards Low-cost Cooperative Multispectral Remote Sensing Using Small Unmanned Aircraft Systems."
- Read the article Commercial Drone Applications Rapidly Expanding as a Huge Spotlight is Currently Shining on Drone Industry
- Review the presentation slides NASA Research to Expand UAS Operations for Disaster Response.
UAS Challenges in Certain Applications
In this section, we will discuss operational challenges in using the UAS for certain applications.
So far, we have read and discussed materials about the successful utilization of the unmanned aircraft for a variety of applications. However, some of such applications are found to be challenging due to different reasons. Among such reasons are the following:
- The FAA hesitates to allow UASs to fly during natural disaster situations such as floods and hurricanes, mainly because a UAS operating during a storm lacks alternative communications capabilities. During storms, the air traffic control capabilities in the affected area are usually limited, risking the safety of the UAS, which usually operates without sense-and-avoid instruments.
- UAS offers many advantages over conventional methods of traffic monitoring and transportation planning for police, emergency responders, and DOTs. A UAS can move from one location to another at higher speed and is not restricted to the specific routes used by ground vehicles. In addition, a UAS can fly through hazardous or inclement weather conditions. However, UAS used for traffic monitoring are challenged in urban canyon areas, where visibility of the traffic on the ground is obscured by high-rise buildings.
- Small UASs cannot maintain their flight routes during stormy conditions. The light weight of the UAS makes it vulnerable to gusty winds.
- Here in the U.S., it is difficult to obtain proper FAA approval to fly civilian projects whenever there are people in the project area, even after the issuance of Part 107. Such restrictions are expected to diminish in the future as the FAA continues its efforts to integrate the UAS into the NAS.
To Read
- Chapter 6 of the textbook: Introduction to Unmanned Aircraft Systems, 2nd edition
Summary and final tasks
Summary
Congratulations! You have just finished Lesson 9, Civilian and Commercial Applications of the Unmanned Aerial System. You may notice that the use of UAS for civilian applications extends to almost any application offered by manned aircraft. In fact, the UAS provides more opportunities than manned aircraft: its small size, maneuverability, and low-cost operation make it more useful and more affordable, especially for small projects and projects that may involve hazardous operational conditions. UAS applications are expanding, and we hear about new applications every day. Amazon, for example, recently unveiled plans for a UAV package delivery service. What do you think is the coolest application the UAS should be used for that no one has thought of until now? Post your opinion in the discussion forum.
Final Tasks
Activities
Discussion Forum: UAS and Ethics
A UAS is capable of collecting very high definition/resolution imagery of people's backyards and perhaps through windows. The public in the United States has expressed two main opinions about allowing UAS to fly over populated areas, especially when used for surveillance and search and rescue missions. Express your opinion on the following two public stances on the topic:
1) "The main threat to personal privacy posed by the ever-expanding use of UAVs in U.S. airspace is the substantial potential for violations of the protection against unreasonable search and seizure ensured by the Fourth Amendment to the Constitution." Therefore, we should limit or prevent the use of UAS for such purposes.
2) Using UAS for such purposes should be allowed, as it is no different from allowing a low-flying helicopter on an imagery acquisition mission over populated areas, or from having someone in a neighborhood carrying a pair of high-resolving-power binoculars.
Post your opinion on the discussion board. Respond to at least one posting from your peers. (3 points or 3%)
Deadline for this assignment is on the 5th day of lesson 10.
| 1 | Study lesson 9 materials on CANVAS/Drupal and the textbook chapters assigned to the lesson |
|---|---|
| 2 | Complete quiz 9 |
| 3 | Submit your Final Project Report and Presentation Slides |
| 4 | Start your first post for the discussion on "The UAS and Ethics" |
| 5 | Submit materials for exercise 3 - Digital Image Classification |
| 6 | Attend the weekly call on Thursday evening at 8:00pm ET |