Lesson 3. Remote Sensing of the Atmosphere

Motivate...

By this point in the course, you've already encountered many different weather observations (temperature, dew point, wind, etc.). But, the observations we've learned about so far have something in common: They're collected by a sensor in direct contact with the medium being measured (called in situ measurements). Obviously, such measurements aren't possible over the entire breadth and depth of the atmosphere. We can't have weather stations covering every single point on Earth and throughout the atmosphere!

To help fill the many gaps between our direct measurements, we need to measure the atmosphere from afar, or "remotely." Remote sensing is just that -- taking a measurement without having a sensor in direct contact with the medium being measured. As an example, your body contains both in situ sensors (your skin) and remote sensors (your eyes). You don't have to physically touch a red-hot stove element (opens in a new window) to know that it is hot. Your eyes can sense the light coming from the heating coil, and you then make an interpretation that the burner must be hot.

So what types of remote sensing instruments do meteorologists use? I'm sure that you are very familiar with satellite and radar images shown on TV weathercasts and available online or on your favorite weather app. These come from two very important types of remote sensing observations, and we will cover them in depth in this lesson. In addition to radar and common satellite images, many more types of remote sensing data exist, which measure a vast array of atmospheric properties. Although many of these data lie beyond the scope of this course, they all have something in common: All remote sensing data is based on measurements of electromagnetic radiation.

A collage of remote-sensing images.

Meteorologists use a vast array of remote sensing instruments to measure the atmosphere. The key to properly interpreting each data set is to understand the advantages and limitations of the instrument. To aid in this understanding, you must first familiarize yourself with the properties and laws of electromagnetic radiation.
Credit: David Babb

Though the word "radiation" generally carries the tone of dire consequences for much of the public, meteorologists routinely and harmlessly harness part of the broad spectrum of electromagnetic radiation to help them diagnose the present state of the atmosphere and then make predictions. One of the most important things to keep in mind when using remote sensing data is that no perfect, one-size-fits-all remote sensors exist. All remote sensing instruments have limitations! Each type of remote sensing instrument is designed to measure something specific, and often it's not what you're actually interested in observing! The measurements taken by remote sensors only become useful when interpreted or converted into the observations that you really desire, but to make this conversion, we have to make assumptions. As in any aspect of life, sometimes assumptions can lead us astray, and ignoring the limitations of remote sensing data is a sure invitation for making mistakes.

Before we get into how to use satellite and radar imagery in weather forecasting, we have to start with the basics of radiation. Though this topic may seem more like physics than meteorology to you, I'd argue that good weather forecasters must understand the underlying science behind satellite and radar imagery in order to effectively and correctly use them. Let's get started!

Lesson Objectives

After completing this lesson, you should be able to:

  • explain what is meant by the electromagnetic spectrum and list what portions of the EM spectrum are used in meteorological remote sensing. (2)
  • describe the four key laws of radiation: Planck's, Wien's, Stefan-Boltzmann, and Kirchhoff's Laws.(2)
  • explain the three fundamental processes that can occur when radiation encounters a medium: transmission, absorption, and scattering.(2)
  • list and explain the major classifications of clouds typically observed in the atmosphere, as well as identify these cloud types from photographs.(1)
  • distinguish between the two basic types of meteorological satellites.(2)
  • explain the process of creating a visible satellite image and correctly interpret visible satellite images.(1,2)
  • explain the process of creating an infrared satellite image and correctly interpret infrared satellite images.(1,2)
  • explain the process of creating a water vapor satellite image and correctly interpret water vapor satellite images.(1,2)
  • explain how radar imagery is created, interpret radar imagery, and explain some meteorological factors that can affect the interpretation of radar imagery.(1,2)
  • distinguish between various types of remote sensing imagery, taking care to only interpret attributes of the atmosphere provided by each image type.(1)

(Numbers denote mapping to course objectives)

Shedding Light on the Electromagnetic Spectrum

Prioritize...

At the completion of this section, you should be able to describe what is meant by "electromagnetic radiation" and how it is generated. You should also be able to explain the various types of electromagnetic radiation, specifically the portions of the electromagnetic spectrum that meteorologists use to observe the atmosphere.

Read...

If we're going to talk about remote sensing, we have to start by talking about radiation. While the mention of "radiation" may conjure up thoughts about nuclear reactors or nuclear bombs, it turns out that the scientific use of the term "radiation" is considerably more broad. Radiation is defined as the emission and transfer of energy in the form of electromagnetic waves or particles (photons). In fact, the vast majority of radiation that you encounter on a daily basis has nothing to do with nuclear radiation at all. From an everyday light bulb, to the microwave that heats your frozen lunch, to the mobile phone that you use daily, you're surrounded by devices that make use of radiation. Even light from the sun is a form of radiation, so radiation is occurring all around you!

A boy at the edge of a pond making ripples with his hand.

Just as moving your hand back and forth creates ripples on a pond, an oscillating electron creates electromagnetic waves that propagate away from the source.
Credit: Andrew and the Pond more / Ethan Fox / CC BY-NC 2.0 (opens in a new window)

At some point in a science class, you probably studied the electromagnetic ("EM") spectrum of radiation, but how is this electromagnetic spectrum created? To begin with, you probably know that the building blocks of all matter are atoms and molecules. Within these atoms and molecules are smaller particles which have positive and negative charges -- protons and electrons, respectively. These charged particles tend to oscillate or vibrate (especially electrons). Without getting into the details, physics tells us that any charged particle like an electron has an electrical field surrounding it (electrical charges and electrical fields go hand-in-hand). Furthermore, moving charges also possess magnetic fields. Thus, when an electron oscillates, its surrounding electric and magnetic fields change. Like moving your hand rapidly back and forth in a pool of water, oscillating electrons send out ripples of energy (that is, "waves") that have both electrical and magnetic properties (hence, electro-magnetic radiation).

So, how is it that different kinds of electromagnetic waves exist to create an entire spectrum? The wavelength of any wave is simply the distance between two consecutive similar points on the wave (for example, from wave crest to wave crest). Now think about our pond analogy above. If you move your hand slowly in the water, you will create a few waves with long wavelengths. However, if you move your hand rapidly in the water, you create lots of waves with very short wavelengths. The same is true for an oscillating electron. If the oscillation is very quick (we say the oscillation has a high frequency), then the EM radiation produced will have a short wavelength. If the oscillation is slower (having a lower frequency) then the electromagnetic waves will have long wavelengths.
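
Since all electromagnetic waves travel at the speed of light, frequency and wavelength are two sides of the same coin, linked by the standard relation:

```latex
\lambda = \frac{c}{\nu}, \qquad c \approx 3 \times 10^{8}\ \mathrm{m\,s^{-1}}
```

For example, an oscillation frequency of 5 × 10¹⁴ Hz corresponds to a wavelength of (3 × 10⁸ m/s) / (5 × 10¹⁴ Hz) = 6 × 10⁻⁷ m, or 0.6 microns -- squarely in the visible range.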

Now, the frequency at which electrons oscillate is essentially set by the temperature of the matter in which the electron resides (remember, we defined an object's temperature as the average kinetic energy of its atoms or molecules). The higher the temperature, the higher the frequency of oscillation. So, when temperature increases, the wavelength of the electromagnetic radiation emitted by the electron decreases. Conversely, as temperature decreases, the frequency of oscillation slows and the wavelength of the emitted electromagnetic radiation increases. For a visual, check out the short video below (0:57) demonstrating the relationship between oscillation frequency and wavelength.

PRESENTER: Let’s explore a simple model of how oscillation frequency is tied to the wavelength of electromagnetic radiation.

The frequency at which electrons oscillate is essentially set by the temperature of the matter in which the electron resides. Lower temperatures yield lower frequencies of oscillation. Here, we’ve set our temperature on the low side, and you can see the molecule oscillating fairly slowly, or in other words, at a low frequency. The wavelength of the emitted radiation is also relatively long.

But, when temperature increases, the oscillations get faster, which makes for a higher oscillation frequency. This high frequency means that the emitted electromagnetic radiation has a relatively short wavelength. For comparison again, we can decrease our temperature to watch the oscillation frequency slow, and the wavelength of the emitted radiation increase.

Before leaving this discussion, let me add a quick caveat: We have discussed the generation of EM radiation by a single oscillating charged particle. In reality, matter exists as a system of charged particles, which means that the resulting electromagnetic radiation field is much more complex than I have outlined here. We defined temperature by the average motion of the molecules because the motion of individual molecules varies and not all molecules have the same energy state. This means that a spectrum of electromagnetic radiation is generated from any system of matter that contains many charged particles, all oscillating at different frequencies. I should also note that the vibrating molecule model for electromagnetic emission only explains the existence of low-energy waves (those having lower frequencies than visible light). High-frequency EM emissions are still generated by moving charges, but a different mechanism is required to generate these high-energy waves (there are more details in the Explore Further section below if you are interested).

With that caveat out of the way, let's now look at the entire spectrum of electromagnetic radiation. First, note that the range in wavelengths for different types of electromagnetic radiation is staggering -- from hundreds of meters to the size of an atom's nucleus. Also note that visible light does indeed qualify as electromagnetic radiation, despite taking up only a tiny sliver of the entire spectrum. This means that human eyes are completely blind to almost all electromagnetic radiation (most wavelengths are invisible to the naked eye).

A chart of the various types of EM radiation along with a comparison of wavelengths to common objects.

The spectrum of electromagnetic radiation. In the long-wave portion of the spectrum, radio and microwaves with wavelengths of hundreds of meters to a few millimeters dominate. As wavelengths decrease to a range of tens of microns to 1/100th of a micron (the size of a bacterium or virus), we label these emissions as infrared, visible, and ultraviolet light. Finally, in the very short-wave portion of the spectrum, with wavelengths of less than a nanometer (smaller than individual molecules and atoms), X-ray and gamma ray emissions can be found.
Credit: David Babb

For atmospheric remote sensing, we use electromagnetic radiation in the microwave, infrared, and visible bands. Perhaps most familiar to you is the visible portion of the electromagnetic spectrum. Indeed, wavelengths of EM radiation that span from approximately four tenths of a micron (a micron is one-millionth of a meter) to a little more than seven tenths of a micron compose the part of the spectrum that meteorologists use to generate "visible" satellite images (which we'll cover later in the lesson).

Beyond the longest wavelengths associated with visible light lies the infrared ("beyond red") band of the electromagnetic spectrum. A majority of the infrared spectrum, spanning from approximately 3 to 100 microns, essentially constitutes "terrestrial radiation" because the oscillating charges that emit at these wavelengths are consistent with temperatures commonly observed at the Earth's surface and in its atmosphere. Thus, terrestrial radiation lends itself to use in infrared satellite imagery (of which there are several applications we'll study soon).

Microwaves are next in line in the electromagnetic spectrum's hierarchy, with wavelengths spanning from 100 microns to about 30 centimeters. Most radar imagery used in weather forecasting employs artificially produced microwaves ranging in wavelength from 3 to 10 centimeters (more on radar later in the lesson).

Now that you know the terminology behind the different regions of the electromagnetic spectrum, we need to discuss the properties by which objects emit radiation. These properties have been grouped into what I call the "four laws of radiation." Read on.

Explore Further...

As I mentioned previously, the discussion in this section focused on the generation of low-energy electromagnetic waves (those with lower frequencies than visible light). If you want to explore further than what I present here, many online sources discuss the various regions of the electromagnetic spectrum. For starters, check out: the Wikipedia page on the electromagnetic spectrum (opens in a new window).

Beyond the visible portion of the electromagnetic spectrum is the very short-wavelength region that includes gamma rays, X-rays, and ultraviolet light. The shortest wavelengths belong to gamma rays, which have wavelengths as short as one trillionth of a meter (unimaginably small). It turns out that the energy required for matter to emit electromagnetic radiation with wavelengths on the order of a few microns (or less) surpasses that which can be generated by an oscillating molecule. In fact, at such energies, the molecular and atomic bonds may break down completely, leaving only single atoms (or even single electrons!). Therefore, a few new mechanisms are needed to explain very short-wave EM emissions.

Perhaps you remember the Bohr model of an atom (opens in a new window) from high school chemistry that shows the electrons orbiting a nucleus of protons and neutrons (like a mini solar system). Suffice it to say, things are a bit more complicated than that, but I'll stick with this model for simplicity. In an unenergized state (called the ground state), an atom has a number of electrons in various orbits (or shells) around the nucleus. However, if sufficient energy is added to the atom, one or more of its electrons will be ejected into higher orbits around the nucleus (because they have more energy, they can better overcome the pull of the nucleus). Then, when these electrons fall back down to their original orbit, they must jettison the extra energy. They emit this energy in the form of a photon (a small packet of EM radiation) with a frequency that corresponds to the energy released. Such photons are typically found in the near-IR, visible, and ultraviolet portions of the EM spectrum.
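
The energy such a photon carries is set by its frequency through Planck's relation (a standard result, with h being Planck's constant):

```latex
E = h\nu = \frac{hc}{\lambda}, \qquad h \approx 6.63 \times 10^{-34}\ \mathrm{J\,s}
```

A visible photon at 0.5 microns, for instance, carries about 4 × 10⁻¹⁹ J. Because photon energy grows as wavelength shrinks, ultraviolet, X-ray, and gamma-ray photons require correspondingly more energetic processes to produce.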

At even higher temperatures, electrons may break their bonds with the atomic nucleus entirely, forming what is known as a plasma. Plasmas are a fourth state of matter (not a solid, liquid, or gas) that consist of positive ions (left-over atomic nuclei) and free electrons. In a plasma, electromagnetic radiation is generated when the speed or direction of an electron is altered by a positive ion or another electron. Because of the unrestrained nature of electrons within a plasma, they can travel at tremendous speeds and thus can generate very high-energy photons. Ultraviolet waves, X-rays, and gamma rays are typically generated by plasmas.

Although such high-energy radiation can be generated artificially (the medical use of X-rays, for example), most of the sources for natural high-energy EM emission originate in space. The plasma of our sun emits copious amounts of X-rays and ultraviolet radiation, as well as gamma rays during eruptions of solar flares. Furthermore, the most prodigious gamma-ray bursts come from interstellar events such as supernovae, black holes, and quasars. Check out the image below, which shows gamma ray emission from the entire sky. Note that the strongest gamma ray emissions are concentrated along the disk of the Milky Way Galaxy.

A depiction of the night sky in the gamma ray portion of the EM spectrum.

This all-sky view from the Gamma-ray Large Area Space Telescope (GLAST) reveals bright emission in the plane of the Milky Way (center), bright pulsars and super-massive black holes.
Credit: NASA/DOE/International LAT Team

The Four Laws of Radiation

Prioritize...

After completing this section, you should be able to recite and explain the four laws of radiation. Your explanations should contain specific examples because you will be required to apply these laws in your understanding of atmospheric remote sensing.

Read...

In order to best make use of the information that comes to us via the electromagnetic spectrum, we need to understand some basic properties of radiation. A complete treatment of radiation theory would take at least an entire course (indeed, folks pursuing a degree in meteorology are usually required to take a Radiative Transfer course). Instead, you just need to know the fundamental principles describing the electromagnetic radiation that originates from an object and how that radiation travels through space (discussed in the next section).

For electromagnetic radiation, there are four "laws" that describe the type and amount of energy being emitted by an object. In science, a law is used to describe a body of observations. At the time the law is established, no exceptions have been found that contradict it. The difference between a law and a theory is that a law simply describes something, while a theory tries to explain "why" something occurs. As you read through the laws below, think about observations from everyday life that might support the existence of each law.

Planck's Law

Planck's Law can be generalized as such: Every object emits radiation at all times and at all wavelengths. Does that surprise you? We know that the sun emits visible light (below left), infrared waves (opens in a new window), and ultraviolet waves (below right), but did you know that the sun also emits microwaves, radio waves, and X-rays (opens in a new window)? Of course, the sun is a big nuclear furnace, so it makes sense that it emits all sorts of electromagnetic radiation. However, Planck's Law states that every object emits over the entire electromagnetic spectrum. That means that you emit radiation at all wavelengths, and so does everything around you!

A view of the sun in the visible and ultraviolet portions of the spectrum.

Two images of the sun taken at different wavelengths of the electromagnetic spectrum. The left image shows the sun's emission at a wavelength in the visible range. The right image is the ultraviolet emission of the sun. Note: colors in these images and the ones above are deceptive. There is no sense of "color" in spectral regions other than visible light. The use of color in these "false-color" images is only used as an aid to show radiation intensity at one particular wavelength.
Credit: NASA/JPL

Now, before you dismiss this statement out-of-hand, let me say that you are not emitting X-rays in any measurable amount (thank goodness!). The mathematics behind Planck's Law hinge on the fact that there is a wide distribution of vibration speeds for the molecules in a substance. This means that it is possible for matter to emit radiation at any wavelength, and in fact it does, but the amount of X-rays you're currently emitting, for example, is unimaginably small.

Another common misconception that Planck's Law dispels is that matter selectively emits radiation. Consider what happens when you turn off a light bulb. Is it still emitting radiation? You might be tempted to say "no" because the light is off. However, Planck's Law tells us that while the light bulb may no longer be emitting radiation that we can see, it is still emitting at all wavelengths (most likely, it is emitting copious amounts of infrared radiation). Another example that you hear occasionally on TV weathercasts goes something like this: "When the sun sets, the ground begins to emit infrared radiation..." That's just not how it works. The ground doesn't "start" emitting when the sun sets. Planck's Law tells us that the ground is always emitting infrared radiation (and radiation at other wavelengths), a fact that we'll explore later on in this lesson.
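
If you'd like to see the numbers behind these claims, here's a minimal sketch of the Planck function in Python (the standard blackbody formula; the 288 K temperature and the two wavelengths are just illustrative choices):

```python
import numpy as np

H = 6.626e-34    # Planck constant (J s)
C = 2.998e8      # speed of light (m/s)
K_B = 1.381e-23  # Boltzmann constant (J/K)

def planck(wavelength_m, temp_k):
    """Blackbody spectral radiance B(lambda, T) in W m^-2 sr^-1 m^-1."""
    return (2.0 * H * C**2 / wavelength_m**5
            / (np.exp(H * C / (wavelength_m * K_B * temp_k)) - 1.0))

# A body near room temperature (~288 K) emits strongly at 10 microns...
print(planck(10e-6, 288.0))   # roughly 8e6
# ...while its emission of visible light (0.5 microns) is nonzero but
# unimaginably small -- about 34 orders of magnitude weaker.
print(planck(0.5e-6, 288.0))  # roughly 2e-28
```

The emission never drops to exactly zero at any wavelength, which is Planck's point: everything emits everywhere in the spectrum, just in wildly unequal amounts.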

Wien's Law

So, Planck's Law tells us that all matter emits radiation at all wavelengths all the time, but there's a catch: Matter does not emit radiation at all wavelengths equally. This is where the next radiation law comes in. Wien's Law states that the wavelength of peak emission is inversely proportional to the temperature of the emitting object. Put another way, the hotter the object, the shorter the wavelength of maximum emission. You have probably observed this law in action without even realizing it. Want to know what I mean? Check out this steel bar (opens in a new window). Which end might you pick up? Certainly not the right end... it looks hot. Why does it "look hot?"

Well, for starters, the peak emission for the steel bar (even the part that looks really hot) is in the infrared part of the spectrum. But, the right side of the bar is hotter than the left, and therefore the right side has a shorter wavelength of peak emission compared to the left side. You see this shift in the peak emission wavelength as a color change from red to orange to yellow as the metal's temperature increases. In fact, the right side is hot enough that its peak emission is pretty close to the visible part of the spectrum (which has shorter wavelengths than infrared); therefore, a significant amount of visible light is also being emitted from the steel.

Judging by the look of this photograph, the steel has a temperature of roughly 1500 kelvins, resulting in a max emission wavelength of 2 microns (remember visible light is 0.4-0.7 microns). Here is a chart showing how I estimated the steel temperature (opens in a new window). To the left of the visibly red metal, the bar is still likely several hundred degrees Celsius. However, in this section of the bar, the peak emission wavelength is far into the infrared portion of the spectrum, and no visible light emission is discernible with the human eye.
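
For the curious, Wien's Law has a simple quantitative form, where b is the standard displacement constant:

```latex
\lambda_{\max} = \frac{b}{T}, \qquad b \approx 2898\ \mu\mathrm{m\,K}
```

Plugging in the steel bar's roughly 1500 kelvins gives 2898 / 1500 ≈ 1.9 microns, consistent with the 2-micron estimate above. The same formula yields about 0.5 microns for the sun (~5800 K) and about 10 microns for the earth (~288 K), previewing the comparison below.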

So, now that we've established Wien's Law, how do we apply it to the emission sources that affect the atmosphere? Consider the chart below, showing the emission curves (called Planck functions) for both the sun and the earth.

A graph of the energy output of the sun versus the earth as a function of wavelength.

The emission spectrum of the sun (orange curve) compared to the earth's emission (dark red curve). The x-axis shows wavelength in factors of 10 (called a "log scale"). The y-axis is the amount of energy per unit area per unit time per unit wavelength. I have kept the units arbitrary because they are quite messy. The important message is that the sun's emission spectrum peaks in the visible spectrum, while the earth's emission spectrum peaks in the infrared (because of Wien's Law).
Credit: David Babb

Note the idealized spectrum for the earth's emission (dark red line) of electromagnetic radiation compared to the sun's electromagnetic spectrum (orange line). The radiating temperature of the sun is nearly 6,000 degrees Celsius compared to the earth's measly 15 degrees Celsius. This means that given its high radiating temperature, the sun's peak emission occurs in the visible light portion of the spectrum, near 0.5 microns (toward the short-wave end of the EM spectrum). That wavelength is also the reason why we see the sun as having a yellow hue. Meanwhile, the earth's peak emission is located in the infrared portion of the electromagnetic spectrum (having longer wavelengths, by comparison).

By the way, even though we see the sun as having a yellow quality because of its peak emission near 0.5 microns, other stars can take on a different look. Some stars in our galaxy are somewhat cooler and exhibit a reddish hue, while others are much hotter and appear blue. The constellation Orion contains the red supergiant Betelgeuse and several blue supergiants, the largest being Rigel and Bellatrix. Can you spot them in this photograph of Orion (opens in a new window)?

Stefan–Boltzmann Law

Look again at the graph of the sun's emission curve versus the earth's emission curve (above), and take note of the energy values on the left axis (for the sun) and right axis (for the earth). The first thing to notice is that the energy values are given in powers of 10 (that is, 10⁶ is equal to 1,000,000). This means that if we compare the peak emissions from the earth and sun, we see that the sun at its peak wavelength emits nearly 3,000,000 times more energy than the earth at its peak. In fact, if we add up the total energy emitted by each body (by adding the energy contribution at each wavelength), the sun emits over 180,000 times more energy per unit area than the earth!

I calculated the number above using the third radiation law that you need to know, the Stefan-Boltzmann Law. The Stefan-Boltzmann Law states that the total amount of energy per unit area emitted by an object is proportional to the 4th power of the temperature. You won't need to do any specific calculations with the Stefan-Boltzmann Law, but you should understand that as temperature increases, so does the total amount of energy per unit area emitted by an object (hotter objects emit more total energy per unit area than colder objects). This relationship is particularly useful when we want to understand how much energy the earth's surface emits in the form of infrared radiation. It will also come in handy when we study the interpretation of satellite observations of the earth, later on.
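
If you want to check the order of magnitude of that sun-versus-earth figure yourself, here's a minimal sketch in Python (the exact ratio depends on which radiating temperatures you assume):

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant (W m^-2 K^-4)

def emitted_flux(temp_k):
    """Total blackbody emission per unit area: sigma * T^4 (W/m^2)."""
    return SIGMA * temp_k ** 4

sun = emitted_flux(5778.0)   # ~6.3e7 W/m^2 for the sun's photosphere
earth = emitted_flux(288.0)  # ~390 W/m^2 for the earth's surface
print(sun / earth)           # ~160,000: the same order of magnitude
                             # as the figure quoted above
```

Note how sensitive the result is to temperature: because of the fourth power, doubling an object's temperature multiplies its emission by a factor of 16.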

Kirchhoff's Law

In the preceding radiation laws, we have been talking about the ideal amount of radiation that an object can emit. This theoretical limit is called "black body radiation." However, the actual radiation emitted by an object can be much less than the ideal, especially at certain wavelengths. Kirchhoff's Law describes the linkage between an object's ability to emit at a particular wavelength and its ability to absorb ("take in") radiation at that same wavelength. In plain language, Kirchhoff's Law states that an object at a constant temperature that absorbs radiation efficiently at a particular wavelength will also emit radiation efficiently at that wavelength. One implication of Kirchhoff's Law is that if we want to measure a particular constituent in the atmosphere, such as water vapor, we need to choose a wavelength that water vapor emits efficiently (otherwise we wouldn't detect it). However, since water vapor readily emits at our chosen wavelength, it also readily absorbs radiation at this wavelength, which presents some challenges for our measurements!
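
Stated compactly, with ε representing emissivity (emitting efficiency relative to a black body) and α representing absorptivity at a given wavelength λ, Kirchhoff's Law reads:

```latex
\varepsilon_{\lambda} = \alpha_{\lambda}
```

So a gas that absorbs nearly 100 percent of the radiation at some infrared wavelength (α near 1) is also a nearly perfect emitter at that wavelength -- exactly the water vapor trade-off described above.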

We'll look at the implications of Kirchhoff's Law in a later section. For now, we need to wrap up our look at radiation by examining the possible fates of a "beam" of radiation as it passes through a medium. Read on.

The Roads Traveled Most by Radiation

Prioritize...

After completing this section, you should be able to describe absorption, transmission, and scattering as they pertain to electromagnetic radiation passing through a medium.

Read...

Unlike the traveler in Robert Frost's poem, The Road Not Taken (opens in a new window), electromagnetic radiation doesn't have much of a choice whenever it encounters objects in its direct path. Indeed, the fate of electromagnetic radiation depends on wavelength and the physical composition of the atoms and molecules in the medium that it is passing through. It is impractical (and impossible) to sort through each atom and molecule in a given object in order to judge its potential effect on the radiation that strikes it ("incident" radiation), so we will consider chunks of matter as whole objects in order to describe their overall effect on incident radiation.

When radiation first encounters some medium (whether it be a collection of gases, a liquid, or a solid), only three things can happen to that radiation:

  • transmission -- the radiation passes through the medium unaffected
  • absorption -- the radiation "beam" gets extinguished within the medium
  • scattering -- the radiation interacts with the medium such that its direction of "travel" changes

In most cases, all three processes can and do occur to some degree. To help you visualize these potential outcomes, check out the brief video (1:59) below:

When radiation encounters some medium, three things can happen to that radiation. One possibility is that the radiation could pass right through the medium unaffected, which is called transmission. Now, 100 percent perfect transmission is pretty rare, except within the vacuum of space. Almost always, there’s at least a little energy that isn’t transmitted through unaffected. An example of a medium with a high transmission value is window glass. Visible light passes through a thin sheet of glass mostly undisturbed, which is why we can see objects clearly on the other side. We call such mediums “transparent” while mediums having low transmission values are called “opaque.” I should point out that the transmission properties of a medium depend on wavelength. An object that is transparent in visible wavelengths might be opaque at infrared wavelengths, for example.

The next possibility is called absorption. That’s when the radiation effectively gets extinguished within the medium. When absorption occurs, the radiation is taken up by the matter (typically by the electrons of the atoms) and converted to other forms of energy like heat energy. As with transmission, the amount of energy that an object absorbs depends on the wavelength of the radiation and the physical make-up of the object. For example, freshly fallen snow absorbs little direct sunlight, but snow readily absorbs infrared radiation.

The final possibility is called scattering. That’s when radiation interacts with matter in a way that changes its direction of travel. Scattering can occur in all directions, although some directions are preferred, depending on the size and composition of the particles involved in the scattering event. If the radiation encounters a scattering event and continues on in a forward direction, the event is called "forward-scattering." Likewise, objects can also back-scatter radiation, meaning that they redirect the radiation in all directions back toward the source.

Credit: Penn State
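
One compact way to summarize the video: at any given wavelength, the fractions of incident radiation that are transmitted (t), absorbed (a), and scattered or reflected back (r) must account for all of the energy (the symbols here are just convenient labels, not notation from the course text):

```latex
t_{\lambda} + a_{\lambda} + r_{\lambda} = 1
```

Window glass at visible wavelengths has t near 1, while a thick cloud deck has a large r and a small t -- a point we'll return to when interpreting satellite imagery.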

I should point out that I'll sometimes use the word "reflection" as a loose substitute for the "back-scattering" (scattering back toward the radiation source) described in the video, but there's a big difference between this loose use of "reflection" and the classic, pure interpretation of "reflection." Pure reflection means that the angle at which radiation strikes an object must equal the angle at which the radiation is redirected from the object (think about how a billiard ball bounces off a bumper on a pool table). Furthermore, in some rare cases, the scattered radiation may retain the exact same direction that it initially had before the scattering event. When this occurs, the scattered light is counted in the "transmission" category (because it seemingly emerged unchanged from the medium).

Now let's see these processes (particularly absorption and scattering) in action in the atmosphere. First, the atmosphere, like snow (as mentioned in the video), is a highly discriminating absorber (it only absorbs certain wavelengths of the electromagnetic spectrum). The plot of absorption spectra by various gases (below) indicates how efficiently certain gases and the atmosphere, taken as a whole, absorb various wavelengths of electromagnetic radiation. To interpret the graph, note the "0 to 1" scale on the left of the plot, indicating zero percent absorption and 100 percent absorption, respectively. At any specific wavelength, the upward reach of the color shading indicates the percentage of absorption by a particular gas (or the atmosphere, taken as a whole).

A chart of the absorption spectra of various gases in the atmosphere.

The absorption spectra of various gases in the atmosphere, and of the atmosphere as a whole. The upward reach of each color shading depicts the percentage of absorption by a particular gas (or the atmosphere as a whole).
Credit: David Babb

For example, focus your attention on the row for oxygen and ozone, labeled "O2 and O3." Note, to the left of this label, that nearly 100 percent of the radiation emitted at wavelengths ranging from 0.1 to about 0.3 microns is absorbed. Recall that these wavelengths correspond to potentially dangerous ultraviolet radiation emitted by the sun. Ozone, a gas composed of three oxygen atoms (O3), absorbs much of the incoming ultraviolet radiation. Most of this absorption takes place in the stratosphere, which is a layer that spans from 10 to 30 miles above the Earth's surface. Thank goodness for ozone in the stratosphere! Otherwise, cases of skin cancer and other afflictions associated with overexposure to the sun would likely be much more rampant in our society than they actually are.

Pocket laser with the beam visible because of dust in the air.

You can see this laser beam only because light is being scattered by small dust particles in the air. If no scattering were taking place, all of the light would continue on in its original direction (and would thus not reach the camera lens).

Scattering, on the other hand, makes things look the way they do. You can't see objects if visible light isn't scattered to your eyes. Check out the great example of scattering on the right. A laser produces a highly focused beam of light waves, all traveling in the same direction. However, since you can see the beam, you know that some of the light is being scattered out of the beam towards the camera lens. This scattering is likely produced by small particles of dust in the air.

I should point out that scattering doesn't have to be a one-time event. Often, radiation will enter an object and encounter many (hundreds or thousands) of scattering events before emerging. This is what happens to make clouds appear white on top and darker on the bottom (cue the obligatory storm photo (opens in a new window)). It's also what makes snow, salt, sugar, and milk appear white. Furthermore, multiple scattering increases the time that the radiation resides in the medium (as it bounces around, unable to escape). This longer residence time increases the chance that the radiation will also be absorbed by the medium. A great example is the blue hue that ice can take on. Water (even in frozen form) tends to absorb red light at a faster rate than blue light, so over time with multiple scattering events, more blue light is scattered to our eyes (see below)!

An ice cave in a glacier in which the ice is giving off a blue hue.

Ice cave in Glacier Gray, Torres del Paine National Park, Chilean Patagonia. Multiple scattering and selective absorption within the glacial ice causes the dramatic blue tint.

Now that we have covered the behavior of the spectrum of electromagnetic radiation and how it travels through space, we need to shift gears and focus on something we ultimately want to measure via remote sensing -- clouds. The detection of clouds by satellites plays a crucial role in weather forecasting. In the next section, we will discuss the four major classifications of clouds. By knowing the physical features of these clouds, you will be better prepared to identify specific types of clouds using satellite imagery. Read on.

Clouds from Bottom to Top

Prioritize...

At the completion of this section, you should be able to identify and describe the eleven major cloud types. They are: 3 high-level clouds (cirrus, cirrostratus, and cirrocumulus), 2 mid-level clouds (altostratus and altocumulus), 3 low-level clouds (stratus, stratocumulus, and nimbostratus), and 3 vertically developed clouds (fair-weather cumulus, cumulus congestus, and cumulonimbus).

Read...

Weather forecasters regularly look at clouds from above via satellite imagery, but before we interpret clouds on satellite images, we need to learn how to classify specific clouds by observing them from the bottom, as we see them from the ground.

From the perspective of an observer standing on the Earth's surface, clouds can be classified by their physical appearance. Accordingly, there are essentially three basic cloud types:

  • Cirrus, which is synonymous with a "streak cloud" (detached filaments of clouds that literally streak across the blue sky).
  • Stratus, which, derived from Latin, translates to a "layered cloud."
  • Cumulus, which means "heap cloud."

As you learned in a previous lesson, meteorologists further classify clouds according to the height of their bases above the earth's surface.

Four Major Cloud Classifications

A wispy high cloud

High clouds observed over the middle latitudes typically reside at altitudes near and above 20,000 feet. At such rarefied altitudes, high clouds are composed of ice crystals.

 
Middle level clouds that look like cotton balls.

Middle clouds reside at an average altitude of ~10,000 feet. Keep in mind that middle clouds can form several thousand feet above or below the 10,000-foot marker. Middle clouds are composed of water droplets and/or ice crystals.

 
A foggy, rainy day at a lake.

Low clouds can form anywhere from the ground to an altitude of approximately 6,000 feet. For the record, fog is simply a low cloud in contact with the earth's surface.

 
A developing thunderstorm cloud.

Clouds of vertical development cannot be classified as high, middle, or low because they typically occupy more than one of the above three altitude markers. For example, the base of a tall cumulonimbus cloud often forms below 6,000 feet and then builds upward to an altitude far above 20,000 feet.

Just by knowing the three basic cloud types (cirrus, stratus, cumulus) and the four classifications (high, middle, low, and clouds of vertical development), along with their corresponding prefixes and suffixes, we can name lots of different types of clouds.

  • High clouds can either be "plain" cirrus, or we can add the prefix "cirro" to a suffix that describes their appearance (cirrostratus for high-altitude, layered clouds; cirrocumulus for high-altitude, "heap" clouds).
  • Middle clouds carry the prefix "alto" and also a suffix that describes their appearance (altostratus for mid-level, layered clouds; altocumulus for mid-level, "heap" clouds).
  • Clouds of vertical development always include the word "cumulus" or the prefix "cumulo," but can have various suffixes or other descriptive modifiers (like "fair-weather cumulus").
  • The names of low clouds have more variation. Low clouds can be referred to as plain "stratus" (if they're smooth and layered) or "stratocumulus" if they have both layered and heap-like characteristics, for example. If low, layered clouds are precipitating, they're called nimbostratus. The prefix "nimbo" comes from "nimbus," which means that this low cloud produces precipitation (note that nimbus can also be used as a suffix, as in cumulonimbus when a cumulus cloud is producing precipitation).

Learning to identify and describe the major cloud types is an important practical skill for any weather forecaster (see the Key Skill and Quiz Yourself sections below). Once you've spent ample time with those tools and are accustomed to looking at clouds from the bottom side, you're ready to look at clouds from the top side and tackle the principles of interpreting clouds on satellite imagery.

Key Skill...

Learning to identify the major cloud types can be a bit daunting. However, with some practice, you'll get the hang of it. To get started, spend some quality time right now going through the following interactive cloud atlas. It has everything you ever wanted to know about the names and descriptions of the eleven major cloud types that you should be familiar with in this course. Move your mouse over each red pin to see an example photo and description of that particular cloud type.

Quiz Yourself...

Feeling confident in your cloud identification skills? Take this quiz to see how you do.

Explore Further...

If you want to explore cloud identification further (or just look at some pretty cloud pictures), check out these online cloud atlases. I should point out that these sites delve into the details of cloud naming, which you are not required to know. Also, while I have explored these sites and found them to be accurate, you may find slight discrepancies in descriptions, etc. In such cases, please defer to descriptions listed in the course text rather than on these sites.

Cloud Atlas hosted by Penn State (opens in a new window): This atlas was created from images in the Karlsruhe Wolkenatlas (opens in a new window) (used with permission from Bernhard Mühr).

UCAR - Cloud Classifications (opens in a new window): This is a fairly exhaustive site on cloud classification.

Observing Weather from Space

Prioritize...

At the end of this section, you should be able to distinguish between geostationary and polar-orbiting satellites. You should also be able to describe their differences and roles in observing the earth, and be able to identify a satellite image as being collected by a geostationary satellite or a polar-orbiting satellite.

Read...

Today, meteorologists have an ever-increasing number of sophisticated, computerized tools for weather analysis and forecasting. But, before 1960, meteorologists drew all their weather maps by hand and no useful computer models existed. Seems like the dark ages, right? Furthermore, before 1960, forecasters did not have weather satellites to afford them a birds-eye view of cloud patterns. The dark ages ended after NASA launched TIROS-1 on April 1, 1960.

An early view of the earth from space taken by the Tiros satellite (pictured at right).

(Left) The first televised image from space captured by the TIROS-1 satellite (pictured right) on April 1, 1960.
Credit: NASA

Though the unrefined, fuzzy appearance of this image may seem crude and almost prehistoric, it was an eye-opener for weather forecasters, paving the way for new discoveries in meteorology (not to mention improved forecasts). Today, satellite imagery with high spatial resolution (opens in a new window) allows meteorologists to see fine details in cloud structures. For example, check out this close-up loop of the eye of Hurricane Ian making landfall in Florida in 2022 (opens in a new window). We've come a long way, wouldn't you agree?

Two types of flagships exist in the select fleet of weather satellites that routinely beam back images of Earth and the atmosphere -- geostationary satellites and polar-orbiting satellites.

Geostationary Satellites

Artist's rendering of GOES-16 in orbit.

An artist's rendering of GOES-16 in orbit.
Credit: NASA

Geostationary satellites orbit approximately 35,785 kilometers (22,236 miles) above the equator, completing one orbit every 24 hours. Thus, their orbit is synchronized with the rotation of the Earth about its axis, essentially fixing their position above the same point on the equator (hence the name "geostationary"). In the United States, the National Oceanic and Atmospheric Administration's (NOAA) geostationary satellites go by the name of "GOES" (Geostationary Operational Environmental Satellite) followed by a number. To get an idea of what a geostationary satellite looks like, check out the artist's rendering of GOES-16 on the right.
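
That altitude isn't arbitrary: it's the one distance at which the orbital period matches the Earth's rotation. Here's a minimal sketch, using Kepler's third law with standard constants, that recovers it:

```python
import math

G = 6.674e-11        # gravitational constant (m^3 kg^-1 s^-2)
M_EARTH = 5.972e24   # mass of the Earth (kg)
R_EQUATOR = 6.378e6  # equatorial radius of the Earth (m)

T = 86164.0  # one sidereal day (s): one full rotation of the Earth

# Kepler's third law: r^3 = G * M * T^2 / (4 * pi^2)
r = (G * M_EARTH * T ** 2 / (4.0 * math.pi ** 2)) ** (1.0 / 3.0)
print((r - R_EQUATOR) / 1000.0)  # ~35,786 km above the equator
```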

Two operational geostationary satellites currently orbit over the equator at 75 and 135 degrees west longitude, and, respectively, go by the generic names "GOES-East" and "GOES-West." GOES-East is in a good spot to keenly observe Atlantic hurricanes as well as weather systems over the eastern half of the United States. GOES-West is in better position to observe the eastern Pacific and the western half of the United States. If you are interested in learning more about the current condition of any particular GOES satellite, you can check out the GOES Spacecraft Status (opens in a new window) page run by NOAA's Office of Satellite Operations.

From their extremely high vantage point in space, GOES-East and GOES-West can effectively scan about one-third of the Earth's surface. Their broad, fixed views of North America and adjacent oceans make our fleet of geostationary satellites very effective tools for operational weather forecasters, providing constant surveillance of atmospheric "triggers" that can spark thunderstorms, flash floods, snowstorms and hurricanes (among other things). Once threatening conditions develop, the broad, fixed view of geostationary satellites is especially handy because we can create loops of geostationary satellite imagery, which allow forecasters to monitor the movement of weather systems and other atmospheric features. For example, this loop of GOES satellite images (opens in a new window) from the afternoon of April 8, 2024 shows the movement of clouds across the United States. The dark spot that moves across the image is the shadow cast by a total solar eclipse (opens in a new window) (a rare feature to find on satellite imagery)!

Geostationary satellites are far from perfect, however. Because they're centered over the equator, they don't have a very good view of high latitudes. Clouds at high latitudes appear highly distorted, and poleward of approximately 70 degrees latitude, geostationary satellite imagery becomes essentially useless.

I don't want to leave you with the impression that the GOES program is unique, however. Other countries also own and operate geostationary weather satellites. For more on these satellite programs, check out the Explore Further section below.

Summary: Geostationary satellites provide fixed views of large areas of the earth's surface (a large portion of an entire hemisphere (opens in a new window), for example). The fact that their view is fixed over the same point on earth means that sequences of their images can be created to help forecasters track the movement and intensity of weather systems. The primary limitation of geostationary satellites is that they have a poor viewing angle for high latitudes and are essentially useless poleward of 70 degrees latitude.

Polar-Orbiting Satellites

Polar-orbiting satellites pick up the high-latitude slack left by geostationary satellites. In the figure below, note that the track of a polar orbiter runs nearly north-south above the earth and passes close to both poles, allowing these satellites to observe, for example, large polar storms (opens in a new window) and large Antarctic icebergs (opens in a new window). Polar-orbiting satellites orbit at an average altitude of 850 kilometers (about 500 miles), which is much, much lower than geostationary satellites.

Each polar orbiter has a track that is essentially fixed in space, and completes 14 orbits every day while Earth rotates beneath it. So, polar orbiters get a worldly view, but not all at once! Like making back-and-forth passes while mowing the lawn, these low-flying satellites scan the Earth in swaths (opens in a new window) roughly 2,500 to 3,000 kilometers wide, covering the entire earth twice every 24 hours.
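
You can check the "14 orbits every day" figure in the same way, by computing the orbital period at a typical 850-kilometer altitude (again, just a sketch with standard constants):

```python
import math

G = 6.674e-11       # gravitational constant (m^3 kg^-1 s^-2)
M_EARTH = 5.972e24  # mass of the Earth (kg)
R_EARTH = 6.371e6   # mean radius of the Earth (m)

r = R_EARTH + 850e3  # orbital radius at 850 km altitude (m)
T = 2.0 * math.pi * math.sqrt(r ** 3 / (G * M_EARTH))
print(T / 60.0)      # ~102 minutes per orbit
print(86400.0 / T)   # ~14 orbits per day
```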

A scaled drawing of earth, encircled by polar orbiting and geostationary satellites.

The orbits of geostationary and polar-orbiting satellites (drawn to scale).
Credit: David Babb

The appearance of a "lawn-mowing-like" swath against a data-void, dark background on a satellite image is a dead give-away that it came from a polar orbiter, as illustrated by this image from a polar-orbiter of Hurricane Michael in the Gulf of Mexico (opens in a new window) (credit: Johns Hopkins University (opens in a new window)) in early October, 2018. But, sometimes it's harder to tell whether an image came from a polar orbiter because some images are zoomed in enough that the swath can't be seen, like this image from a polar-orbiter of Hurricane Idalia in the Gulf of Mexico in late August, 2023 (opens in a new window). Polar orbiters are invaluable tools for tropical weather forecasters, providing a variety of specialized images to forecasters at the National Hurricane Center (opens in a new window) in Miami, Florida that they use to analyze storms during hurricane season.

NOAA operates polar-orbiting satellites through its Joint Polar Satellite System (JPSS). NOAA currently classifies the newest satellite as its "operational" polar orbiter, while slightly older satellites that continue to transmit data are classified as "secondary" or "backup" satellites. As a counterpart to the GOES satellites, the NOAA Office of Satellite Operations operates a JPSS Spacecraft Status (opens in a new window) page as well. NASA and the U.S. Department of Defense also operate many polar orbiters. All in all, thousands of polar-orbiting satellites are circling the earth in "low-earth orbit" sending back valuable data for everything from weather observation to communications applications to space-oriented research.

Summary: Polar-orbiting satellites orbit at a much lower altitude than geostationary satellites, and don't have a fixed view since the earth rotates beneath their paths. The benefit of polar-orbiters is that they can give us highly-detailed images, even at high latitudes. The main drawback is that they have a limited scanning width, and don't provide continuous coverage for any given area (like geostationary satellites do). A single image from a polar orbiter will often show a swath with sharply defined edges (opens in a new window) that mark the boundaries of what the satellite could see on a particular pass.

Data from satellites has truly revolutionized weather analysis and forecasting. Satellites can measure atmospheric temperatures, moisture, and winds, among other things. Roughly 80 percent of all data used to run computer forecast models comes from polar orbiting satellites alone, so satellites are a critical part of weather forecast operations around the globe! Now that you have some background about the different types of satellites providing crucial weather data, we'll turn our attention to interpreting basic types of satellite images.

Explore Further...

As I mentioned above, the GOES program is not unique, and other countries also own and operate geostationary weather satellites (check out this international perspective on geostationary weather satellites (opens in a new window)). But, geostationary satellites don't just cover weather. More than 600 geostationary satellites hover above the equator around the world! With the number of communications satellites increasing, the "geostationary parking lot" is getting pretty crowded. If you look at the time-lapse photograph below, which was taken by a telescope atop Kitt Peak in Arizona between 0230Z and 11Z on March 19, 2007 and covers just 9 percent of the geostationary orbit, you can see many bright dots, which are geostationary satellites. Keep in mind that hundreds of geostationary satellites have been launched since this time-lapse photo was taken, so "geostationary parking spots" are starting to come at a premium!

Star trails on a long exposure photograph. Geostationary satellites are seen as points rather than streaks.

A time lapse of a small portion of the geostationary orbit taken from atop Kitt Peak in Arizona from 0230Z to 11Z on March 19, 2007. The lines represent star trails, while the bright dots mark the positions of geostationary satellites.
Credit: Dave Dooling, National Solar Observatory

How do I know those dots are geostationary satellites? Well, when photographers take time-lapse images of the nighttime sky, the stars leave "star trails" (check out this time-lapse photograph above Mauna Kea (opens in a new window) in Hawaii and note the awesome star trails; by the way, moonlight illuminated the mountain and sky). Of course, the stars don't move. Rather, the earth rotates about its axis and thus the stars appear to move. Now look closely at the time-lapse of the nighttime sky over Mauna Kea. Note that you don't see the stars themselves, only their trails. In other words, you don't see stars as fixed dots because the Earth rotates on its axis during the period of the time-lapse photography.

That means, of course, that the bright, fixed dots in the midst of the belt of star trails are in geosynchronous orbit with the earth (they obviously didn't move during the time-lapse photography). I emphasize here that there's no way that the light reflected by the geostationary satellites would be sufficiently bright to see them clearly on just a single snapshot, but the long exposure allows them to stand out on this time-lapse photograph.

Visible Satellite Imagery

Prioritize...

At the completion of this section, you should be able to describe how a satellite constructs an image in the visible spectrum (describe what's being measured) and how to interpret visible satellite images. Specifically, you should also be able to describe when it is appropriate to use visible satellite imagery and when it is not, and discern the relative thickness of various cloud types. After completing the sections on infrared imagery, water vapor imagery, and radar imagery, you should also be able distinguish visible satellite imagery from these other types of images.

Read...

Perhaps you've heard a television weathercaster use the phrase "visible satellite image" before. Perhaps you also thought, "Of course it's visible if I can see it!" So, why make the distinction that a satellite image is "visible?" In short, visible satellite images make use of the visible portion of the electromagnetic spectrum. If you recall the absorptivity graphic (opens in a new window) that I introduced earlier, notice that from a little less than 0.4 microns to about 0.7 microns, there's very little absorption of radiation at these wavelengths by the atmosphere. In other words, the atmosphere transmits most of the sun's visible light all the way to the Earth's surface.

Along the way, of course, clouds can reflect (scatter) some of the visible light back toward space. Moreover, in cloudless regions, where transmitted sunlight reaches the Earth's surface, land, oceans, deserts, glaciers, etc. unequally reflect some of that visible light back toward space (with limited absorption along the way). You might say that visible light generally gets a free pass while it travels through the atmosphere.

An instrument on the satellite, called an imaging radiometer, measures the intensity (brightness) of the visible light scattered back to the satellite. I should note that, unlike our eyes, or even a standard camera, this radiometer is tuned to measure only very small wavelength intervals (called "bands"), so the instrument does not see all wavelengths of visible light. The shading of clouds, the Earth's surface (in cloudless areas) and other features, such as smoke from a large forest fire (opens in a new window), the plume of an erupting volcano (opens in a new window), or even chunks of ice floating on a lake (opens in a new window) can all be seen on a visible satellite image because of the sunlight they reflect.

What determines the brightness of the visible light reflected back to the satellite and thus the shading of objects on a visible satellite image? Well, to start with, we need to have some source of light. To see what I mean, check out this visible satellite loop of the United States (opens in a new window) spanning from roughly 10Z to 17Z on May 1, 2024. The United States is completely dark at the beginning because 10Z was still before sunrise, but gradually we start to see clouds appear on the image from east to west as the sun rose and the reflected sunlight reached the satellite. The bottom line is that standard visible satellite imagery is only useful during the local daytime because we are measuring the amount of sunlight being reflected from clouds and the surface. If there's no sunlight, there's no image.

Now, assuming that it's during the day, the brightness of the visible light reflected by an object back to the satellite largely depends on the object's albedo, which is simply the percentage of the light striking an object that gets reflected. Since the nature of Earth's surface varies from place to place (paved streets, forests, farm fields, water, etc.), the surface's albedo varies from place to place.

A visible satellite image of Pennsylvania and surrounding states.

A visible satellite image from GOES-East on a mostly clear October day. Note that bodies of water, which have a very low albedo (about 8 percent), appear darkest on the image, while the appearance of the land surface varies depending on its albedo (forests, for example, have a lower albedo than agricultural fields). Here's the full-sized annotated image (opens in a new window) for a closer look.
Credit: College of DuPage

For example, take a look at the visible satellite image showing Pennsylvania and surrounding states (above). For the full effect, I recommend opening the full-sized version of the image (opens in a new window) for a better look. This particular day was nearly cloudless over Pennsylvania, so it gives us a great opportunity to really see how albedo makes a difference in the appearance of an object on visible satellite imagery. The surface in Pennsylvania hardly looks uniform, and that's a result of differing albedos associated with different surfaces. For example, bare soil reflects back about 35 percent of the visible light that strikes it. Vegetation has an albedo around 15 percent. By the way, bodies of water, with a representative albedo of only 8 percent, typically appear darkest on visible satellite images. See how the labeled bodies of water all look darker than the land surfaces?

If you want another comparison point, check out the "true color" satellite view of Pennsylvania and surrounding states from Google (opens in a new window). Can you see how the heavily forested areas of northern Pennsylvania match up with the darker shaded areas I've highlighted above? Can you see how the largely agricultural valleys of southeastern Pennsylvania (with their higher albedo) appear a bit brighter on the image above? Of course, the brightest areas on the visible satellite image above correspond to clouds, which have a much higher albedo than the surface of the earth under most circumstances.
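If you like thinking about these numbers concretely, here's a minimal Python sketch (not anything a satellite actually runs) that maps the representative albedos quoted above to grayscale pixel values. The linear 0-255 mapping is my own simplifying assumption for illustration only.

```python
# Representative albedos quoted above (fraction of visible light reflected).
SURFACE_ALBEDO = {
    "water": 0.08,
    "vegetation": 0.15,
    "bare soil": 0.35,
}

def visible_shade(albedo):
    """Map an albedo to a 0-255 grayscale pixel value.
    Higher albedo -> more reflected sunlight -> brighter pixel."""
    return round(255 * albedo)

for surface, albedo in sorted(SURFACE_ALBEDO.items(), key=lambda kv: kv[1]):
    print(f"{surface:>10}: pixel value {visible_shade(albedo):3d}")
# Water plots darkest (~20) and bare soil brightest (~89); clouds, with
# even higher albedos, would be brighter still.
```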

But, many different types of clouds exist, and they all have varying albedos, too! To see what I mean, let's perform an experiment. First, start with a tank of water (upper left in the photograph below). Now add just a tablespoon of milk (upper right), which increases the albedo a bit. By adding the milk, some of the radiation passing front-to-back through the tank is scattered back toward the observer, and the water-milk mixture takes on a whitish appearance. In frames #3 and #4 (lower-left and lower-right, respectively), we've added more milk. Now we see that the tiny globules of milk fat further increase the albedo as more of the visible light is scattered back toward the observer, while the transmission of light through the water-milk mixture decreases (that's why the word "SURFACE" is obscured).

A 4-panel photographic image that shows the scattering effect that diluted milk can have.

A series of images demonstrating the effect of scattering particles on albedo. The experiment starts with a tank of pure water (image 1). Next, milk is added in increasing amounts. Notice that as milk is added, albedo increases as more light is reflected back to the observer (and less light is transmitted through the water-milk mixture).
Credit: David Babb

Some key observations that you should note from this experiment:

  • It didn't take many globules of milk fat (1 tablespoon of milk in a 10-gallon fish tank) to begin noticeably decreasing transmission and increasing albedo.
  • A medium can very quickly become "optically thick" -- that is, have nearly zero transmission and a high albedo (a large percentage of light is reflected back to the observer).
  • In frame #4, we had only added a total of three tablespoons of milk to the tank (so the tank was still mostly filled with water), yet the transmission of light through the tank was minimal and the albedo was fairly high. Even if we switched to a tank filled with pure milk, the albedo would only increase marginally (maybe another 20 or 30 percent).

This last point is true of clouds as well; once a cloud becomes "thick enough," additional growth will not change its albedo (and appearance on visible satellite imagery) appreciably. The bottom line is that thick clouds, like cumulonimbus (which are associated with showers and thunderstorms), are like tall glasses of milk in the sky; they contain lots of light-scattering water droplets and/or ice crystals. Meteorologists say that such clouds have a "high water (or ice) content" and can have albedos as high as 90 percent, which causes them to appear bright white on visible satellite imagery.

More subdued clouds, such as fog and stratus (opens in a new window), typically have a lower water content and, in the spirit of the glass of water with just a little milk, a lower albedo. Indeed, the albedo for thin (shallow) fog and stratus can be as low as 40 percent. So, as a general rule, fog and stratus have a duller white appearance compared to thicker, brighter cumulus and cumulonimbus clouds. Here's an example of valley fog (opens in a new window) over Pennsylvania and New York for reference. Wispy, thin cirrus clouds have the lowest albedo (low ice content), averaging about 30 percent. They appear almost grayish compared to the bright white of the thick cumulonimbus clouds outlined on the satellite image below.

A visible satellite image highlighting how cirrus can appear.

A visible satellite image showing a line of cumulonimbus (squall line) with cirrus blowing east off the tops of the storms.
Credit: NOAA

As a general caveat to our discussion about determining shading on visible satellite images, I point out that brightness also depends on sun angle. For example, the brightness of the visible light reflected back to the satellite near sunset is limited, given the low sun angle and the relatively high position of the satellite. To see what I mean, check out this loop of visible satellite images (opens in a new window) showing severe thunderstorms, which erupted over Oklahoma and Kansas. The tall, thick cumulonimbus clouds that developed appear bright white initially, but as sunset approaches, the appearance of the clouds darkens. If you look closely at the images later in the loop, you'll be able to see tall cumulonimbus clouds casting shadows to the east. Pretty cool, eh?

One more quick point about interpreting visible images. Clouds aren't the only objects that can have very high albedos; therefore, they're not the only objects that can appear whitish. Indeed, cloudless, snow-covered regions can have albedos as high as 80 percent, and they also appear bright white on visible imagery. To see how to tell the difference between clouds and snow cover on standard visible imagery, check out the Case Study below, after reviewing the following summary highlighting the important characteristics of visible satellite imagery:

Visible satellite imagery...

  • is based on the albedo of objects (the fraction of incoming sunlight that is reflected to the satellite).
  • can tell you about the thickness of clouds (thicker clouds have higher albedos and appear brighter than thinner clouds, which have lower albedos), but only general inferences can sometimes be made about a cloud's altitude.
  • can be used to distinguish between snow cover and clouds, given that surface features such as lakes and rivers can be observed (see Case Study below).
  • is not able to detect clouds (or anything else) during the satellite's local night (visible imagery requires sunlight).
  • is not useful for determining whether precipitation is present under the observed clouds.

Case Study...

Snow Cover or Clouds?

Since snow cover and clouds can have very similar albedos, distinguishing between them on visible satellite imagery can sometimes be tricky. Check out the short video below (3:04), which demonstrates some ways to tell the difference.

PRESENTER: Both clouds and snow cover have a high albedo, and can appear in similar shades of white on visible satellite imagery, so let’s go over some ways to distinguish between the two. For starters, regions of snow cover often reveal details of the local terrain, which appear somewhat darker.

On this visible satellite image, we can see this swath of white shading from Ohio through northern Pennsylvania and into New York and New England to the north of this line, but the fact that we can pick out some surface features indicates that this is snow cover, not cloud cover. We can see the unfrozen Finger Lakes in New York, which have a much lower albedo since snow did not accumulate on the water. Lakes Erie and Ontario were largely unfrozen, too, and that gives a nice contrast between the low albedo of the water, which appears dark, next to the higher albedo of the snow cover on the ground, which appears brighter.

We can also pick out heavily forested regions because deciduous and coniferous forests also appear dark on visible imagery. Regions with dense forests mask the high albedo of the underlying snowpack because trees often lose the snow that accumulates on their limbs fairly quickly, so the satellite sees the canopy of trees instead of the snowpack on the ground. The heavily forested Adirondack Mountains in New York really stick out, as do some forested areas in northern Pennsylvania. Farther to the west into northeastern Ohio, the more agricultural landscape appears brighter because there are fewer trees and the satellite sees the high-albedo snowpack better.

Of course, if you have a loop of visible satellite images, distinguishing snow cover from clouds is even easier because snow cover doesn’t move, but clouds do. If we look at this loop which spans from about 14Z to 1630Z, you can see clouds streaming over Ohio and Michigan into western Pennsylvania and New York. The leading edge of this cloud cover looks pretty wispy and not very bright, and we can still make out some of the snow cover beneath it, suggesting that these are thin cirrus clouds. If you look closely, you can even see some linear features within the cirrus, indicative of airplane contrails. The clouds entering the left side at the end of the loop into northwest Ohio appear brighter and have a higher albedo, indicating that they are thicker than the cirrus streaming ahead of them.

Visible satellite imagery is a great tool for discerning cloud thickness, and identifying areas of snow cover when clouds aren’t too prevalent. I hope this video helps you with your interpretations of visible satellite imagery.

Ultimately, by carefully studying the visual cues of terrain features or watching the movement of clouds on a loop, you can usually successfully discern clouds from snow cover on visible imagery. But, by utilizing more wavelengths of the electromagnetic spectrum, we can really change the "look" of clouds and snow cover. For more details, along with a list of useful resources for accessing satellite images, check out the Explore Further section below.

Explore Further...

Key Data Resources

Studying satellite images should be an integral part of any forecaster's daily routine, so if you're interested in starting to explore satellite images online, I recommend the resources below. Just keep in mind that you'll encounter a lot of different types of satellite images on these pages. We'll learn about some of them soon. Others are beyond the scope of this course, but you're welcome to investigate on your own!

Snow Cover and Clouds on Multi-Channel Imagery

When information collected at multiple wavelengths of the electromagnetic spectrum is combined into a single image (a "multi-channel" or "multi-spectral" approach), forecasters can sometimes gain more insight than they can by looking at a satellite image created using a single wavelength. The short video below (2:20) shows an example of using three wavelengths to more easily discern clouds from snow cover. If you're interested in learning more about this satellite product, check out this "Quick Guide (opens in a new window)" detailing how it's created and how to interpret it.

PRESENTER: Discerning between high-albedo surfaces like clouds and snow cover can sometimes be tricky with standard visible imagery. We’re left to track the movement of clouds on loops or identify snow cover by picking out surface features with lower albedo like unfrozen bodies of water or heavily forested areas.

Now let’s take a different, more colorful look at this loop. This loop was created by expanding beyond just the visible portion of the electromagnetic spectrum. This particular satellite product is created by combining data collected at 3 different wavelengths – one in the visible portion of the spectrum, one just outside the visible portion in the near-infrared, and one in the infrared. By assigning different colors to the information gathered at each wavelength, snow cover and clouds appear differently, which makes it easier to discern between them.

On this particular image, the visible channel is detecting the albedo of various objects, but instead of white, it’s displayed in a green shading. The near-infrared channel is shaded blue and is useful for distinguishing clouds composed mainly of liquid drops from those composed of ice crystals. Finally, the infrared channel is shaded red, and relates to temperature of the object being detected.

When combining all of this information into one image, areas of snow coverage show up in green, while clouds tend to show up in various other shades, depending on how cold their tops are and whether they’re composed mainly of liquid or ice. These cirrus clouds advancing into Pennsylvania and West Virginia from the west appear sort of pinkish because they’re very high and cold, and are composed of ice crystals. Meanwhile, most of these clouds out over the Atlantic are cyan colored because they are lower and composed of liquid drops.

While exact shadings can vary based on several factors, using multiple wavelengths can give us more insights than just using one channel, and this type of imagery has a number of applications in addition to just distinguishing between snow cover and clouds. It can be useful for studying growing cumulus clouds as they become increasingly composed of ice, and can be used to track heavy snow squalls in areas that have poor radar coverage, among other things.


Infrared Satellite Imagery

Infrared Satellite Imagery

Prioritize...

After reading this section, you should be able to describe what is displayed on infrared satellite imagery, and describe the connection between cloud-top temperature retrieved by satellite and cloud-top height. You should also be able to discuss the key assumption about vertical temperature variation in the atmosphere that meteorologists make when interpreting infrared imagery. Finally, it is important that you be able to differentiate an IR image from visible, water vapor, and radar imagery. This skill involves knowing what clues distinguish one type of imagery from another.

Read...

Visible satellite imagery is of great use to meteorologists, and for the most part, its interpretation is fairly intuitive. After all, the interpretation of visible imagery somewhat mimics what human eyes would see if they had a personal view of the earth from space. But, visible satellite imagery also has its limitations: It's not very useful at night, and it only tells us about how thick (or thin) clouds are.

By limiting our "vision" only to the visible part of the spectrum, we diminish our ability to describe the atmosphere accurately. Consider the images below. The image on the left shows a photo (which uses the visible portion of the spectrum) of a man holding a black plastic trash bag. On the right is an infrared (IR) image of that same man. Notice that switching to infrared radiation gives us more information (we can see his hands) than we had just using visible light. Furthermore, the fact that the shading in the infrared image is very different from the visible image suggests that perhaps we can gain different information from this new "look."

Two photos of a man, one using visible light, and one using infrared emissions.

Looking at the same image in both the visible and infrared portions of the electromagnetic spectrum provides insights that a single image cannot. Likewise with remote sensing of the atmosphere. By gathering data at multiple wavelengths, we gain a more complete picture of the state of the atmosphere.
Credit: NASA/JPL-Caltech/R. Hurt (SSC)

Before we delve into what we can learn from infrared satellite imagery, we need to discuss what an infrared satellite image is actually displaying. Just like visible images, infrared images are captured by a radiometer tuned to a specific wavelength. Returning to our atmospheric absorption chart (opens in a new window), we see that between roughly 10 microns and 13 microns, there's very little absorption of infrared radiation by the atmosphere. In other words, infrared radiation at these wavelengths emitted by the earth's surface, or by other objects like clouds, gets transmitted to the satellite with very little absorption along the way.

You may recall from our previous lesson on radiation that the amount of radiation an object emits is tied to its temperature: Warmer objects emit more radiation than colder objects. So, using the mathematics behind the laws of radiation (namely Kirchhoff's Law and Planck's Law), computers can convert the amount of infrared radiation received by the satellite to a temperature (formally called a "brightness temperature," even though it has nothing to do with how bright an object looks to human eyes). Finally, these temperatures are converted to a shade of gray or white (or a color, as you're about to see) to create an infrared satellite image. Conventionally, lower temperatures (colder objects) are represented by brighter shades of gray and white, while higher temperatures (warmer objects) are represented by darker shades of gray.
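To make that conversion concrete, here's a minimal Python sketch of the idea: invert Planck's law to turn a measured radiance into a brightness temperature, then map that temperature to a shade of gray using the colder-equals-brighter convention just described. The single-wavelength blackbody inversion and the 180-310 K display range are simplifying assumptions on my part; operational processing accounts for the radiometer's actual channel response.

```python
import math

H = 6.626e-34   # Planck constant (J s)
C = 2.998e8     # speed of light (m/s)
K = 1.381e-23   # Boltzmann constant (J/K)

def planck_radiance(temp_k, wavelength_m):
    """Planck's law: blackbody spectral radiance (W m^-2 sr^-1 m^-1)."""
    return (2 * H * C**2 / wavelength_m**5) / math.expm1(H * C / (wavelength_m * K * temp_k))

def brightness_temperature(radiance, wavelength_m):
    """Invert Planck's law: radiance at one wavelength -> temperature (K)."""
    return (H * C / (wavelength_m * K)) / math.log(1 + 2 * H * C**2 / (wavelength_m**5 * radiance))

def ir_gray_shade(temp_k, t_min=180.0, t_max=310.0):
    """Conventional IR display: cold tops bright (255), warm ground dark (0).
    The 180-310 K range is an assumed display scale, not a standard."""
    t = min(max(temp_k, t_min), t_max)
    return round(255 * (t_max - t) / (t_max - t_min))

# Round trip at 10.7 microns: a 210 K cumulonimbus top vs. a 300 K desert floor
wl = 10.7e-6
for temp in (210.0, 300.0):
    tb = brightness_temperature(planck_radiance(temp, wl), wl)
    print(f"{temp:.0f} K -> brightness temp {tb:.1f} K -> shade {ir_gray_shade(tb)}")
```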

One challenge of working with infrared images is that they can "look" very different, even if they're displaying the exact same data. Some infrared images use grayscale so that they resemble visible images (like the first example in the slideshow below), while others include all the colors of the rainbow! Infrared images that contain different color schemes are usually called enhanced infrared images, not because they are "better," but because the color scheme highlights a particular feature on the image (usually very low temperatures). Click through the slideshow below to see a few examples. All four images in the slideshow display the exact same data; there's really no fundamental difference between a "regular" (grayscale) infrared image and an enhanced infrared image even though different color schemes change the look of the image. The key with any IR image is to locate the temperature-color scale (opens in a new window) (usually along the top, side, or bottom of the image) and match the shading to whatever feature you're looking at.

Four corresponding infrared satellite images with differing color schemes. The "traditional" infrared image is shown first. Toggle through the other images to see various "enhanced" infrared images which contain colors that mark certain key temperature ranges (in this case very low temperatures).
Credit: University of Wisconsin / SSEC

So, we know that an infrared radiometer aboard a satellite measures the intensity of radiation and converts it to a temperature, but what temperature are we measuring? Well, because atmospheric gases don't absorb much radiation between about 10 microns and 13 microns, infrared radiation at these wavelengths mostly gets a "free pass" through the clear air. This means that for a cloudless sky, we are simply seeing the temperature of the earth's surface. To see what I mean, check out this loop of infrared images of the Sahara Desert (opens in a new window) in northern Africa. Note the very dramatic changes in ground temperatures from night (light gray ground) to day (black ground) and back to night again. Dramatic diurnal changes in ground temperatures (opens in a new window) are common over deserts, where the broiling sun bakes the earth's surface by day, but the desert floor cools off rapidly after sunset.

Of course, sometimes clouds block the satellite's view of the surface; so what's being displayed in cloudy areas? Well, while atmospheric gases absorb very little infrared radiation at these wavelengths (and thus emit very little by Kirchhoff's Law), that's not the case for liquid water and ice, which emit very efficiently at these wavelengths. Therefore, any clouds that are in the view of the satellite will be emitting infrared radiation consistent with their temperatures. Furthermore, infrared radiation emitted by the earth's surface is completely absorbed by the clouds above it (opens in a new window). So, even though there is plenty of IR radiation coming from below the cloud and even from within the cloud itself, the only radiation that reaches the satellite is from the cloud top. Therefore, IR imagery is the display of either cloud-top temperatures or the Earth's surface temperature (if no clouds are present).

A lush field with a snow-capped mountain in the background.

The backdrop of snow-capped Mauna Kea (which means "White Mountain" in the Hawaiian language) against the lush, grazing grass removes any doubt about the validity of the observation that temperature usually decreases with increasing altitude.
Credit: Karyl-Ann Ah Hee

So, infrared imagery can tell us the temperature of the cloud tops, but how is that useful? Well, if we make the simple assumption that temperature decreases with increasing height in the lower atmosphere (that is, the troposphere), then we can equate cloud-top temperatures to cloud-top heights. In other words, clouds with very cold tops are high clouds (for example: cirrostratus, cirrocumulus, cumulonimbus), while clouds with warmer tops (such as stratus, stratocumulus, or cumulus) have tops that reside at low altitudes.
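As a rough illustration of that cold-tops-equals-high-tops reasoning, here's a short Python sketch that converts a cloud-top temperature to an approximate height using the standard-atmosphere average lapse rate of about 6.5 degrees Celsius per kilometer. Real soundings vary, so treat this as a back-of-the-envelope estimate, not how cloud-top heights are operationally retrieved.

```python
def cloud_top_height_km(cloud_top_temp_c, surface_temp_c, lapse_rate=6.5):
    """Estimate cloud-top height (km) assuming temperature decreases
    linearly with height at `lapse_rate` degrees C per km."""
    return (surface_temp_c - cloud_top_temp_c) / lapse_rate

# A -55 C cloud top over a 25 C surface: roughly 12 km (near 40,000 feet),
# consistent with a cumulonimbus top; a +5 C top implies a low cloud near 3 km.
print(cloud_top_height_km(-55, 25))  # ~12.3
print(cloud_top_height_km(5, 25))    # ~3.1
```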

Given that infrared imagery can tell us about the altitude of cloud tops, and visible imagery can tell us about the thickness of clouds, meteorologists use both types of images in tandem. Using them together makes for a powerful combination that helps to specifically identify types of clouds. Let's apply this quick summary to a real case so I can drive home this point using the short video below (2:39).

PRESENTER: Let’s use these side-by-side visible and infrared images to see how weather forecasters use both types of images to diagnose cloud types. Even though these images look pretty similar at first glance, they’re displaying very different things. Visible satellite imagery is most like what we see with our eyes. It’s based on the amount of visible light that gets reflected back to the satellite. But, it’s critical to realize that infrared imagery is different. It’s showing us temperature, either of cloud tops or the earth’s surface. Note that even though no temperature scale is shown on the infrared image, brighter shades of gray and white correspond to lower temperatures, as is typically the case.

Let’s start by looking at Point A, which is located in the line of bright white clouds extending from the Outer Banks of North Carolina down into Florida. Their brightness on visible imagery indicates that these are thick clouds. These clouds also appear bright on infrared imagery, so they have cold tops, indicating that the tops are high in the troposphere. Thus, given that these clouds are thick and have cold tops, we can assume that they are cumulonimbus, which can have tops reaching altitudes upwards of 60,000 feet.

Now let’s look at Point B, located in the area of "feathery" clouds over the Atlantic. Obviously, these feathery clouds are not as bright as the area of cumulonimbus on visible imagery, which means the clouds at Point B are much thinner. On the infrared image, these thin clouds appear bright white, meaning that they have cold tops, which are high in the troposphere. Therefore, they must be cirrus clouds, which are high and thin. I should add the caveat that sometimes when clouds have very thin spots, infrared radiation from the earth's surface can leak through holes in the clouds and reach the satellite. That bit of extra radiation from the warm earth can make the tops of very thin clouds appear a little warmer and lower than they really are.

Finally, let’s turn our attention to Point C, which is located in the region of clouds over the Great Lakes and upper Ohio Valley. The darker grayish appearance on infrared imagery tells us that they're low clouds with warm tops. These clouds are fairly bright on the visible image, meaning that they must be moderately thick. Given the somewhat "cellular" nature and breaks in between blobs of clouds, these are likely stratocumulus clouds, although farther north in the Great Lakes there's likely a more solid deck of stratus.

The lesson learned here is that both visible and infrared imagery can be used together to identify cloud types during the daytime.

While both visible and infrared imagery can be used together to identify cloud types during the daytime, at night, routine visible imagery is not feasible, so weather forecasters must rely almost exclusively on infrared imagery. Though infrared imagery is indispensable at night, it has some drawbacks. Detecting nighttime low clouds and fog can be next to impossible because the radiating temperatures of the tops of low clouds and fog are often nearly the same as those of the nearby ground where stratus clouds haven't formed.

The Challenges of Infrared Images

To learn more about the shortcomings of IR images at night and to review what you've already learned in this section, check out this short video (2:22) showing an infrared satellite simulator (opens in a new window) (video transcript (opens in a new window)). As the video demonstrates, in cases where our assumption about temperatures decreasing with increasing height breaks down, the appearance of infrared images might not be what we expect. By the way, I encourage you to give the infrared imagery simulator (opens in a new window) a try for yourself. I suggest trying a few different hypothetical situations, as in the video, to see how they might look on infrared imagery and which factors can affect the appearance of infrared satellite images.

One of the scenarios shown in the video is something that you might encounter at night or early in the morning: The ground in cloud-free areas can sometimes actually be colder than the tops of nearby low clouds, which can cause IR images to look a bit strange. Take a look at the image below, collected at 1315Z on a February morning. Keep in mind that 1315Z is 7:15 AM Central Time in February (right around sunrise). Focus your attention on the slightly darker patch that's circled. Given that it's darker (and warmer), we must be looking at bare ground, right? Now toggle the slideshow to the visible image from about one hour later (when there was enough sunlight for a visible image).

An infrared satellite image collected at 1315Z on a late February day. The dark patch over northern Texas and Oklahoma (circled on the IR image) represents low clouds and fog, as is evident from the visible image from one hour later (toggle the slideshow to see the visible image). The surrounding lighter areas on the infrared image are characteristic of ground which has cooled to below the temperature of the low cloud tops.
Credit: NCAR

The visible image shows a bank of low clouds and fog where the darker shading was located on the infrared image. So, why did those low clouds and fog appear darker than their surroundings on the infrared image? Their tops were actually warmer than the surrounding bare ground in areas with clear skies. The map of regional station models from 1343Z (opens in a new window) shows that it was very chilly in the area of the Texas and Oklahoma panhandles where skies were clear. In other words, this situation violated our assumption that temperatures decrease with increasing height in the troposphere. We'll explore the reasons why these exceptions exist later in the course, but ground temperatures overnight in the cold season are often colder than the overlying air. The infrared image was taken at 1315Z (right around sunrise), which is near the time when ground temperatures are often at their lowest (and when the surrounding ground is most likely to be colder than nearby low cloud tops).

On the other hand, it can also be easy to assume that colors equating to low temperatures must mean we're looking at high, cold cloud tops. While that's usually the case, take a look at this enhanced infrared image from 13Z on December 23, 2022 (opens in a new window). The entire northern United States is awash in colors indicating temperatures of -20 degrees Celsius (-4 degrees Fahrenheit) or lower. So, do all the colors represent high, cold cloud tops? Nope! You're looking at very cold ground in much of the north-central U.S. and into the Midwest. The clue that the colored area isn't all clouds is that we can see surface features (opens in a new window) -- the unfrozen Missouri and Illinois Rivers appear warmer than their surroundings, as do several cities, such as Madison, Wisconsin. An outbreak of frigid air caused the ground to be so cold that it met the threshold to be colorized on this particular image!

The bottom line here is that you have to be careful when examining IR imagery, especially in cases where you're dealing with low clouds and/or the ground is very cold. While the assumption that temperatures decrease with increasing height in the troposphere is usually correct, exceptions do exist! Just remember that you are looking at temperatures and that lighter gray or coloring doesn't always mean cloudy skies. There are methods for detecting low clouds, which involve subtracting data collected at different IR wavelengths to extract only the low cloud field (if you're interested in seeing an example, check out the Explore Further section below).

This concludes our discussion of infrared satellite imagery. Now it's time to tackle water vapor imagery. But first, review the key points from this section.

Infrared satellite imagery...

  • is based on the fact that measuring an object's infrared emission tells you something about its temperature.
  • displays the temperature of either cloud tops or the earth's surface (if the sky is clear).
  • can be combined with the assumption that temperature decreases with increasing height to allow cloud-top heights to be determined. Lower temperatures typically mean higher cloud tops.
  • is not able to give any direct indication of cloud thickness or the presence of precipitation (although inferences can be made in some cases).
  • should not be confused with radar imagery. Inexperienced forecasters sometimes confuse enhanced infrared satellite images (opens in a new window) with similarly colored radar images (opens in a new window). If you are uncertain, look at the color key (an infrared image will always have units of temperature).

Explore Further...

As you learned in this section, one of infrared imagery's main advantages is that it's useful at night, but one of the challenges of interpreting IR images at night is that the tops of low clouds or fog can sometimes have temperatures similar to those of the earth's surface in surrounding areas where it's not cloudy. In these situations, it can be difficult or impossible to pick out the areas of low clouds or fog with conventional infrared imagery, but subtracting data at different infrared wavelengths can help us with this problem. For an example, check out the short video below (2:30). If you're interested in learning more about the satellite product featured in this video, called the "Nighttime Microphysics RGB," check out this quick guide (opens in a new window).

PRESENTER: Detection of low clouds and fog using infrared imagery can sometimes be tricky at night and early in the morning because one of the main assumptions that forecasters use when interpreting infrared images – that temperatures decrease with increasing height – isn’t always true.

Take this enhanced infrared image as an example. Assuming that temperatures decrease with increasing height might lead us to believe that this dark area has clear skies, meaning that the satellite is seeing emissions from the relatively warm ground, while the lighter shaded areas, which are colder, represent cloud cover.

But, that’s not the case at all. The brighter gray shaded areas actually have clear skies, and they appear colder on this enhanced infrared image because the ground is colder than the tops of the low clouds and fog in this area. For the record, these very brightly colored areas actually do represent very cold cloud tops which are high in the troposphere.

Difficulty in discerning between low clouds or fog and clear skies on enhanced infrared imagery at night or early in the morning isn’t all that uncommon because the tops of low clouds can be warmer or have similar temperatures to the ground in surrounding areas with clear skies.

But, using multiple wavelengths of the electromagnetic spectrum gives forecasters another tool for more easily identifying low clouds or fog at night. This image was created by using multiple wavelengths from the infrared portion of the electromagnetic spectrum, differencing their contributions in order to better identify cloud thickness, composition, and temperature, and then applying different colors. Using this approach causes low clouds and fog to appear much more intuitively – we can see the area of low clouds across southeast Texas over into Louisiana and Arkansas in this whitish tan shading. The really high clouds to the northwest here now appear very dark, while the slice of cold ground in between appears pink.

Finally, once the sun rose on this particular day, traditional visible imagery confirmed our interpretation of the multi-channel approach – with a thick area of low clouds and fog, surrounded by clear skies. So, the multi-channel approach at night really made the interpretation of low clouds and fog much more intuitive compared to traditional infrared imagery.


Water Vapor Imagery

Water Vapor Imagery

Prioritize...

Water vapor imagery can be a challenging topic! At the completion of this section, you should be able to...

  • describe what is displayed on water vapor satellite imagery and correctly interpret water vapor images.
  • explain the difference between using wavelengths between roughly 6 and 7 microns versus wavelengths between roughly 10 and 13 microns.
  • explain what is meant by the term "effective layer" and discuss the implications of a warm versus cold effective layer.
  • explain what information is not obtainable from a water vapor image and what features are almost never observed on such images.

As with the other sections on satellite imagery, it is important that you be able to differentiate a water vapor image from visible, traditional IR, and radar imagery. You should be able to point to certain clues that tell you that you are looking at a water vapor image and not one of the other types.

Read...

Our look at visible and infrared imagery has hopefully shown you that using a variety of wavelengths in remote sensing is helpful because this approach gives us a more complete picture of the state of the atmosphere. Meteorologists can use visible and infrared imagery to look at the structure and movement of clouds because these types of images are created using wavelengths at which the atmosphere absorbs very little radiation (so radiation reflected or emitted from clouds passes through the clear air to the satellite without much absorption). Now, what if we took the opposite approach? What if we looked at a portion of the infrared spectrum where atmospheric gases (namely water vapor) absorbed nearly all of the terrestrial radiation? Water vapor imagery uses this exact approach.

In case you didn't catch it in the paragraph above, let me be clear: Water vapor imagery is another form of infrared imagery, but instead of using wavelengths that pass through the atmosphere with little absorption (like traditional infrared imagery, which utilizes wavelengths between roughly 10 and 13 microns), water vapor imagery makes use of slightly shorter wavelengths between about 6 and 7 microns. As you can tell from our familiar atmospheric absorption chart (opens in a new window), these wavelengths are mostly absorbed by the atmosphere, and by water vapor in particular. Therefore, water vapor strongly emits at these wavelengths as well (according to Kirchhoff's Law). Thus, even though water vapor is an invisible gas at visible wavelengths (our eyes can't see it) and at longer infrared wavelengths, the fact that it emits so readily between roughly 6 and 7 microns means the radiometer aboard the satellite can "see" it.

This fact makes the interpretation of water vapor imagery different from that of traditional infrared imagery (which is mainly used to identify and track clouds). First, unlike clouds, water vapor is everywhere; therefore, you will very rarely see the surface of the earth in a water vapor image (except perhaps during a very dry, very cold Arctic outbreak). Second, water vapor doesn't often have a hard upper boundary (like cloud tops). Water vapor is most highly concentrated in the lower atmosphere (due to gravity and proximity to source regions like large bodies of water), but the concentration tapers off at higher altitudes.

The fact that water vapor readily absorbs radiation between roughly 6 and 7 microns also raises an interesting question: Just where does the radiation that ultimately reaches the satellite originate from? The answer to that question is the effective layer, which is the highest altitude where there's appreciable water vapor. Above the effective layer, there is not enough water vapor to absorb the radiation emitted from below, nor is there enough emission of infrared radiation to be detected by the satellite. Any radiation emitted below the effective layer is simply absorbed by the water vapor above it.

In our previous discussion of traditional infrared imagery, I'm not sure if you realized that the radiation detected by the satellite only came from one distinct level in the atmosphere at a given point. If the column was clear, then the surface was detected; however, if the column contained clouds, then only the top-most layer of clouds was observed. The surfaces that emit the radiation that the satellite "sees" (highest cloud tops or the ground in the case of traditional IR imagery) are the "effective layers." A universal property of an effective layer is that only emissions from this layer are observed by the satellite. For a visual, consider emissions at a representative wavelength useful for traditional infrared imagery (10.7 microns, for example) from a cloudy atmospheric column (toward the left on the schematic below).

Schematic comparing traditional IR imagery to water vapor imagery.

At traditional infrared wavelengths (like 10.7 microns), the satellite either sees radiation from the ground or the tops of clouds (left). The level from which the satellite observation is derived is called the effective layer. For water vapor imagery (right), the effective layer is defined as the highest level of appreciable water vapor whose radiation can be detected by the satellite. As with traditional IR imagery, all radiation emitted below the effective layer is absorbed and does not reach the satellite.
Credit: David Babb

In the column with clouds, radiation emitted from the top of the cloud reaches the satellite because no appreciable liquid water or ice exists above the cloud, giving the radiation a "free pass" to the satellite. Below the observed cloud layer (that is, the effective layer), any emissions from liquid water and ice are absorbed by the cloud layer that lies above them. Of course, if the air column is free of clouds, then the ground is the effective layer at longer infrared wavelengths, because the emissions that the satellite radiometer sees are coming from the ground (column farthest to the left in the graphic above).

Now let's carry this idea over to water vapor imagery (refer to the right portion of the above schematic). At the wavelengths used for water vapor imagery (between roughly 6 and 7 microns), water vapor very effectively absorbs and emits radiation. Another way to think about it is that at a wavelength like 6.7 microns (the sample wavelength used in the schematic), water vapor radiates just like liquid water and ice do at 10.7 microns. So, water vapor is an invisible gas at visible wavelengths and longer infrared wavelengths, but it "glows" at wavelengths around 6 to 7 microns.

The bottom line is that the effective layer is the source region for the radiation detected by the satellite. It's the highest layer of appreciable water vapor; above the effective layer, there is not enough water vapor to generate a signal the satellite can observe. And, as with clouds in the traditional IR example, any radiation emitted below the effective layer is simply absorbed by the water vapor above it. Therefore, the satellite measures the radiation coming only from the effective layer, and as with traditional infrared imagery, this radiation intensity is converted to a temperature. In other words, water vapor imagery displays the temperature of the effective layer of water vapor, although not all images you'll find online will contain a specific color temperature scale. Commonly, water vapor imagery uses shades of gray, with warmer (lower) effective layers shown as dark and colder (higher) effective layers shown in white. Many sites add color enhancements to identify key temperatures, as with traditional infrared imagery, but color schemes vary from website to website.
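If it helps to see the effective-layer idea mechanically, here's a toy Python sketch: scan down a model column from the top, accumulating water-vapor optical depth, and treat the level where the accumulated depth reaches about one as the effective layer. The profile numbers and the threshold are entirely hypothetical; they exist only to illustrate the logic.

```python
# Hypothetical column, listed top -> surface: (height in km, layer optical depth)
COLUMN = [(14, 0.02), (12, 0.08), (10, 0.3), (8, 0.7), (6, 1.5), (4, 3.0), (2, 6.0)]

def effective_layer_km(column, threshold=1.0):
    """Return the height where the water vapor above has become opaque
    enough that emissions from below can no longer reach the satellite."""
    accumulated = 0.0
    for height_km, tau in column:
        accumulated += tau
        if accumulated >= threshold:
            return height_km
    return 0.0  # column so dry the satellite would see the surface

print(effective_layer_km(COLUMN))  # -> 8 (in this made-up profile)
```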

You may hear on television or see other online explanations that suggest water vapor imagery measures the water vapor content of the atmosphere, but that's not really true. We can infer certain things about the moisture profile of the atmosphere based on the temperature of the effective layer, but the satellite isn't actually measuring the amount of water vapor present in order to create water vapor images, and it tells us nothing about water vapor below the effective layer. So, what can we infer by knowing the temperature of the effective layer? Check out the short video (2:43) below:

PRESENTER: We have here a color-enhanced water vapor image, and we’re going to see how to interpret this image. First, let’s get our bearings with the color scale along the bottom. Lower temperatures are color coded in pinks, blues, greens, and purples. Meanwhile, higher temperatures are either in shades of gray or in orange or red for the highest temperatures on this particular image – though color schemes can vary from website to website.

If we make the same assumption we did with traditional infrared imagery – that temperature decreases with increasing height in the troposphere, then we can make meaning out of these temperatures. Basically, a colder effective layer means the effective layer is higher in the troposphere, and if we know the height of the effective layer, we can infer the depth of the dry air above it. With water vapor imagery, we can’t assume anything about what lies below the effective layer because all of the emissions from below are being absorbed by the effective layer.

So, let’s start with one of the warmer effective layers on this map – over eastern Texas in the dark gray shading. Our color scale tells us that the temperature of the effective layer is approaching -20 degrees Celsius. Using another tool, I looked up the temperature profile in this region at the time, and this temperature corresponded to a height a little above 20,000 feet, which is in the middle part of the troposphere. So, we can infer that the upper troposphere was dry here because all the meaningful water vapor was roughly 20,000 feet and below.

Now let’s pick a point here in eastern Kansas, where there’s more of a grayish white shading, which corresponds to about -35 degrees Celsius. Again, looking up the temperature profile, this temperature corresponded to a height of almost 30,000 feet, which is the upper troposphere, so we can conclude that there was more water vapor in the upper troposphere over eastern Kansas than there was over east Texas.

This area near the Kansas / Nebraska border has some of the lowest temperatures on the map – a very cold effective layer of around -60 degrees Celsius. On this date, that temperature was up near 40,000 feet, at the very top of the troposphere. Such a cold, high effective layer can only be caused by high ice clouds typical of the tops of cumulonimbus clouds. I should point out that at such low temperatures very little water exists in the vapor phase. However, ice crystals also have a fairly strong emission signature between 6 and 7 microns, so if you see such cold effective layers (say less than about -45 degrees Celsius or so), you are most likely looking at ice clouds (like cirrus, cirrostratus, or cumulonimbus tops) rather than at just water vapor. And, in fact in this case, this was an area of budding thunderstorms.

Credit: Penn State

In the video, did you notice that the highest effective layer we observed was at the top of the troposphere, near 40,000 feet, and was most likely emissions from ice crystals (which also emit very effectively between 6 and 7 microns) in the tops of cumulonimbus clouds? And did you notice that the lowest effective layer we observed was near 20,000 feet? That's not uncommon. Because emissions from water vapor near the earth's surface are absorbed by water vapor higher up, it's often impossible to detect features at very low altitudes. In other words, low clouds (stratus, stratocumulus, nimbostratus, and fair-weather cumulus) are rarely observable on water vapor imagery.

To see what I mean, check out the pair of satellite images below (infrared on the left, water vapor on the right). The yellow dot represents Corpus Christi, Texas, which was shrouded in low clouds (gray shading on the infrared image -- check out the meteogram for Corpus Christi (opens in a new window)). Now examine the water vapor image. This image uses traditional grayscale, so the dark shading on the water vapor image indicates a warm effective layer located in the middle troposphere. However, we can't see even a hint of low clouds! In this case, the effective layer (located above the low clouds) absorbed all of the radiation emitted from below, rendering the low clouds undetectable on the water vapor image. For another example of low clouds not appearing on water vapor imagery, check out the Case Study section below.

A comparison of water vapor and IR images for a location along the Texas coast.

An infrared image (left) shows a blob of low clouds (in gray) over the western Gulf of Mexico and the Texas Seaboard. But there are seemingly no clouds evident in the water vapor image (right). The dark shading on the water vapor image indicates that the effective layer lies in the mid-troposphere (above the low clouds); therefore, radiation emitted by liquid water and water vapor in the tops of the low clouds was absorbed by water vapor higher up and never reached the satellite.
Credit: NOAA

How Low Can Water Vapor Imagery Go?

If you look back carefully at our familiar atmospheric absorption spectrum (opens in a new window), notice that absorption (and therefore emission) by water vapor isn't uniform in the range of wavelengths used for water vapor images (roughly 6 to 7 microns). Indeed, toward the higher end of the range, absorption is less than 100 percent, and using the different absorption and emission properties of water vapor near 7 microns allows satellites to "see" effective layers at different altitudes in the troposphere. Therefore, you'll sometimes find water vapor images labeled "upper-level," "mid-level," or "lower-level." While the altitude of the effective layer on any of these images varies based on the amount of water vapor in an air column (and how it's distributed), make sure that you're not fooled by these names. Even "lower-level" water vapor imagery typically detects effective layers between roughly 7,500 feet and 18,000 feet. In other words, most often, you're looking at emissions from effective layers of water vapor in the middle troposphere, even on so-called "lower-level" water vapor imagery.

Therefore, even "lower-level" water vapor imagery still can't often detect surface water vapor or the presence of low clouds. For example, check out this side-by-side comparison of a visible image and lower-level water vapor image (opens in a new window). On this water vapor image, shades of yellow and orange mark regions with a warmer effective layer. Note that the lower-level water vapor image provides no indication of the presence of low clouds whatsoever (especially notable over Illinois and Indiana), because their tops were located below the effective layer at this time (their emissions were absorbed by water vapor higher up). The bottom line is that even on "lower-level" water vapor images, you cannot see near-surface water vapor, fog, or low clouds, unless the atmospheric is extremely dry higher up (which is only possible in very cold, dry Arctic air).

Smoke streaming away from an extinguished candle.

Much like smoke from an extinguished candle, water vapor imagery helps forecasters trace mid- or upper-level winds.

Now that we've discussed how to interpret water vapor imagery, what might we use it for? Forecasters most often use water vapor imagery to visualize upper-level circulations in the absence of clouds. This is because water vapor is transported horizontally by high-altitude winds and thus can act like a tracer, much like smoke from an extinguished candle (as in the photo on the right). Consider this enhanced IR satellite loop (opens in a new window) and focus your attention on the Southwest. Since there are no clouds present, we can't really tell how the air is moving over this region. Now, check out the corresponding loop of water vapor images (opens in a new window) and focus your attention on the same area. What do you see? Do you notice the ever-so-slight counter-clockwise circulation of the air off the California coast? Such upper-level circulations are in fact important, as we will learn later in this course. The lesson learned here is that we were able to identify this circulation only with the aid of water vapor imagery.

Because water vapor acts as a tracer of air motions aloft, forecasters can visualize upper-level winds, and computers can use sequences of water vapor images to approximate the entire upper-level wind field. Here's an example of such "satellite-derived winds (opens in a new window)" in the middle and upper atmosphere at 12Z on September 28, 2022 (toward the left side of the image, you can see Hurricane Ian about to make landfall in Florida). Having such observations over the data-sparse oceans is extremely valuable to forecasters, and much of this information gets put into computer models so that they better simulate the initial state of the atmosphere, which leads to better forecasts than if we didn't have these observations.

This concludes our look at the three most common types of satellite imagery. Before moving on to radar imagery, take a moment to review the key points about water vapor imagery as well as the Case Study below.

Water Vapor satellite imagery...

  • uses infrared radiation; but unlike traditional infrared imagery, it uses wavelengths at which water vapor strongly emits and absorbs infrared radiation.
  • displays the temperature of the effective layer of water vapor. Warm effective layers mean that the upper troposphere and possibly parts of the middle troposphere are "dry" (they contain very little water vapor). By comparison, colder effective layers indicate a higher concentration of water vapor and/or ice clouds in the upper troposphere.
  • is not able to give any measure of the atmospheric water vapor content below the effective layer.
  • usually does not show the presence of low clouds or water vapor near the surface. These almost always lie below the effective layer.
  • is used to trace air motions in the middle and upper troposphere, even in areas with no clouds.

Note that you may find water vapor images that lack a color temperature scale, or that use a color scale with general references to moist and dry (opens in a new window). These references typically apply to the upper troposphere since the "dry" areas have a lower (warmer) effective layer that resides somewhere in the middle troposphere.

Case Study...

You saw some cases above showing that water vapor imagery typically does not show the presence of low clouds or water vapor near the surface. Check out the short video below (2:03) for another example -- this time in an extremely moist low-level environment.

PRESENTER: It’s important to remember that water vapor imagery very rarely gives us insights about surface or near-surface moisture. For example, check out this water vapor image of North America and the western Atlantic Ocean on the left, and focus in on the Caribbean Sea. Note the general dark shading in the region, indicating a relatively warm effective layer and a dry upper atmosphere. The zoomed-in version on the right, focusing on Puerto Rico, Hispaniola, and much of the Caribbean Sea, gives us a better look at exactly where the dark shading is located. It certainly includes Puerto Rico and Hispaniola.

But, don’t let the dark shading cause you to conclude that the entire air column is dry. Adding surface station models to the water vapor image shows surface dew points of 72 degrees at these stations in the Dominican Republic and Puerto Rico. So, concentrations of water vapor near the surface are quite high – the low-level air mass is moist, but you would never know it from the appearance of the water vapor image because radiation from the large amounts of water vapor near the surface is absorbed by water vapor higher up in the middle regions of the atmosphere.

Furthermore, the station models indicate varying degrees of partly cloudy skies. The clouds that were present were fair-weather cumulus clouds – shallow puffy clouds that often dot the tropical sky. They usually have tops that are only several thousand feet above the ground, and radiation from the tops of these clouds was being absorbed by water vapor above, which cloaks these low-topped clouds from the satellite radiometer’s view.

Rare exceptions do occur, when water vapor from the lower troposphere does appear on water vapor images. That can sometimes happen when columns of air are extremely dry, and there’s not enough water vapor in the middle or upper troposphere to absorb emissions from water vapor near the surface or from the tops of low clouds. Typically, though, indications of water vapor near the surface or of low-topped clouds do not appear on water vapor images.


Radar, Part 1: How Radar Works

Radar, Part 1: How Radar Works

Prioritize...

After reading this section, you should be able to describe how a radar works and what portion of the electromagnetic spectrum modern radars use. You should also be able to define the term "reflectivity" as well as its units. Furthermore, you should be able to explain how a radar locates a particular signal and describe concepts such as beam elevation and ground clutter. Finally, after completing the other sections detailing the various types of satellite imagery, you should be able to distinguish between radar imagery and satellite imagery (especially similarly-colored infrared images).

Read...

The ancestry of modern radar can be traced all the way back to the late 1800s and German physicist Heinrich Hertz's work on radio waves (radar is actually an acronym for RAdio Detection And Ranging). History buffs may be interested in this tracing of the family tree of radar (opens in a new window), but the use of radar to detect precipitation began early in World War II. The United States, in a joint effort with Great Britain, advanced the design of radar by using microwaves, which, as you may recall, have a shorter wavelength than radio waves.

This shift to shorter wavelengths provided more precision in detecting and locating objects relative to the microwave transmitter. Without anyone realizing it at the time, the shift from radio waves to microwaves paved the way for using radar to detect the presence and range of not only enemy aircraft, but squadrons of airborne raindrops, ice pellets, hailstones, or snowflakes as well. Like generations on a family tree, the patriarch World War II radars, which were used to detect precipitation as a wartime afterthought, were the forefathers of the WSR-57 radars utilized by the National Weather Service (WSR stands for "Weather Surveillance Radar" and the "57" refers to 1957, the first year they became operational). This image, taken from a WSR-57 radar (opens in a new window), looks rather crude by modern standards, but it shows the pattern of precipitation in Hurricane Carla near the Texas Coast on September 10, 1961. The yellow arrow in the northeast quadrant of the storm points to the location where a tornado occurred near Kaplan, Louisiana.

The next generation of radars, appropriately tagged with the acronym NEXRAD (for NEXt Generation RADars), became operational in 1988, and these radars are still in use today. Weather forecasters often refer to one of these radars as a WSR-88D. The "WSR" is short for "Weather Surveillance Radar," the "88" refers to the year this type of radar became operational, and the "D" stands for "Doppler," indicating the radar's capability of sensing horizontal wind speed and direction relative to the radar.

So, ultimately, how do radars work? Well, for starters, radar is an active remote sensor, unlike the satellite-based sensors we've just covered. While radiometers sit aboard satellites orbiting in space and passively accept the radiation that comes their way from Earth and the atmosphere, the antenna of a WSR-88D (opens in a new window), housed inside a dome, (opens in a new window) transmits pulses of microwaves at wavelengths near 10 centimeters. Once the radar transmits a pulse of microwaves, any airborne particle lying within the path of the transmitted microwaves (e.g. bugs, birds, raindrops, hailstones, snowflakes, ice pellets, etc.) scatters microwaves in all directions. Some of this microwave radiation is back-scattered or "reflected" back to the antenna, which "listens" for "echoes" of microwaves returning from airborne targets (see the animation below).

An animation of a radar transmitting pulses of microwaves that intercept airborne targets, which back-scatter some energy to the radar.

Pulses of microwave energy transmitted by a Doppler radar intercept airborne "targets" (precipitation particles, birds, bugs, etc.). Some of the energy back-scatters to the radar receiver, where the strength of the return signal and the time it took the transmitted signal to return are then processed and used to create images of radar reflectivity.
Credit: David Babb

The radar's routine of transmitting a pulse of microwaves, listening for an echo, and then transmitting the next pulse happens faster than a blink of an eye. Indeed, the radar transmits and listens at least 1,000 times each second. But, like a friend who's a good listener, the radar spends most of its time listening for echoes of returning microwave energy. In one hour, the radar transmits pulses of microwaves for a grand total of only about seven seconds. It spends the other 59 minutes and 53 seconds listening for echoes from targets.
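To put rough numbers on that duty cycle, here's a minimal sketch in Python. It assumes round numbers of about 1,000 pulses per second, each lasting roughly two microseconds -- illustrative assumptions, not official NEXRAD specifications -- and the arithmetic lands right at about seven seconds of transmitting per hour:

```python
# Back-of-the-envelope radar duty cycle. The pulse rate and pulse length
# below are round-number assumptions for illustration, not NEXRAD specs.
pulses_per_second = 1_000
pulse_length_s = 2e-6  # assumed duration of each microwave pulse

transmit_s_per_hour = pulses_per_second * pulse_length_s * 3600
print(f"Transmitting: ~{transmit_s_per_hour:.0f} seconds per hour")   # ~7 s
print(f"Listening:    ~{3600 - transmit_s_per_hour:.0f} seconds per hour")
```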

The radar's antenna has to have a really "good ear." Indeed, by the time a radar pulse scatters back to the radar antenna, it's only a relative whisper because the power typically drops to less than a few milliwatts (after being sent out with a peak power of 100-500 kilowatts). These units of power are a bit cumbersome to work with, so meteorologists convert the power of the returning radar signal (in milliwatts) to an alternative measure of echo intensity appropriately called reflectivity, expressed in units of dBZ ("decibels of Z"), a logarithmic scale (check out this Wikipedia article (opens in a new window) if you want to learn more about dBZ). Without getting into too much detail here, the bottom line is that the value of dBZ increases as the strength (power) of the signal returning to the radar increases.
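To make "logarithmic" concrete: the standard conversion (not spelled out above, but widely documented) is dBZ = 10 × log10(Z), where Z is the linear reflectivity factor in mm^6 per m^3. A quick sketch:

```python
import math

def z_to_dbz(z):
    """Convert a linear reflectivity factor Z (mm^6 per m^3) to dBZ.

    dBZ = 10 * log10(Z), so every 10-dBZ step represents a tenfold
    increase in the linear reflectivity factor.
    """
    return 10.0 * math.log10(z)

print(z_to_dbz(1))        #  0 dBZ
print(z_to_dbz(1_000))    # 30 dBZ
print(z_to_dbz(100_000))  # 50 dBZ -- values associated with very heavy rain
```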

To pinpoint the position of an echo relative to the radar site (within the circular range of the radar), the target's linear distance and compass bearing (opens in a new window) from the radar must be determined. First, realize that the transmitted and returning signals travel at the speed of light, so by measuring the time of the "round trip" of the radar signal (from the time of transmission to the time it returns), the distance that a given target lies from the radar can be determined. For example, it takes less than two milliseconds for microwaves to race out a distance of 230 kilometers (143 miles) and zip back to the radar antenna (143 miles represents the standard range of radars operated by the National Weather Service, although they can "see" farther than that with less detail).
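To make that timing concrete, here's a minimal sketch of the range calculation (with the speed of light rounded to 3 × 10^8 m/s):

```python
C_M_PER_S = 3.0e8  # speed of light, rounded

def target_range_km(round_trip_s):
    """Range of a target from the echo's round-trip travel time.

    The pulse goes out and comes back, so the one-way distance is
    half of (speed of light * elapsed time).
    """
    return C_M_PER_S * round_trip_s / 2.0 / 1000.0

# An echo arriving ~1.53 milliseconds after transmission traveled out to
# the standard 230-km maximum range and back.
print(target_range_km(1.53e-3))  # ~230 km
```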

A representative image of radar reflectivity.

A representative image of radar reflectivity indicates the standard range (230 kilometers) of each of the single-site weather radars operated by the National Weather Service. Imagine the purplish line sweeping around and completing a circle such that each single-site image of radar reflectivity displays a "circle of echoes." The data for this image came from the radar at Oklahoma City, Oklahoma at 0045Z on May 7, 2024.
Credit: NCAR

How does the radar know the direction or bearing of the target relative to the radar? In order to "see" in all directions, the radar antenna rotates a full 360 degrees at a speed usually varying from 10 degrees to as much as 70 degrees per second. A computer keeps track of the direction that the antenna is pointing at all times, so when a signal is received, the computer calculates the reflectivity, figures out the angle and distance from the radar site, and plots a data point at the proper location on the map. Believe it or not, all of this happens in just a fraction of a second!
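Putting the range and bearing together, plotting an echo on the map boils down to a polar-to-Cartesian conversion. Here's a minimal sketch (assuming a flat map for simplicity; operational software does more, such as accounting for Earth's curvature):

```python
import math

def echo_offset_km(range_km, bearing_deg):
    """East and north offsets (km) of an echo from the radar site.

    A compass bearing is measured clockwise from north, so
    east = r * sin(bearing) and north = r * cos(bearing).
    """
    b = math.radians(bearing_deg)
    return range_km * math.sin(b), range_km * math.cos(b)

# A target 100 km away at a bearing of 90 degrees lies due east of the radar:
print(echo_offset_km(100, 90))  # (100.0, ~0.0)
```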

To wrap up our discussion of how radar works, we need to talk about how high in the atmosphere radar signals come from. A common misconception is that all radar signals come from rain (and other targets) near the ground, but this is incorrect because the radar typically does not transmit its signal parallel to the ground. Indeed, the standard angle of elevation is just 0.5 degrees above a horizontal line through the radar's antenna (see the schematic below); however, some NEXRAD units can scan at even smaller angles of elevation if local terrain allows. Either way, the radar "beam" (signal) is initially not much higher above the ground than the radar itself, but with increasing distance from the radar, the beam gets progressively higher above the ground (and its width increases). Check out the diagram below. At a scanning angle of 0.5 degrees and a distance of 120 km, the radar beam is over 1 km above the surface (nearly 3,300 ft). Near the maximum range of 230 km, the radar beam is at twice that altitude.

Graphic to show the height and width of a radar and how they increase with increasing distance from the radar site. See text for more information.

The height and width of a radar "beam" increase with increasing distance from a given radar site (assuming the Earth is flat). For a NEXRAD base elevation scan of 0.5 degrees, a close approximation for the variation in the height of beam (above ground) is a rise of one kilometer for every 120 kilometers in horizontal distance from the radar site.
Credit: David Babb
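The caption's rule of thumb is easy to reproduce. Under the same flat-earth assumption, the beam-centerline height is simply range × tan(elevation angle); a minimal sketch:

```python
import math

def beam_height_km(range_km, elev_deg=0.5):
    """Flat-earth estimate of beam centerline height above the radar.

    This matches the approximation in the diagram above; accounting for
    Earth's curvature (next paragraph) raises these numbers further.
    """
    return range_km * math.tan(math.radians(elev_deg))

print(beam_height_km(120))  # ~1.05 km, a bit over 1 km as stated above
print(beam_height_km(230))  # ~2.0 km near the standard maximum range
```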

For simplicity, the calculations in the diagram above assume that the Earth is flat, and when accounting for the curvature of the Earth, the altitude of the radar beam at greater distances from the radar becomes even higher than the calculations above would suggest! What are the impacts of this increasing elevation with distance from the radar? First, you should realize that radar imagery often shows reflectivity from the precipitation targets within a cloud, and not necessarily what is falling out of the cloud. If you don't realize this fact, you can sometimes get confused when looking at radar imagery. For example, often when light precipitation falls into a layer of dry air below, it evaporates entirely before reaching the ground. Yet, it may look like it's precipitating on a radar image because the radar "sees" the precipitation at the level of the cloud.

Secondly, you should realize that radar signals are not typically obstructed by geography at distances of more than, say, 25 miles from the radar (the beam is more than 1,100 feet off the ground at that point). The main exception is in certain locations, particularly in the western United States, where the tall mountains of the Rockies can block portions of the radar beam. Check out this image showing the coverage of the NEXRAD radars (opens in a new window) for the U.S. Note how some of the "circles of echoes" in the west look like somebody took a bite out of them. The irregular radar coverage over the western U.S. is a direct result of the mountainous terrain blocking some of the radar "beams."

Within about 25 miles of most radar sites, however, a collection of stationary targets called "ground clutter" (buildings, hills, mountains, etc.) frequently intercepts and back-scatters microwaves to the radar. Computers routinely filter out this common ground clutter so that radar images don't lend the impression that precipitation is always occurring around the radar site. To do this, radar images from clear days are used to pinpoint surrounding buildings and hills, giving meteorologists a precipitation-free template for artificially filtering out regular ground clutter. Still, you'll sometimes find ground clutter on radar images. For example, note the stationary echoes on this radar loop (opens in a new window) from the NEXRAD near State College, PA. While areas of actual rain showers move during the loop, the stationary echoes come from a wind farm (opens in a new window) atop one of the ridges of Central Pennsylvania.

So, now that you know how radar works, what determines the strength of the returning radar signal? And, how do you interpret the rainbow of colors on radar images? We'll cover these questions in the next section. Before continuing, however, please review these key facts about radar imagery.

Radar imagery...

  • originates from ground-based sensors (not from satellites) that actively emit pulses of radiation.
  • uses the microwave part of the electromagnetic spectrum (not the infrared).
  • usually displays the variable "reflectivity" (units: dBZ), which is a measure of the amount of signal returned to the radar from the original transmitted pulse.
  • can help forecasters identify areas of precipitation.
  • cannot tell you anything about cloud top temperature, cloud height, or cloud thickness.

Explore Further...

There are many flavors of radar data available on the Internet (as well as on your mobile devices). Despite this variety, you should understand that the "raw" data all primarily comes from the same place -- the network of NEXRAD radars operated by the National Weather Service. Here are some websites to get you started...

NOAA/National Weather Service: National Radar Mosaic (opens in a new window)

NCAR Realtime Weather: Single-site, National Mosaic and 5-day archive (opens in a new window)

College of DuPage Radar: Includes both a national mosaic (opens in a new window), and single-site images. In the menu on the left, you can switch from the national mosaic to single-site radars via "Dual Pol NEXRAD". The single-site interface allows you to choose your location and product, even including scans from other elevation angles. Many of the products are beyond the scope of this course, but you're welcome to explore.

NEXRAD Data Inventory Search: If you're a real "data-hound" and want access to the full suite of archived radar data (opens in a new window), this site is for you! This site is not for the technical faint of heart, but you can retrieve all of the Level-2 and Level-3 data produced by the NEXRAD system. Needless to say, much of the data is beyond the scope of this course, but you're welcome to play with it. Note that you will also need to download/install NOAA's Weather and Climate Toolkit (opens in a new window) to view the files.


Radar, Part 2: Interpreting Radar Images

Prioritize...

At the completion of this section, you should be able to list and describe the three precipitation factors that affect radar reflectivity, and use them to interpret radar images. You should be able to explain why hail causes very large reflectivity values while snow tends to be underestimated. You should also be able to explain the difference between "base reflectivity" and "composite reflectivity."

Read...

Now that you know how a radar works, we need to discuss how to properly interpret the returned radar signal. As with any remote sensing tool, we have to understand what factors influence the amount of radiation that is received by the instrument. As you recall, radar works via transmitted and returned microwave energy. The radar transmits a burst of microwaves and when this energy strikes an object, the energy is scattered in all directions. Some of that scattered energy returns to the radar and this returned energy is then converted to reflectivity (in dBZ). Ultimately, the intensity of the return echo (and therefore, reflectivity) depends on three main factors inside a volume of air probed by the radar "beam":

  • the size of the targets
  • the number of targets
  • the composition of the targets (raindrops, snowflakes, ice pellets, etc.)

Allow me to elaborate a bit on each of these factors impacting radar reflectivity. For starters, the size of the precipitation targets always matters. The larger the targets (raindrops, snowflakes, etc.), the higher the reflectivity. By way of example, consider that raindrops, by virtue of their larger size, have a much higher radar reflectivity than drizzle drops (the tiny drops of water that appear to be more of a mist than rain). Secondly, the power returning from a sample volume of air with a large number of raindrops is greater than the power returning from an equal sample volume containing fewer raindrops (assuming, of course, that both sample volumes have the same sized drops). The saying that "there's power in numbers" certainly applies to radar imagery!
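Both factors fall out of the standard definition of the reflectivity factor used in radar meteorology (under the Rayleigh approximation, a formula not given in this lesson but widely documented): each target contributes the sixth power of its diameter per unit volume. That sixth power is why a few big drops can outweigh a crowd of small ones, as this sketch shows:

```python
import math

def reflectivity_dbz(diameters_mm, volume_m3=1.0):
    """Reflectivity factor (dBZ) from drop diameters, Rayleigh approximation.

    Z = sum of D^6 over all drops per unit volume (mm^6/m^3); the sixth
    power is why target size dominates over sheer numbers.
    """
    z = sum(d**6 for d in diameters_mm) / volume_m3
    return 10.0 * math.log10(z)

# One thousand 0.2-mm drizzle drops vs. a single 2-mm raindrop:
print(reflectivity_dbz([0.2] * 1000))  # ~ -12 dBZ
print(reflectivity_dbz([2.0]))         # ~ +18 dBZ -- the lone big drop wins
```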

To see how the size and number of targets impact reflectivity, consider this example. Thunderstorms often show high reflectivity on radar images, with vivid colors like deep reds marking areas within the storm that contain a large number of sizable raindrops. A large number of sizable raindrops falling from a cumulonimbus cloud also typically leads to high rainfall rates at the ground. Thus, high radar reflectivities are usually associated with heavy rain.

Radar reflectivity image from 1351Z on June 1, 2012.

The line of high reflectivity values approaching State College, PA denotes large numbers of large rain drops (often characteristic of thunderstorms).
Credit: NOAA

The radar image above shows a line of strong thunderstorms (called a "squall line") approaching State College, Pennsylvania from the northwest, with radar reflectivity exceeding 55 dBZ in some areas. Such high reflectivities are typically associated with very heavy rainfall, but inferring specific rainfall rates from radar images can be tricky business. A given reflectivity can translate to different rainfall rates, depending on, for example, whether there are a lot of small drops versus fewer large drops.

The presence of large hail (opens in a new window) in thunderstorms complicates the issue of inferring rainfall rates from radar reflectivity even more. Typically, radar reflectivity from a thunderstorm is greatest in the middle levels of the storm because large hailstones have started to melt as they fall earthward into air with temperatures greater than 0 degrees Celsius (the melting point of ice). Covered with a film of melt-water, these large hailstones look like giant raindrops to the radar and can have reflectivity values higher than 70 dBZ. The bottom line is that higher reflectivity usually corresponds to higher rainfall rates, but the connection is not always neat and tidy.

Okay, let's move on to the final controller of radar reflectivity -- composition. The intensity of the return signal from raindrops is approximately five times greater than the return from snowflakes of comparable size. Snowflakes have inherently low reflectivity compared to raindrops, so it's easy to underestimate the areal coverage and intensity of snowstorms if you're unaware of this fact. It might be snowing quite heavily, yet radar reflectivity from the heavy snow might be less than from a nearby area of rain (even if the rainfall isn't as heavy) because the return signal from raindrops is more intense.
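For the record, that factor of roughly five traces back, in standard radar theory, to the dielectric factor |K|^2, which measures how efficiently a substance scatters microwaves. The values below are commonly cited textbook approximations (an assumption on my part, not something given in this lesson):

```python
# Commonly cited dielectric factors |K|^2 (textbook approximations):
K2_WATER = 0.93   # liquid water
K2_ICE = 0.197    # ice

# For same-sized targets, returned power scales with |K|^2, so...
print(f"water-to-ice power ratio: ~{K2_WATER / K2_ICE:.1f}x")
# -> ~4.7x, consistent with the "approximately five times" figure above
```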

There's another way that moderate to heavy snow falling within the range of the radar can be camouflaged. Indeed, precipitating stratiform clouds are often shallow (not very tall), which means that the radar beam will sometimes overshoot snow-bearing clouds (opens in a new window) located relatively far away from the radar site. To see what I mean, check out the short video (1:40) below.

PRESENTER: Let’s look at an example of how radar imagery can sometimes be misleading when snow is falling. This is a reflectivity image from the radar located in Cleveland, Ohio, and from this image, it might be tempting to think that heavy snow is limited to the area east of Cleveland, where reflectivities are around 35 dBZ. At greater distances from the radar, reflectivity decreases to less than 10 dBZ at places like Toledo and Findlay.

But, because the precipitating stratiform clouds that produce snow are often shallow, the radar beam, which increases in elevation as it gets farther from the radar site, can sometimes overshoot snow-bearing clouds partially or entirely when they are located relatively far from the radar site. In other words, the radar scans the very tops of snow-bearing clouds, where there are relatively few precipitation targets, or it misses them entirely, leading to either low reflectivity or no reflectivity at all.

Our radar image was from 12Z, and the meteogram from Findlay, Ohio showed that 12Z fell during a period of heavy snow in Findlay. So, it was snowing heavily at the time of our radar image.

Yet, our radar image showed reflectivity of less than 10 dBZ at Findlay. Findlay is located about 100 miles from the radar’s location in Cleveland, so it was far enough away that the radar beam was mostly overshooting the snow-bearing clouds, leading to conditions at the ground – heavy snow – that didn’t match our expectations from radar reflectivity.

Credit: Penn State

The fact that radar sometimes overshoots snow-bearing clouds can really challenge forecasters (sometimes with deadly consequences), as this short segment from Penn State's Weather World program (opens in a new window) illustrates (check it out if you're interested). To further complicate interpreting radar images, partially melted snowflakes present a completely different problem to weather forecasters during winter. When snowflakes melt, they melt at their edges first. With water distributed along the edges of the "arms" of melting flakes, partially melted snowflakes appear like large raindrops to the radar. Thus, partially melted snowflakes have unexpectedly high reflectivity. For much the same reason, wet or melting ice pellets (sleet) also have a relatively high reflectivity.

Therefore, during winter, radar images sometimes show a blob of high reflectivity embedded in an area of generally lower reflectivity. Often, this renegade echo of high reflectivity is partially melted snow or sleet, and it's a good idea to check surface observations to see whether the relatively intense echo is indeed partially melted snow or sleet, or an area of moderate to heavy rain. For example, check out this band of high reflectivity just south of St. Louis, Missouri (opens in a new window). Nearby Scott Air Force Base in Belleville, Illinois ("BLV" on the map) was in the midst of this band, and at the time of the radar image the Belleville meteogram (opens in a new window) showed a rather unusual current weather symbol, representing "snow pellets" (partially melted snowflakes that have refrozen). The bottom line is that forecasters must be careful interpreting radar images when snow might be falling.

Base Versus Composite Reflectivity

For a powerful thunderstorm that erupts fairly close to the radar, a scan at 0.5 degrees would likely intercept the storm below the level where the most intense reflectivity occurs. Such a single, shallow scan falls way short of painting a proper picture of the storm's potential. As a routine counter-measure, the radar tilts upward at increasingly large angles of elevation, scanning the entire thunderstorm like a diagnostic, full-body MRI.

The radar can tilt upward to angles of elevation as large as 19.5 degrees, as indicated in the figure below, which shows the elevation scans in a common "general surveillance" radar mode. But, the series of elevation scans shown below isn't the only option that National Weather Service NEXRAD units have; they are programmed with multiple scanning strategies to give forecasters the most useful data depending on the weather situation. A complete scan like the one shown below takes about 6 minutes, which means that under normal circumstances, forecasters must wait about 6 minutes to get a look at the newest radar scan at each elevation. But, during severe weather, forecasters desire more frequent low-elevation scans to better see what's happening in the lower parts of thunderstorms. So, the radar can be switched into "SAILS" mode (opens in a new window), which causes the radar to interrupt its scanning progression to give more low-level scans, providing forecasters with more frequent updates on the lowest elevation scan.

The elevation scans of the WSR-88D (NEXRAD) radar. More explanation in text.

The elevation scans of the WSR-88D (NEXRAD) in general surveillance mode.
Credit: NOAA's Radar Operations Center

In the image above showing how the radar can tilt upward at increasingly large angles, the numbers at the top represent the standard angles included as part of the general surveillance scan. Also note the colorful "beams," which represent the approximate width and length of the radar scan as a function of distance from the radar site. Again, note how wide the "beam" becomes at great distances from the radar.

Meteorologists describe the radar reflectivity derived from a single scan as base reflectivity, and the most common base reflectivity corresponds to the scanning angle of 0.5 degrees. The National Weather Service also provides images of composite reflectivity, which represents the highest reflectivity gleaned from all of the individual scan angles.
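In other words, composite reflectivity is a maximum taken vertically across all of the elevation scans. Here's a minimal sketch of the idea, using hypothetical 2x2 grids of dBZ values (not real data) standing in for three scans mapped to a common grid:

```python
import numpy as np

# Hypothetical base-reflectivity grids (dBZ) from three elevation scans:
scan_0p5_deg = np.array([[30, 35], [40, 20]])
scan_1p5_deg = np.array([[45, 25], [38, 50]])
scan_2p4_deg = np.array([[20, 10], [15, 42]])

# Composite reflectivity keeps the highest value found above each point:
composite = np.maximum.reduce([scan_0p5_deg, scan_1p5_deg, scan_2p4_deg])
print(composite)  # [[45 35] [40 50]] -- never less than any single scan
```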

To see how one scan angle can have a higher reflectivity than another, consider the case of a severe thunderstorm. The storm's updraft, which is a fast, rising current of moist air that sustains the thunderstorm, is usually strong enough (25 meters per second or faster) to suspend a large amount of rain (and hail) aloft (opens in a new window). Meteorologists call the suspension of precipitation high in a thunderstorm precipitation loading. At this stage of the storm, the reflectivity high in the cumulonimbus cloud is much greater than the reflectivity lower in the cloud. So, a radar image created from composite reflectivity will likely display higher dBZ values (more intense colors) than a radar image of base reflectivity. Eventually, of course, the rain intensity at lower altitudes (and the surface) will increase as rain and hail fall from the cloud (this will occur once the updraft can no longer support the weight of suspended water and ice).

For example, check out the image below. This graphic shows radar reflectivity plots of a garden-variety thunderstorm at four different scan angles. First, note that the core radar reflectivity on the upper-right panel (scan angle of 1.5 degrees) was higher than the core base reflectivity at 0.5 degrees (upper-left panel). Comparing the two images, we conclude that the heaviest precipitation was higher up in the thunderstorm at this time.

The radar reflectivity of a garden-variety thunderstorm at four different scan angles. More explanation in text.

The radar reflectivity of a garden-variety thunderstorm at four different scan angles. The upper-left panel shows the radar reflectivity at a scan angle of 0.5 degrees, the upper-right displays the radar reflectivity at a scan angle of 1.5 degrees, while the lower-left and lower-right panels correspond to scan angles of 2.4 degrees and 3.4 degrees respectively.
Credit: Used by permission, Gibson Ridge Software / National Weather Service

Note that the radar reflectivity markedly decreased at a scan angle of 2.4 degrees (lower-left panel). When the scan angle was set to 3.4 degrees (lower-right panel), the reflectivity all but vanished, indicating that there weren't many precipitation particles near the top of the storm.

Here's one last example of how composite reflectivity can be higher than base reflectivity. On August 30, 2023, Hurricane Idalia (opens in a new window) made landfall in northern Florida. The 1206Z composite reflectivity (on the left below) generally shows larger areas of 35 dBZ or more (yellows, oranges, and reds) compared to the corresponding base reflectivity image on the right. Furthermore, the base reflectivity image showed an area with no reflectivity within the storm's circulation southeast of the radar site, while composite reflectivity was as high as 20 to 30 dBZ in the same area (which the radar had detected during a higher elevation scan).

The composite and base reflectivities of Hurricane Idalia just after landfall in 2023.

(Left) The composite reflectivity of Hurricane Idalia just after landfall at 1206Z on August 30, 2023 from the radar in Tallahassee, Florida. (Right) The base reflectivity at the same time. Note the much higher composite reflectivity in the area of the arrowhead.
Credit: National Weather Service

Composite reflectivity may not be representative of current precipitation rates at the ground, but it can show the potential if the precipitation causing the highest reflectivity (often well up into the cloud) manages to fall to the surface. You might think that this discussion is too much "inside baseball," but composite reflectivity is the mode of choice on regional or national mosaics (opens in a new window) that you frequently see on the Web, in mobile apps, and on television. So, the bottom line is to make sure that you know which type of radar product you are looking at before performing any kind of analysis.

Now you know the basics of interpreting radar imagery, and we're just about ready to wrap up our lesson. Before you finish, however, test your knowledge of basic concepts from this section in the Quiz Yourself section below. You may also be interested in the Explore Further section below, where you can find out more about some common radar products (precipitation-type images and satellite-radar composites) that you'll commonly encounter on television and online.

Quiz Yourself...

Feeling confident in your basic knowledge of radar interpretation? Take this quiz to see how you do. You'll need to apply these concepts on various assignments.

Explore Further...

Commonly, regional or national radar mosaics visually distinguish areas of rain from snow and mixed precipitation (any combination of snow, sleet, freezing rain, and/or rain) using different color keys. Note that rain, mixed precipitation, and snow each has its own color key in the regional radar mosaic below. While the exact methods for creating such images vary, they all start with radar reflectivity and often incorporate other radar products (opens in a new window) along with surface temperature and other lower tropospheric observations to give a "best guess" of precipitation type.

A radar image showing color-coded precipitation type. Georgetown, DE is highlighted on the map, located in the pink area of mixed precipitation.

A regional radar mosaic with color-coded precipitation type. Georgetown, DE is located within the pink stripe marking mixed precipitation.
Credit: WSI Corporation

The methods used to formulate this "best guess" for precipitation type aren't perfect, and not surprisingly sometimes the actual observed weather doesn't match the precipitation type shown on the radar image. For example, I've marked Georgetown, Delaware on the map, located within the pink stripe on the radar image, indicating that mixed precipitation was falling. But, the surface observations tell a different story. The Georgetown meteogram (opens in a new window) shows that light rain was falling at 15Z (the time of the radar image above).

Another common product is a satellite-radar composite, or "sat-rad image" (see image below). For the record, sat-rad images are superimpositions of radar imagery onto satellite images. Before using or interpreting this type of image, make sure that you're aware of a few key things. First, the satellite and radar data come from two completely different sources, even though that might not be obvious from the "look" of the image. As you know, WSR-88D radars are located on the ground (not aboard geostationary satellites), which has some major implications for data coverage.

An example of a sat-rad image, which is radar imagery superimposed onto an infrared satellite image. The US is shown here.

The 1651Z infrared satellite image and the 1645Z radar mosaic on May 8, 2024.
Credit: Plymouth State Weather Center

Recall the range of the national array (opens in a new window) of WSR-88D radars? It does not extend very far out into the oceans, nor very far north into Canada, nor very far south into Mexico. Thus, to a novice user, sat-rad images can give the impression that some clouds are not producing precipitation when they really are. For example, note the area of clouds and precipitation over New England. According to this radar mosaic, the radar echoes ended very close to the New England Coast. Was it raining farther offshore? We can't tell from this image alone because anything farther away was beyond the range of U.S. radars, but this close-up sat-rad loop of the Northeast (opens in a new window) shows radar echoes suddenly disappearing offshore at seemingly circular boundaries in some cases -- a clear sign that the radar echoes weren't telling the full story because of the limited range of land-based radar.

These images can really be misleading to someone who's not fully aware of what they show, so make sure to use them with care!
