METEO 101: Sample Content

Looking for the lesson content?

This course material requires a user account. Registered METEO 101 students and alumni can log in by clicking on the "login" link above (Penn State userID and password required). If you are not yet enrolled, please see the information presented below to learn more about this course.

Quick Facts about METEO 101

METEO 101 is the first in a series of four online courses that comprise the Certificate of Achievement in Weather Forecasting (opens in a new window) program. This course also serves as an entry point for prospective Meteo majors, and as a General Education Science/Lab course. It is offered every Fall (August - December), Spring (January - May), and Summer (May - August) semester.

Course Prerequisite(s): none

Why learn about weather forecasting?

A 24-hour computer forecast, often referred to as a "prog."
Credit: Penn State e-Wall

Imagine going online and accessing a forecast map generated by a computer (like the one on the right). Now imagine creating your very own weather forecast based on this tool. Sound far-fetched? Not at all! By successfully completing this course, you will be able to competently interpret and effectively use computer "progs" and other tools that professional weather forecasters look at every day.

Weather affects nearly every aspect of our daily lives. Indeed, sometimes the weather can even threaten our very lives themselves. One of the goals of this course is to teach you to be an informed weather consumer. These days, all sorts of weather data are available, on practically every electronic device we own. How do you make sense of it all? This course will teach you about the many different types of weather data, as well as some of the key processes behind the data. Our goal is to demystify the weather in a way so that you can apply the knowledge to your daily life. You may even find that you can tailor the weather forecast to suit your own personal situation, giving you an advantage at work or keeping you safe at play.

What will you learn in this course?

METEO 101 seeks to give you a better understanding of atmospheric structure and processes, so you can better apply the weather information you encounter. With this knowledge of how the atmosphere works, you'll be able to understand what controls the evolution of storms and appreciate why weather forecasts are sometimes highly uncertain. You will also learn to "read" the sky so you can make your own short-term forecasts and adjust your behavior accordingly. Here is a breakdown of what you will learn.

Lesson 1: An Introduction to Atmospheric Variables (observation times and procedures, time conversion and UTC, station models, temperature, dew point temperature, visibility and current weather, cloud cover, pressure, wind, and METARs)

Lesson 2: Data, Data Everywhere (reading and drawing contour maps, gradients, buoy/ship data, map projections, and meteograms)

Lesson 3: Remote Sensing of the Atmosphere (remote sensing versus in-situ measurements, electromagnetic spectrum, Stefan-Boltzmann Law and Wien's Law, radiation processes, albedo, polar orbiting versus geostationary satellites, visible imagery, IR imagery, water vapor imagery, and radar data)

Lesson 4: Controllers of Air Temperature (seasonal changes, local climatic controllers, surface energy budget, conduction and convection, advection, nocturnal inversion, and latent heat)

Lesson 5: Controllers of the Wind (displaying wind data, PGF and Coriolis forces, geostrophy, friction, wind direction from pressure maps, centers of high and low pressure, surface troughs and ridges, convergence and divergence)

Lesson 6: Vertical Variations in Temperature (skew-T diagrams, profiles of temperature and dew point, mixing ratio and relative humidity, clouds and precipitation on a skew-T, dry versus moist ascent, stability, identifying the boundary layer and tropopause, orographic lift, stratiform versus convective clouds)

Lesson 7: Patterns of Pressure and Wind Aloft (constant pressure surface, decrease in pressure with height versus temperature, upper-level troughs and ridges, upper-level winds, jet stream and jet streaks, and clear air turbulence)

Lesson 8: Upper-level Winds and Their Roles in Surface Highs and Lows (vorticity (relative, earth, and absolute), short waves, vorticity extremes, conservation of angular momentum for a parcel, convergence/divergence effect on surface pressure)

Lesson 9: The Cyclone Model (air masses and fronts, mid-latitude cyclones (initiation, self-development, occlusion, decay), conveyor belts, and propagation)

Lesson 10: Numerical Weather Prediction (computer simulations, model errors, types of weather models, interpreting common model progs, forecasting strategies, medium-range and ensemble forecasting)

Lesson 11: Forecasting High and Low Temperatures (Model Output Statistics (MOS), reading MOS output, MOS biases, the 850-mb method, climatology, and persistence forecasting)

Lesson 12: Forecasting Precipitation (PoP, model generated QPF, forecasting snow amount, forecasting sleet and freezing rain, and precipitation forecast data sources)

Course Objectives

In short, when you successfully complete this course, you will be prepared to:

  • Analyze and interpret conventional maps of surface and upper-air data, meteorological images, and soundings on a thermodynamic diagram. (1)
  • Demonstrate a fundamental knowledge of the basics by which atmospheric observations are taken, both in situ and remotely. (2)
  • Describe the processes by which synoptic-scale weather systems form, grow, and dissipate. (3)
  • Explain the fundamental forces that drive atmospheric motions, both in the horizontal and vertical. (4)
  • Apply the basics underlying weather forecasting and numerical weather prediction to create simple, point-forecasts for basic weather variables. (5)

How does this course work?

All course materials are presented online. The course lessons include many animations and interactive tools to provide a tactile, visual component to your learning. Your instructor will assess your progress through online quizzes, lab exercises, and projects, all of which focus on your ability to analyze key observational and forecast information regarding current or past weather events. Most deadlines in this course occur every week on Friday night. You should expect to spend 8 to 10 hours per week studying the lesson material and completing assignments to stay on pace.

Lesson 1. An Introduction to Atmospheric Variables

Motivate...

A cooperative weather station at Granger, Utah (circa 1930).
Credit: NOAA Photo Library

As we begin our journey of learning about weather forecasting, we'll start with weather observations. In short, we wouldn't be able to make reasonably accurate weather forecasts without them! Like many scientists, meteorologists rely on observations, and our "science lab" is the atmosphere! In the United States, meteorologists have armies of technical assistants that regularly collect observations, including thousands of "cooperative observers (opens in a new window)" who volunteer to observe daily precipitation and maximum and minimum temperatures in their hometowns. Thousands more collect daily precipitation data as part of the Community Collaborative Rain, Hail, and Snow (CoCoRaHS) (opens in a new window) network, for example.

These ordinary citizens (many of whom are weather enthusiasts) provide crucial data that supplement the National Weather Service's primary network of observations (taken at approximately 1,500 airports across the nation). At these "primary" airports, however, trained government observers or automated weather instruments (opens in a new window) are responsible for collecting routine weather observations. The set of routinely collected measurements includes temperature, moisture, air pressure, wind direction, wind speed, cloud cover, visibility, precipitation and several other atmospheric variables.

These observations form our understanding of how the atmosphere is "behaving" at any given moment and form the basis of weather forecasts. In this lesson, you will learn about some key weather variables and why forecasters are interested in them, as well as learn about how all of these observations can be easily displayed on weather maps. By gaining insight about the atmosphere's present state, you will take the first step toward fashioning your own weather forecasts, or at the very least, having more context for the weather forecasts you may see on television, on the web, or via your favorite mobile weather app.

Lesson Objectives

After completing this lesson, you should be able to:

  • Explain when the standard hourly observations are collected and for what hour a particular observation qualifies based on its time stamp. (2)
  • Convert times displayed on weather maps in GMT/UTC to a station's local time (and conversely, be able to convert a station's local time to GMT/UTC using appropriate nomenclature).(1)
  • Identify the temperature variable (with proper units) from a station model and convert the observation to other units.(1)
  • Identify, decode, and interpret the visibility observation on a station model (if displayed).(1)
  • Explain when an "obstruction to visibility" symbol (that is, present weather) must be listed on a station model, and identify and decode the "present weather" symbol (if shown).(1)
  • Identify and explain the dew point temperature variable (with proper units) from a station model.(1)
  • Interpret a station model's sky coverage symbol, giving the official cloud coverage classification and fractional equivalent.(1)
  • Identify and decode the sea-level pressure variable from a station model.(1)
  • Express the wind direction and speed (including the units) for a given station model "flag."(1)

(Numbers denote mapping to course objectives)

Making Observations of the Atmosphere

Prioritize...

By the time you are finished reading this page, make sure that you understand when standard hourly observations are collected and for what hour a particular observation qualifies based on its time stamp.

Read...

Forecasters worth their salt routinely use current weather and recent history as the basis for predicting the future. That's because current and past weather can, and often does, offer clues about how the atmosphere will evolve. During winter and early spring, for example, powerful Pacific storm systems that make news on the West Coast by spawning heavy coastal rains and mountain snows often make news a few days later when they arrive over the Middle West, generating fierce thunderstorms that can spawn tornadoes (opens in a new window).

However, even in more benign weather patterns, conscientious forecasters routinely study weather conditions "upstream" of their location (by upstream, I mean where weather systems are coming from), hoping to extrapolate these conditions into the future to get a more accurate bead on the local weather forecast. There's a big payoff to forecasters who are sticklers for such details. Indeed, the wealth of surface observations taken hourly across the nation often tips the atmosphere's hand and gives meteorologists a leg up on important clues to the weather forecast.

At all U.S. airports, standard hourly weather observations are taken once each hour, typically several minutes before the top of the hour. So, for example, the standard 3:00 PM observation might have a time stamp such as 2:53 PM. More formally, standard hourly weather observations are issued between 50 minutes past the hour and the top of the next hour, so a standard 3:00 PM observation could be time stamped between 2:50 PM and 3:00 PM. When weather conditions rapidly change, however, you'll often see special observations, known as SPECI reports, at other times. A "special ob" taken at 3:15 PM, for example, falls under the umbrella of the 3:00 PM observation, even though the standard observation was taken a little before 3:00 PM. In general, all observations time-stamped between (hh-1):50 to hh:49 are part of the hh observation cycle (hh represents any given hour). So, continuing with our example, any observation time-stamped between 2:50 PM and 3:49 PM belongs to the 3 PM observation cycle. The 4:00 PM observation cycle begins at 3:50 PM, and so on.
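
If it helps to see the rule written out algorithmically, here's a minimal Python sketch of the observation-cycle convention described above (the function name `observation_cycle` is mine, purely for illustration):

```python
from datetime import datetime, timedelta

def observation_cycle(ts: datetime) -> datetime:
    """Return the hour (as a datetime) that an observation's
    time stamp belongs to, per the (hh-1):50 to hh:49 rule."""
    # Shifting forward 10 minutes rolls :50-:59 stamps into the
    # next hour; truncating then gives the cycle's top-of-hour.
    shifted = ts + timedelta(minutes=10)
    return shifted.replace(minute=0, second=0, microsecond=0)

# A standard ob stamped 2:53 PM belongs to the 3:00 PM cycle:
print(observation_cycle(datetime(2024, 1, 5, 14, 53)))  # 2024-01-05 15:00:00
# A "special ob" at 3:15 PM also falls under the 3:00 PM cycle:
print(observation_cycle(datetime(2024, 1, 5, 15, 15)))  # 2024-01-05 15:00:00
```

Adding ten minutes before truncating is just a compact way of expressing "everything from 50 minutes before the hour through 49 minutes after it."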

An automated observing system at the airport in Elko, Nevada. Many airports in the United States use the Automated Surface Observing System (ASOS). Read more about ASOS (opens in a new window).

As you might expect, there's an avalanche of surface weather observations each hour from all the airports across the country (and across the world, for that matter). In order to simplify life and create easy-to-read weather maps, the National Weather Service organizes hourly observations onto templates called station models. In the remainder of this lesson, you'll learn how to decode surface station models (and thus determine local weather conditions). However, before we tackle the rules and conventions for decoding station models, you'll need to know how weather observers all over the world synchronize their watches in order to standardize the times that weather observations are taken.

Quiz Yourself...

Try your hand at the questions below to make sure you have a handle on observation times.

Explore Further...

If you want to look ahead, here's the most recent surface map (opens in a new window) of station models for the contiguous states. Please note that the map was purposely designed to include a limited number of station models (a map with all the station models would be very cluttered). We'll work on decoding station models later in the lesson, but if you want to skip ahead and try decoding a few on your own, check out this explanation on decoding station models (opens in a new window) from the Weather Prediction Center.

Does Anybody Really Know What Time It Is?

Prioritize...

It's critical that you understand universal time conventions and be able to convert between universal time (aka UTC, GMT, or Z-time) and local time zones and vice versa. You will use this skill throughout the course, so make sure you are comfortable making such conversions before moving on.

Read...

"Does anybody really know what time it is? Does anybody really care...?"

Those words come from this section's theme song, a classic from the musical vault—"Does Anybody Really Know What Time It Is (opens in a new window)" by Chicago. Well, I can tell you that meteorologists must know what time it is, and they definitely care about time. Weather is a global phenomenon, and since our world is sliced into individual time zones, meteorologists need a universal standard to keep it all straight.

That standard is Greenwich Mean Time (GMT). "Greenwich" refers to the English village of Greenwich, a borough of London, through which the Prime Meridian (opens in a new window) (zero degrees longitude) passes. The advantage of adhering to one time standard is that observers all over the world can record weather conditions in Greenwich time. Such a universal time system is indispensable for synchronizing when weather observations are collected. If observers worldwide were to record observations in local time, then interpretation would become much more complicated and confusing. Ultimately, it's important to remember that GMT is a time zone, just like any other. It just happens to be the time zone at Greenwich, England, along the Prime Meridian.

GMT goes by a couple of other aliases: "Zulu time" (often shortened to Z-time) and UTC (Coordinated Universal Time). "Zulu" is a funny sounding name, but it's the U.S. Navy's and our civil aviation's version of GMT. The bottom line is that if you see time expressed as GMT, Z-time, or UTC, they're all referring to the same thing: the time in Greenwich, England. Most often, we'll use UTC or Z-time in this course. Meteorologists universally use this time to synchronize the times of weather observations and forecasts, so it's important for us to be able to convert from UTC to other local time zones, as well as from other local time zones to UTC.

You can convert to Local Time at any location by referring to a map of world time zones (opens in a new window) (zones are labeled along the bottom of the map). That's a pretty "busy" map, so let's streamline our discussion a bit. Focus your attention on the map of standard time zones for a large portion of the Western Hemisphere (shown below). Further note that each time zone is labeled with its corresponding time difference from Greenwich, England (expressed in hours UTC). How does this map work?

The standard time zones of a large portion of the Western Hemisphere and their corresponding time differences from Greenwich, England (here expressed in hours UTC). Assuming that it’s 1500 hours local time in Greenwich (alternatively, 15 UTC or 15Z), on a 12-hour clock, it would be 3 P.M. local time in Greenwich. Across the top of the map are the corresponding local times at 15Z for each of the represented time zones. For example, at 15Z (1500 hours in Greenwich), it’s 1000 hours or 10 A.M. local time in the eastern United States (Eastern Standard Time), and 0600 hours or 6 A.M. local time in Alaska (Alaska Standard Time). Larger image of time zone map. (opens in a new window)
Credit: David Babb

First, we're using the military's 24-hour clock system (opens in a new window). For this system, 0000 hours ("zero hundred hours") corresponds to local midnight, and 1200 hours ("12 hundred hours") represents local noon. Okay, let’s assume that it’s 1500 hours in Greenwich (alternatively, 15 UTC, 15Z or 15 GMT...take your pick!). On a 12-hour clock, the local time in Greenwich would be 3 P.M. At any rate, you can see, across the top of the colorful map above, the corresponding local times at 15Z for each of the represented time zones. For example, at 15Z (1500 hours in Greenwich), it’s 1000 hours (10 A.M.) local time in the eastern United States (Eastern Standard Time is UTC - 5 hours), and 0600 hours (6 A.M.) local time in Alaska (Alaska Standard Time is UTC - 9 hours).

On the flip side, if you lived in Chicago, Illinois and it was 9 A.M. local time (0900 hours), and you wanted to convert to UTC, you would simply add 6 hours because Central Standard Time (where Chicago is located) is 6 hours behind UTC. So, 0900 hours + 6 hours = 1500 hours, or 15 GMT (or 15 UTC or 15Z).

Ultimately, converting from UTC to local time (or the other way) is really no different than figuring out what time it is in California if you live in, say, New York. If it's 5 P.M. local time in New York, we have to subtract 3 hours to get the local time on the West Coast in California, so we know it's 2 P.M. local time in California. Converting to or from UTC is no different: It's just addition or subtraction. You have to figure out how many hours difference there is between whatever location you're interested in and UTC.

Many of the time-zone boundaries are parallel to longitude lines, although, for convenience, there are several exceptions (Alaska, for example). Each time zone spans approximately 15 degrees of longitude, which is the longitudinal distance that the Earth rotates in one hour. Of course, you must adjust for Daylight Saving Time (opens in a new window) during the warmer months (from the second Sunday in March to the first Sunday in November in the United States). While 15 UTC corresponds to 10 A.M. Eastern Standard Time (EST) in New York City, from early March to early November it's 11 A.M. Eastern Daylight Time (EDT) in the New York (Eastern Daylight Time is only 4 hours behind UTC). So, when Daylight Saving Time is in effect, the difference between UTC and time zones in the U.S. is one hour less than what's indicated on the map above. By the way, it is bad form to say "Daylight Savings Time." Save yourself the trouble, and don't put the "s" on the end of "saving."
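
If you'd like to see this arithmetic written out, here's a minimal Python sketch of the conversions described above. The offset table and function names are my own for illustration; the `dst` flag simply shifts the offset one hour toward UTC, mirroring the Daylight Saving rule:

```python
from datetime import datetime, timedelta

# Standard-time offsets from UTC (in hours), as on the time zone map.
STANDARD_OFFSETS = {"EST": -5, "CST": -6, "MST": -7, "PST": -8, "AST": -9}

def utc_to_local(utc: datetime, zone: str, dst: bool = False) -> datetime:
    """Convert a UTC time to local time for a U.S. standard time zone."""
    offset = STANDARD_OFFSETS[zone] + (1 if dst else 0)
    return utc + timedelta(hours=offset)

def local_to_utc(local: datetime, zone: str, dst: bool = False) -> datetime:
    """Convert a local time back to UTC (the reverse operation)."""
    offset = STANDARD_OFFSETS[zone] + (1 if dst else 0)
    return local - timedelta(hours=offset)

# 15Z is 10 A.M. Eastern Standard Time:
print(utc_to_local(datetime(2024, 1, 5, 15, 0), "EST"))  # 2024-01-05 10:00:00
# 9 A.M. Central Standard Time in Chicago is 15Z:
print(local_to_utc(datetime(2024, 1, 5, 9, 0), "CST"))   # 2024-01-05 15:00:00
```

Because `timedelta` arithmetic carries across midnight, the date changes automatically when a conversion crosses 0000 hours, which is exactly the bookkeeping you have to do by hand.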

Want to see a few quick examples of time conversions between UTC and local time zones? Check out the short video (4:07) below:

Let’s do some sample time conversions between Universal Time and local time zones in the U.S. Let’s start in Cincinnati, Ohio, which is in the eastern time zone. According to our time zone map, the eastern time zone is 5 hours behind UTC, so we have to subtract 5 hours to make the conversion. If we imagine that it’s 13Z on January 5, which means Daylight Saving Time is not in effect, we subtract 5 hours from 1300Z, and that gives us 0800 hours on a 24-hour clock, which is 8 A.M. Eastern Standard Time. If we were doing the same conversion from 13Z, but it was on June 5, when Daylight Saving Time is in effect, there would be a slight difference. We would start the same way, subtracting 5 hours to get 0800 hours, but because Daylight Saving Time is in effect, local clocks have jumped an hour ahead, so we add that hour to get 0900 hours, or 9 A.M. Eastern Daylight Time. In effect, during Daylight Saving Time, we’re really subtracting 4 hours instead of 5.

Let’s go over to the Central Time Zone and St. Louis, Missouri. According to our time zone map, the central time zone is 6 hours behind UTC, so we have to subtract 6 hours to make the conversion. If it’s 04Z on February 10, when Daylight Saving Time is not in effect, we subtract 6 hours from 0400Z. In doing so, we have to cross midnight local time so the date will change. It’s 4 hours to get back to midnight, and we still have to subtract 2 more hours for a total of 6. That gives us 2200 hours, or 10 P.M. Central Standard Time on February 9 in St. Louis. If we do the same conversion on June 10 when Daylight Saving Time is in effect, remember that we have to add in 1 hour, which gives us 11 P.M. Central Daylight Time on June 9. So, effectively, during Daylight Saving Time, we only have to subtract 5 hours to make our conversion for St. Louis.

If we’re in the mountain time zone at Salt Lake City, our time zone map says that mountain time is 7 hours behind UTC, so we have to subtract 7 hours to make the conversion. If it’s 1900Z on December 20, there’s no Daylight Saving Time in effect. We subtract 7 hours from 19Z to get 1200 hours, or 12 noon Mountain Standard Time. If we do the same conversion in July when Daylight Saving Time is in effect, again, we start the same way, but we have to add in the hour for Daylight Saving Time, which gives us 1300 hours on a 24-hour clock, or 1 P.M. Mountain Daylight Time. Effectively, we only have to subtract 6 hours during Daylight Saving Time to make this conversion.

Remember that Z-time, or UTC time, is universal. So, if it’s 15Z, that converts to 10 A.M. Eastern Standard Time, 9 A.M. Central Standard Time, 8 A.M. Mountain Standard Time, and 7 A.M. Pacific Standard Time. All of these local times occur at 15Z.

Finally, what if we need to convert the other way – from local time to UTC? Let’s do a quick example at Cincinnati, which is 5-hours behind UTC during standard time in the eastern time zone. If it’s 7 A.M. Eastern Standard Time on January 15, then we need to add 5 hours to local time to make the conversion. That’s 0700 hours plus 5 hours to get to 1200 hours, or 12Z. If we had to make the same conversion at 7 A.M. Eastern Daylight Time in June, then we end up having to add an hour less to make the conversion, and 0700 hours plus 4 hours gives us 11Z.

Credit: Penn State

Please note that the International Date Line (opens in a new window) zig-zags across the Pacific Ocean in an attempt not to inconvenience local time keeping (traveling westward across the date line results in the calendar advancing one day). For convenience, the abrupt zig-zag in the International Date Line south of Siberia allows Alaska's long Aleutian Island chain to be in the same time zone as the rest of the state (Alaska Standard Time, AST, is 9 hours behind UTC).

Now that you know how time conversions work, the best way to really get comfortable with knowing what time it is anywhere in the world is to do some practicing. Make sure to spend some time on the Key Skill questions and the Quiz Yourself tool below.

Key Skill...

Here are a few examples for you to try (you'll likely need to refer to the map of time zones above)...

Example #1:

Say that it starts raining at your house in Denver, Colorado, and the time is 20Z on June 23. What was the local time in Denver when the rain started?

Answer: We notice from the map above that Denver is located in the UTC-7 time zone. However, since Daylight Saving Time is in effect (in June), Denver is only 6 hours behind UTC. So, if we subtract 6 hours from 20Z, we get 1400 local daylight time on June 23 (or 2:00 P.M. on June 23). Note that when talking about local time, we DO NOT use the "Z" or UTC designation (because we have converted out of that time zone). When talking about local time, you should typically say "Local Standard Time" (LST) or "Local Daylight Time" (LDT).


Example #2:

You pull up a weather map on your favorite smartphone app at 10:35 P.M. local time on December 18 in New York, NY. What time stamp would be on this image if it was expressed in Z-time?

Answer: We notice from the map above that New York is located in the UTC-5 time zone, meaning that New York is 5 hours behind UTC. So to convert from local time to UTC, we need to add 5 hours. 10:35 P.M. can also be written as 2235 hours on a 24-hour clock, so 2235 + 5 hours = 0335Z. Since we crossed over local midnight when making our conversion, we also need to increment the date by one. Therefore, the time stamp on the image would be 0335Z on December 19.


Example #3:

You're vacationing on the Big Island of Hawaii, and your plane lands at 03Z on January 3. What local time is this (in Hilo, Hawaii)?

Answer: We notice from the map above that Hawaii is located in the UTC-10 time zone. So, we must subtract 10 hours from 03Z, which gives us 1700 local standard time on January 2 (or 5:00 P.M. on January 2). Notice that we have to subtract a day because we passed 0000 (local midnight) when converting.
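
Examples #2 and #3 both cross local midnight, which is the easiest place to slip up when converting by hand. As a quick check (purely illustrative; you won't need code on the quiz), Python's `datetime` arithmetic handles the date rollover automatically:

```python
from datetime import datetime, timedelta

# Example #1: 20Z on June 23 in Denver (UTC-7, but effectively UTC-6 during DST)
print(datetime(2024, 6, 23, 20, 0) - timedelta(hours=6))    # 2024-06-23 14:00:00

# Example #2: 10:35 P.M. local on December 18 in New York (UTC-5) -> UTC
print(datetime(2024, 12, 18, 22, 35) + timedelta(hours=5))  # 2024-12-19 03:35:00

# Example #3: 03Z on January 3 -> local time in Hilo, Hawaii (UTC-10)
print(datetime(2024, 1, 3, 3, 0) - timedelta(hours=10))     # 2024-01-02 17:00:00
```

Notice that Examples #2 and #3 land on December 19 and January 2, respectively, matching the date increments worked out in the answers above.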

Quiz Yourself...

Think you understand how to convert between local time and GMT? Take this self-quiz below to see how you do. Select whether you want to practice converting local time to GMT or GMT to local time (or "Either"). Then hit the "Quiz me" button. Use the provided drop-down menus to fill in the missing time and date. Click "Submit" to check your answer. Make it a goal to get at least five in a row correct. If you can get five in a row, you've likely got the hang of things!

Taking Temperature

Prioritize...

This page contains some important concepts about temperature. Make sure that you can discuss different temperature scales and identify / interpret the temperature on a station model.

Read...

While you probably think of temperature as "how hot or cold something is," that's a pretty ambiguous definition (since "hot" and "cold" are somewhat subjective). More precisely, temperature is a measure of energy. You see, air molecules are restless little lumps of matter, continually vibrating, wriggling and bumping into their many neighbors. As air temperature increases, the molecular dance becomes increasingly frenetic. At a temperature of 72 degrees Fahrenheit, the average speed of air molecules is about 1,000 miles an hour, which translates into ample kinetic energy (energy of motion). Thus, air temperature is a measure of the average kinetic energy of air molecules (air consists mostly of nitrogen and oxygen molecules (opens in a new window)).

In the United States, we typically express temperature using the Fahrenheit temperature scale (opens in a new window), but most countries in the world use the Celsius temperature scale (opens in a new window) (undoubtedly, you've heard temperature expressed in "degrees Fahrenheit" or "degrees Celsius" before). At some point, you'll encounter instances when you need to convert between the two scales. In those circumstances, the National Weather Service temperature conversion calculator (opens in a new window) is great!

To give you some weather context, the North American all-time marks for highest and lowest temperatures are, respectively, 134 degrees Fahrenheit (56.7 degrees Celsius) in California's Death Valley (see the photograph below), and -81.4 degrees Fahrenheit (-63 degrees Celsius) at the village of Snag (near Beaver Creek) in the Yukon Territory of Canada (opens in a new window). If you're interested in current global temperature extremes, this website summarizes the extremes (opens in a new window) from all the hourly weather observations around the world.

The stark but beautiful landscape of Death Valley, California, from Zabriskie Point.

You may also be familiar with some other common temperature markers:

  • 100 degrees Celsius (212 degrees Fahrenheit) is the boiling point of water
  • 37 degrees Celsius (98.6 degrees Fahrenheit) corresponds to normal body temperature
  • 22.2 degrees Celsius (72 degrees Fahrenheit) represents the "ideal" room temperature
  • 0 degrees Celsius (32 degrees Fahrenheit) is the melting point of ice

Note that I referred to 0 degrees Celsius (32 degrees Fahrenheit) as the melting point of ice, and not the freezing point of water. That phrasing was chosen deliberately! Indeed, ice melts at 0 degrees Celsius, but not all water freezes at 32 degrees Fahrenheit! This fact has some important consequences for how precipitation forms in clouds, which we'll get into later in the course (for more details, check out the Explore Further section below). So, when you hear that 0 degrees Celsius or 32 degrees Fahrenheit is the freezing point of water, keep in mind that it's not always true.

By the way, there are other temperature scales besides Celsius and Fahrenheit. For example, there's the Kelvin scale (opens in a new window) (sometimes called the absolute temperature scale). Please note that the number of kelvins = the number of degrees Celsius + 273.15. So, the melting point of ice is 273.15 kelvins and the boiling point of water, at standard pressure, is 373.15 kelvins (100 degrees Celsius or 212 degrees Fahrenheit). For the record, it's bad form to say "degrees kelvin." Indeed, the proper way to express the units of absolute temperature is simply "kelvins." Also note that the word "kelvins" is never capitalized except where any word would be capitalized, such as at the beginning of a sentence. The Kelvin scale is used commonly in the physical sciences, and in fact it's the most direct way to describe the relationship between the average speed of air molecules and their temperature (higher temperatures = faster average molecule speeds).
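
The conversions among these three scales are simple enough to write out yourself. Here's a quick Python sketch (the function names are mine, for illustration):

```python
def f_to_c(f):
    """Fahrenheit to Celsius: C = (F - 32) * 5/9."""
    return (f - 32.0) * 5.0 / 9.0

def c_to_f(c):
    """Celsius to Fahrenheit: F = C * 9/5 + 32."""
    return c * 9.0 / 5.0 + 32.0

def c_to_k(c):
    """Celsius to kelvins: K = C + 273.15 (no 'degrees'!)."""
    return c + 273.15

print(c_to_f(100.0))            # 212.0 (boiling point of water)
print(f_to_c(32.0))             # 0.0 (melting point of ice)
print(c_to_k(0.0))              # 273.15 (melting point of ice, in kelvins)
print(round(f_to_c(134.0), 1))  # 56.7 (the Death Valley record high)
```

Checking the landmark temperatures from this page against these formulas is a good way to convince yourself you have the arithmetic straight.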

Now that you know what temperature is, the next step is to be able to identify and interpret temperatures from a station model, which is covered in the Key Skill section below.

Key Skill...

A sample of a station model with temperature (52 degrees Fahrenheit) annotated.
Credit: David Babb

In this lesson, you will learn not only about some of the basic observed atmospheric variables, but also how these variables are represented on a station model. As mentioned previously, station models are a graphical way of displaying the different types of data collected at each observing site. Figuring out the temperature from a station model is pretty straightforward. As shown in the sample station model on the right, the number located in the upper-left corner of the model is the station temperature expressed in degrees Fahrenheit (degrees Fahrenheit is the standard used on surface station models in the United States, but many other countries use degrees Celsius). In this case, the station temperature is 52 degrees Fahrenheit.

I also strongly recommend practicing with the interactive station model tool below (which we'll be coming back to throughout the lesson). You can alter the temperature (using the input field on the right) to see how the station model changes. The default setting is 72 degrees Fahrenheit, but if you change that number, you will see the number located in the upper-left corner of the station model change accordingly. Don't worry about the other numbers and symbols on the station model quite yet. We'll be covering those throughout the remainder of the lesson.

Finally, check out the most current surface observations (opens in a new window), and pick out three or four station models. You should be able to identify and interpret the temperature at each.

Explore Further...

In my opinion, the temperature that frequently causes the most confusion is 32 degrees Fahrenheit (0 degrees Celsius). For example, many people automatically assume that the air temperature has to be 32 degrees Fahrenheit or lower for precipitation to fall as snow. But, I've seen it snow at 44 degrees Fahrenheit in early spring! On the flip side, I've seen it rain when the air temperature was 11 degrees Fahrenheit in winter. Granted, the rain froze after it hit the ground, trees, and power lines (opens in a new window), etc. (photo credit: Steve Seman). We'll explore these mysteries regarding precipitation later in the course, so stay tuned!

Precipitation type isn't the only misconception surrounding 32 degrees Fahrenheit (0 degrees Celsius). Another is the idea that people "freeze to death." The dangerous implication of this myth is that you can't die unless the temperature is below 32 degrees Fahrenheit and that you die by turning into an ice cube! But, people don't freeze to death. People die of exposure or hypothermia (opens in a new window), and this affliction can occur when air temperatures are in the 40s or even the 50s, and death occurs when your core body temperature is far above 32 degrees Fahrenheit.

Finally, you may read articles or hear weather broadcasters refer to 32 degrees Fahrenheit as "freezing." Technically speaking, only pure water freezes at 32 degrees Fahrenheit. As it turns out, most ordinary water is "filthy" (it contains dissolved impurities) and freezes at temperatures lower than 32 degrees Fahrenheit! For example, the average concentration of salt in seawater is about 3.5 percent. At this salinity, the freezing point of ocean water is about 28.5 degrees Fahrenheit. So, it's accurate to say that 32 degrees Fahrenheit is the melting point of ice, but it's not really the freezing point of water in most practical situations.

As a consequence, water can exist as a liquid at temperatures well below 32 degrees Fahrenheit. Check out the pair of photographs (below) documenting a home experiment. I placed water drops onto the bottom of an empty tin can and then shoved the can in a freezer for several minutes (the photograph on the left is the "before" picture and the photograph on the right is the "after" picture). Please note that some drops froze while others did not. I'll explain this discrepancy in a later lesson, but I just wanted you to see with your own eyes that water and ice can simultaneously exist at (and below) 32 degrees Fahrenheit.

Experiment with water drops placed on a tin can set in a freezer.

I carefully placed nine drops of water on a can (left) and put the experiment in my kitchen freezer. After several minutes, five drops had frozen and four had not. Lesson learned: Water can exist as a liquid at temperatures below 32 degrees Fahrenheit (0 degrees Celsius).
Credit: David Babb

Viewing Visibility


Prioritize...

This page details the atmospheric variable, visibility. Before leaving this page, make sure that you can do the following:

  • Describe what visibility is and the types of atmospheric conditions that can affect visibility.
  • Identify and decode the visibility observation on a station model (if displayed).
  • Identify when an "obstruction to visibility" symbol (that is, present weather) must be listed along with the visibility measurement on a station model.
  • Identify and decode the "present weather" symbol (if shown).

Read...

A view of the hazy mountain ridges in central PA.

Haze slightly obscures a ridge near the airport at University Park, Pennsylvania, on an early summer day.
Credit: David Babb

Meteorologists are very interested in horizontal visibility (the maximum distance away that an observer can see an object located near or on the ground), because it has major implications for transportation. Very poor visibility can cause major traffic accidents (opens in a new window) and airline catastrophes. Obviously, visibility is very important for pilots during take-off and landing, and if you want to see what it's like for a pilot to land in poor visibility, check out this video of a Boeing 737 landing in poor visibility (opens in a new window) in London (view from the cockpit).

Often, visibility can vary in the 360-degree panorama around a weather station. For example, there could be visibility-restricting snow showers just to the north and west of the station, disproportionately reducing visibility in those quadrants. To see what I mean, check out the panoramic view on a wintry day (opens in a new window) with recurrent, scattered snow showers in the vicinity of Penn State's main campus, and note that parts of the nearby ridges can't be seen because of snow showers. In such a situation, an observer reports a "representative visibility." On days when horizontal visibility dramatically varies over the 360-degree panoramic view around an airport or weather station, a trained weather observer determines a single visibility that reasonably describes more than half the 360-degree panorama. In more precise terms, a representative visibility is the greatest distance that objects can be observed and identified over more than 180 degrees of the panoramic view around an airport or weather station.

Horizontal visibility can run the gamut. On a perfectly clear day, you can't see forever, but visibility can reach approximately 100 miles in the mountainous West. On the other hand, visibility can lower to near zero in very dense fog (opens in a new window), fierce blowing and/or falling snow (opens in a new window), blowing sand/dust (opens in a new window), smoke (opens in a new window), etc. Automated weather stations, however, typically report the visibility as 10 miles when no obstructions to visibility are present.

Snow showers obscuring the distant mountain ridges in central PA

A snow shower reduces visibility along Tussey Ridge on the outskirts of State College, Pennsylvania.
Credit: David Babb

Obstructions to visibility such as fog, haze (opens in a new window), and smoke are considered non-precipitating "present weather," and their reductions to visibility can be very noticeable. For example, check out these photographs of the ridges south of Penn State's main campus on two different summer days -- one with a clean atmosphere (opens in a new window) and one with a hazy atmosphere (opens in a new window). For a dramatic example of a non-precipitating obstruction to visibility that may be of special interest to aviators, check out the Explore Further section below.

Precipitation can also reduce visibility by varying degrees. The degree to which precipitation reduces horizontal visibility gives rise to a hierarchy of qualifiers such as light snow, moderate snow, and heavy snow. Indeed, when dealing with snow, the qualifiers of light, moderate, and heavy are actually defined in part by horizontal visibility. Rain can also reduce horizontal visibility, but its qualifiers of light, moderate, and heavy are defined by rainfall rate (not horizontal visibility).

Now that you know the types of conditions that can reduce visibility, let's take a look at how visibility is displayed on the station model, which is covered in the Key Skill section below. Before you dive into that section, one thing to note is that non-precipitating obstructions to visibility are displayed as present weather on the station model only if the horizontal visibility is less than or equal to seven miles. Why seven miles? Typically, a radio beacon that aircraft use while landing called the outer marker (opens in a new window) lies four to seven miles away from the start of the runway. So, if there's an obstruction to visibility that prevents pilots from seeing the runway from the outer marker, then the obstruction must appear on the station model.

Meanwhile, when precipitation falls at an airport, it is always depicted on the local station model as present weather, no matter how light it is or how little it affects visibility. This protocol exists because pilots always need to know when precipitation is occurring at an airport, not only because it restricts horizontal visibility, but also because it lowers the heights of cloud ceilings, both of which come into play during take-off and landing. To summarize when the obstruction to visibility symbol is displayed on a station model, review this flow chart (opens in a new window).
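That decision rule can be sketched in a few lines of Python. This is just an illustrative helper (the function name is made up), assuming the "seven miles or less" threshold described above:

```python
def show_present_weather(precipitating, visibility_miles):
    """Decide whether a present-weather symbol belongs on a station model.

    Precipitation is always plotted, no matter how light. A non-precipitating
    obstruction (fog, haze, smoke, etc.) is plotted only when the horizontal
    visibility is seven miles or less.
    """
    if precipitating:
        return True
    return visibility_miles <= 7.0

print(show_present_weather(True, 10.0))   # True  (light rain, excellent visibility)
print(show_present_weather(False, 5.0))   # True  (e.g., haze down to five miles)
print(show_present_weather(False, 10.0))  # False (no obstruction to report)
```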

Key Skill...

A sample station model, with visibility and present weather annotated

A sample of a station model with a visibility of one-and-a-half miles because of moderate rain.
Credit: David Babb

Given that many of the primary weather stations are located at airports, horizontal visibility has a special place reserved on the station model. To locate horizontal visibility, look below and to the left of the air temperature. The leftmost number (if present) represents the horizontal visibility reported in statute miles (one-and-a-half miles in the sample station model on the right). The symbol (again, if present) just below the temperature represents the present weather ("moderate rain" in the example on the right). Remember that present weather will always appear if it's precipitating, but when the obstruction to visibility is non-precipitating, it will appear only if it reduces visibility to seven miles or less.

You should spend some time familiarizing yourself with all of these common symbols for present weather (opens in a new window), but you can also see the entire table of international symbols for present weather (opens in a new window) if you're interested (some of them rarely get used). I also recommend practicing with the interactive station model tool below. The default value for visibility in the tool is one-and-a-half miles (the far left number), but you can change the visibility on the station model by altering the "Visibility" field in the Current Conditions panel. Give it a try! Next, examine the "Obstruction to Visibility" pull-down list in the Current Conditions panel. The default weather for the tool is rain showers (a single dot with a downward-facing triangle). Experiment with the various observations to see the symbols that they produce, and notice that you can select both precipitating and non-precipitating types of weather (as long as the visibility is seven miles or less).

Finally, change the visibility to, say, 10 miles and then note how the Obstruction pull-down menu changes. First of all, the option "none" becomes available since you are no longer required to report an obstruction to visibility. Secondly, the non-precipitating types of weather are removed from the list because these are only reported if the visibility is seven miles or less.

Explore Further...

Away from airports, pilots routinely report adverse flying conditions. Appropriately called Pilot Reports (PIREPs), these in-flight observations catalog turbulence, icing, and weather / sky conditions (website for PIREPs (opens in a new window)). In the United States, air-traffic controllers solicit pilot reports whenever any of the following are present or predicted for their area of responsibility: icing, turbulence, thunderstorms, wind shear, visibility lower than five miles, low ceilings, and volcanic ash. That's right...volcanic ash! As a dramatic example, in the spring of 2010, the Eyjafjallajökull volcano in southern Iceland (opens in a new window) erupted spectacularly (see photograph below), spewing large volumes of ash into the atmosphere (view of Eyjafjallajökull eruption from space (opens in a new window)) and temporarily bringing commercial flights to a halt in the British Isles and other parts of Europe. That's because jet engines can fail when they ingest volcanic ash, which obviously poses a serious threat to aviation.

A tremendous ash cloud being produced by an Iceland volcano.

A striking photograph of the Eyjafjallajökull volcano erupting over southern Iceland on April 17, 2010.

Making Do with Dew Points


Prioritize...

Dew points are extremely useful, but also often misunderstood. When you finish this section, you should be able to relate dew points to water vapor concentration in the atmosphere as well as identify and interpret dew point on a station model. Dew points are at the heart of many water-related processes in the atmosphere (condensation, cloud formation, etc.), so we'll be building off of the fundamental concepts in this section later on.

Read...

Everyone will surely recognize that water is an important player in weather, so meteorologists must have weather variables that help them assess moisture. One such variable is dew point temperature. By definition, the dew point is the approximate temperature to which the water vapor (the gaseous form of water) in the air must be cooled (at constant pressure) in order for it to condense into liquid water drops. I emphasize here that dew point is a temperature, so it's typically expressed in degrees Fahrenheit or Celsius.

As it turns out, the dew point temperature is also an absolute measure of the amount of water vapor in the air. The higher the concentration of water vapor, the higher the dew point (and the lower the concentration, the lower the dew point). What constitutes "high" and "low" dew points? At the surface of the earth, the lowest dew points tend to be found during winter, in bitterly cold, dry air masses from the Arctic, where dew points can be well below 0 degrees Fahrenheit. On rare occasions, dew points in such air masses in the northern United States can drop to -50 degrees Fahrenheit or lower! On the flip side, the highest dew points tend to be found during summer in warm, moist, "tropical" air masses. In the summer, these air masses frequently have dew points above 70 degrees Fahrenheit. On occasion in the United States (usually for short periods of time), dew points can even rise into the low 80s, but extremely rarely climb higher than that. If you want to learn more about extreme dew points, check out the Explore Further section toward the bottom of this page.

The fact that dew point serves as an absolute measure of the amount of water vapor in the air sets dew point temperature apart from many of the other variables that describe moisture in the atmosphere. These other variables have their uses, but they also depend on other factors beyond just the amount of water vapor present. We'll talk more about some of these other variables later in the course. Moisture is a fairly complicated topic, so we're just going to scratch the surface for now!

To better understand dew point and its applications, we should start with the characteristics and behavior of water vapor. As mentioned above, water vapor is the gaseous form of water. You probably learned at some point that matter exists in three states -- solid, liquid, and gas. Well, water is one of the rare substances that can exist in all three states naturally in our atmosphere. Water's solid (ice) and liquid forms are evident all around us, but the gaseous form (water vapor) might not be so obvious. Just like other gases (oxygen, nitrogen, carbon dioxide, etc.) water vapor is invisible.

A consequence of this is that standard photographs really don't show water vapor, even if they claim to. For example, check out this image of a steaming tea kettle (opens in a new window). Within the effluent escaping from the spout, where is the water only in vapor form? Hint: It’s not in the part you can see. Although some water molecules are likely in the vapor state mixed within the visible “cloud,” the water that you can see is actually in the form of tiny liquid drops. If you look closely, there appears to be a gap between the tea kettle’s spout and the visible cloud (here's an annotated image of the tea kettle (opens in a new window)). This is where the water exists in a pure vapor state. In fact, this gap is the only portion of the effluent that is “steam,” or super-heated water vapor.

Colorful marbles

It is often helpful to think of air molecules as marbles. No one molecule can "hold on to" another.
Credit: Public Domain

Ultimately, water vapor behaves just like any other gas; on a molecular level, it acts just like oxygen, nitrogen, carbon dioxide, etc. Consider a situation where you had a box of “air” (containing all of the molecules normally found in the atmosphere). This is very much like having a box of various colored marbles. These marbles (because they have a lot of energy) are zooming around, bouncing off the sides of the box and each other. However, each marble is acting independently of the others. This means that in our box of air, the oxygen molecules are acting independently of (and oblivious to) other molecules – including water vapor molecules. The implication of all of this is: Air does not “hold” water vapor, and has no "holding capacity" for water vapor (common, but incorrect, phrases used to explain water processes). Air isn't like a sponge that can't absorb any more water once its pores become filled with water. Indeed, all the air molecules in our box, combined, would occupy only a really tiny fraction of the space in the box, no matter what. So, there's always enough room for more water vapor molecules. We're going to expand on these ideas later in the course when we talk about topics like cloud formation, but I wanted to lay the groundwork for thinking correctly about water vapor now (it will help later on).

Now we need to discuss the processes by which water changes phase (namely to and from water vapor). When transitioning from a gas to a liquid, water undergoes a process called condensation. Likewise, when transitioning from a liquid to a gas, the process is called evaporation. We'll explore these (and other) “phase transitions” in more detail later in the course; however, at this point, I want to emphasize that evaporation and condensation events are taking place all the time, everywhere around you, even if you can't see them. Surprised? Allow me to illustrate.

Metal cup half full of cold water. Condensation on cup clearly shows water level.

Is the glass of cold water half full or half empty? You can tell by the "dew" on the outside of the glass.
Credit: David Babb

Take a look at the metal glass roughly half-filled with cold water on the right. The bottom of the glass is obviously coated with a layer of small liquid water drops (often called “dew”), while the top is not. Why is that? Molecules of water in the gas phase (water vapor) are zinging around in the air, but when a water vapor molecule strikes an object (like the side of the glass), it may “stick” (that is, condense on the surface). I say “may” because there is only some chance that the molecule is captured by the surface. If it does stick, then there is another chance that within some time frame, the molecule will become “unstuck” (that is, evaporate from the surface) and return to the gas phase. Thus, on all surfaces, there is a chance of condensation and a chance of evaporation for each gas molecule that encounters a surface, which means that we always have a rate of condensation and a rate of evaporation for every surface.

So, molecules are impacting (condensing on) and leaving (evaporating from) both the top and bottom surfaces of our glass half-covered in dew. But then why is the bottom covered in tiny liquid drops while the top remains dry? The answer lies in the fact that the rates of condensation and evaporation are not equal everywhere. On the bottom of the glass, the rate of evaporation is less than the rate of condensation; therefore, there is a net increase in liquid water (we say “net condensation”). On the top of the glass, the rate of evaporation is greater than the rate of condensation, meaning that there's a net decrease in liquid water (we say “net evaporation”). Since the glass is about half full of cold water, you might have guessed that temperature is playing a role here (and you're correct). The colder part of the glass has a lower evaporation rate, which allows tiny water drops to grow via net condensation (condensation occurs faster than evaporation does on this part of the glass).

Now with some background about water vapor's behavior, let’s revisit our definition of dew point temperature. We said that the dew point is the approximate temperature to which the water vapor in the air must be cooled in order for it to condense into liquid water drops, and that the dew point temperature is an absolute measure of the amount of water vapor in the air -- the higher the concentration of water vapor, the higher the dew point. Can you now see how these two ideas connect? If the air contains a high concentration of water vapor (dew points are high), then net condensation will occur at a higher temperature (that is, at a high dew point temperature). If water vapor concentrations are very low (dew points are low), then net condensation will not occur until the air is very cold (that is, at a low dew point temperature). If the dew point temperature is less than 32 degrees Fahrenheit, the term frost point is, technically, more appropriate than "dew point" because frost (opens in a new window) will form (by a process called deposition, not condensation) instead of dew.

One final practical point about dew point: because higher dew points correspond to higher concentrations of water vapor, the dew point by itself serves as an indicator of the way the air “feels” – whether it be dry or muggy. Since our skin temperature is regulated to some degree by the evaporation of sweat, it stands to reason that our comfort is affected by the dew point temperature. Certainly, describing how something “feels” can be a bit dicey in a science course because it’s a somewhat subjective topic, but examine the table below for a rough guide on how the air might “feel” based on dew point temperature.

A general level of human comfort versus various dew point temperatures.
Dew Point | General Level of Comfort
60 degrees | For most people, the air starts to feel a tad "muggy" or "sticky."
65 degrees | The air starts to feel "muggy" or "sticky."
70 degrees | The air is sultry and tropical and generally uncomfortable.
75 degrees or higher | The air is oppressive and stifling.
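The table's thresholds can be turned into a rough lookup. Here's a minimal Python sketch (the function name is made up, the thresholds follow the table above, and the wording for dew points below 60 degrees is my own assumption):

```python
def dew_point_comfort(dew_point_f):
    """Rough comfort description for a dew point in degrees Fahrenheit,
    following the comfort table (thresholds are approximate)."""
    if dew_point_f >= 75:
        return "oppressive and stifling"
    if dew_point_f >= 70:
        return "sultry, tropical, and generally uncomfortable"
    if dew_point_f >= 65:
        return "muggy or sticky"
    if dew_point_f >= 60:
        return "a tad muggy or sticky for most people"
    return "comfortable for most people"  # below the table's range (assumed)

print(dew_point_comfort(72))  # sultry, tropical, and generally uncomfortable
```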

Now that you know some basics about dew point and the characteristics and behavior of water vapor, let's shift gears to looking at dew points on station models, which is covered in the Key Skill section below.

Key Skill...


A sample of a station model with dew point (46 degrees Fahrenheit) annotated.
Credit: David Babb

Finding the dew point on a station model is fortunately much simpler than the details of how water vapor behaves! The number located in the lower-left corner of the model is the station dew point in degrees Fahrenheit (or Celsius, depending on the country of origin). In the case of the station model on the right, the dew point temperature is 46 degrees Fahrenheit.

I also encourage you to check out the interactive station model tool below. The tool defaults to a dew point temperature of 63 degrees Fahrenheit, but feel free to alter the dew point temperature (using the input field on the right) and see how the station model changes. You can also check out the most current surface observations (opens in a new window), and pick out three or four station models. You should be able to identify and interpret the dew point at each. By this point, you should be familiar with all the numbers and symbols (temperature, dew point, visibility, and present weather) on the left-hand side of a station model!
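To recap the left-hand side of the station model, here's a minimal Python sketch that bundles the four quantities covered so far. The class and field names are hypothetical; the sample values are drawn from this lesson's example station models:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class StationModelLeftSide:
    """The left-hand side of a U.S. surface station model.

    Positions follow the sample station models in this lesson.
    """
    temperature_f: float                      # upper-left corner
    dew_point_f: float                        # lower-left corner
    visibility_miles: Optional[float] = None  # far left, if reported
    present_weather: Optional[str] = None     # symbol just below the temperature

# Sample values combined from this lesson's example models:
sample = StationModelLeftSide(temperature_f=52, dew_point_f=46,
                              visibility_miles=1.5,
                              present_weather="moderate rain")
print(sample.dew_point_f)  # 46
```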

Explore Further...

Extreme Dew Points

The region of the world with the highest dew points is near the Persian Gulf (opens in a new window) in the Middle East, where dew points in the summer can exceed 90 degrees Fahrenheit on occasion. Such high dew points correspond to some of the highest water vapor concentrations on Earth! Extremely high dew points in the United States can't quite match those numbers, but they can come close! For an example of the upper limits that dew points can reach, check out (below) the 01Z analysis of surface dew points on July 20, 2011 (the evening of July 19), and note the readings in the low 80s in North Dakota (the small, darker-green pocket). Indeed, the 00Z station model observations on July 20th (opens in a new window) show numerous dew point readings over 80 degrees throughout North Dakota and western Minnesota. Meanwhile, at the local observing station in Moorhead, Minnesota (not shown on the map), the dew point climbed to an incredible 88 degrees Fahrenheit, setting the all-time record for the highest dew point ever recorded in the state!

Contour map of dew point temperatures showing dew points above 80F over North Dakota.

The 01Z analysis of surface dew points on July 20, 2011 (the evening of July 19). Note the small, darker green pocket of dew points higher than 80 degrees in North Dakota.
Credit: WW2010, University of Illinois

Such extremely high dew points typically develop from a combination of factors. In this case, strong winds from the south all throughout the Great Plains brought moist air northward, all the way from the Gulf of Mexico. This region also experienced strong storms just the night before, leaving the ground saturated with moisture (which was evaporating during the heat of the day, adding water vapor molecules to the air). Finally, this was the height of the growing season so plants were strongly transpiring (opens in a new window), adding yet more water vapor to the air.

Turning our attention to the lower end of the observable dew point scale, check out this station model plot from 11Z on a bitterly cold January day (opens in a new window). Notice the -47 and -45 degree Fahrenheit dew points located over northern Minnesota. That's some really dry air, folks! While such low dew points are rare for the continental United States, it is easier to find similar readings in the source region of these Arctic "chunks" of air (as in this station model plot for Alaska (opens in a new window)). Notice the extremely low dew points in the interior of Alaska and the Yukon Territory of Canada -- there's even a -50 degree Fahrenheit reading! Such low dew points are more common at these latitudes because low evaporation rates over bitterly cold ice- and snow-covered ground mean that very few water vapor molecules enter the air.


Considering Clouds (and Slicing Pie)


Prioritize...

Pay particular attention to the table at the bottom of the reading section. You should be able to describe all of the terms in the table and be able to interpret cloud cover in a station model observation. Also, make sure that you understand the conditions that dictate when the observation "sky obscured" must be used.

Read...

Let me start with the age-old question: "Which phrase do you think describes a cloudier sky -- partly sunny or partly cloudy?" The answer to that question might depend on who you ask. The National Weather Service defines partly sunny (opens in a new window) and defines partly cloudy (opens in a new window) as essentially the same, with the caveat that we wouldn't use "partly sunny" at night, of course. But, in practice, some forecasters use these terms differently because the word "partly" is somewhat vague, so it's not clear-cut. Some folks use "partly sunny" to emphasize that there will be a bit more clouds than sun, and use "partly cloudy" to emphasize that there will be a bit more sun than clouds. With this usage, a partly sunny day is actually cloudier than a partly cloudy day.

Most weather forecasters don't want to get drawn into such an argument of semantics, so when it comes to quantifying the coverage of the sky by clouds, they rely on a specific "pie-chart" system that leaves little room for debate (see table below). The "pie" that makes up the sky coverage observation is divided into eight sections. Clear conditions (0/8 cloud coverage) constitute a perfectly sunny sky, while "overcast" conditions (8/8 coverage) constitute a completely cloudy sky. Those two are pretty straightforward. In between those two extremes, a "few" clouds (1/8 to 2/8 coverage) represent mostly sunny (or mostly clear) conditions. "Scattered" clouds (3/8 to 4/8 cloud coverage) correspond to a partly cloudy or partly sunny sky, with "broken" clouds (5/8 to 7/8 cloud coverage) describing a partly cloudy or partly sunny (5/8 coverage) to mostly cloudy (6/8 to 7/8 coverage) sky. When the sky is nearly overcast except for a few breaks, forecasters refer to the cloud coverage as breaks in the overcast (abbreviated as "BINOVC"). This photograph shows an example of BINOVC conditions (opens in a new window) (note the patches of blue sky toward the bottom left of the photo in an otherwise overcast sky). When the sky is broken or overcast, weather observations will include the corresponding cloud ceiling, which is simply the height of the base of a broken or overcast layer of clouds.

Official sky coverage categories (and fractional coverage measures) versus plain-language sky descriptions.
Official Sky Cover Category | Fractional Coverage | Plain-Language Description
CLEAR | 0/8 | Sunny (or clear)
FEW (opens in a new window) | 1/8 - 2/8 | Mostly Sunny (or mostly clear)
SCATTERED (opens in a new window) | 3/8 - 4/8 | Partly Cloudy or Partly Sunny
BROKEN | 5/8 - 7/8 | Partly Cloudy or Partly Sunny (opens in a new window) (5/8) to Mostly Cloudy (opens in a new window) (6/8 or 7/8)
OVERCAST | 8/8 | Cloudy (or overcast)
SKY OBSCURED | (no fraction) | The weather observer can't determine the coverage or ceilings of clouds because near-surface conditions (such as dense fog, heavy rain, blowing snow, smoke, etc.) obscure the sky.
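The categories in the table map cleanly onto eighths of sky cover. Here's a minimal Python sketch of that mapping (the function name is illustrative; "sky obscured" carries no fraction, so it isn't handled numerically here):

```python
def sky_cover_category(oktas):
    """Map eighths of sky cover (an integer, 0-8) to the official category.

    "Sky obscured" is reported without a fraction, so it has no entry here.
    """
    if not 0 <= oktas <= 8:
        raise ValueError("sky cover is reported in eighths (0-8)")
    if oktas == 0:
        return "CLEAR"
    if oktas <= 2:
        return "FEW"
    if oktas <= 4:
        return "SCATTERED"
    if oktas <= 7:
        return "BROKEN"
    return "OVERCAST"

print(sky_cover_category(6))  # BROKEN (a mostly cloudy sky)
```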

On occasion, the sky cover cannot be seen due to near-surface conditions such as dense fog, heavy rain, blowing snow, etc. For example, check out this webcam shot of Penn State's Beaver Stadium in dense fog (opens in a new window). You can't really see the stadium, and you can't really see the sky, either! In such cases when the observer cannot determine the sky coverage, the condition "sky obscured" is reported. Note: Even if the observer is fairly confident that the sky is overcast, if the ceiling cannot be observed, "sky obscured" would still be reported (also note that the observation is "sky obscured," NOT "sky obstructed" -- a common mistake). Also, when sky obscured conditions exist and vertical visibility is very low, you'll sometimes see references to an indefinite ceiling. This simply means that the near-surface conditions (such as dense fog, blowing snow, etc.) have limited the vertical visibility to the point that the cloud ceiling can't be determined.

A sand storm approaching an Army base in Iraq.

A massive sandstorm struck a military base near Al Asad, Iraq, on April 28, 2005. If you were taking a weather observation within the wedge of dust at this time, you would not have been able to determine the cloud ceiling because airborne sand would have obscured the sky. In this case, you would have reported sky obscured with an indefinite ceiling (very low vertical visibility).
Credit: U.S. Army

I should add that thick haze and smoke can also obscure the sky, preventing weather observers from assessing the specific fraction of cloud cover. Thick smoke, for example, often obscures the sky in the vicinity of major wildfires, such as in this striking photograph of the Pine Gulch Fire (opens in a new window) (Credit: Public Domain) in Colorado in 2020. Now that you know the conventions for reporting sky coverage, let's take a look at how to identify and interpret sky coverage on a station model in the Key Skill section below.

Key Skill...


A sample station model with sky coverage labeled. In this case, the sky was mostly cloudy with 6/8 cloud coverage.
Credit: David Babb

Interpreting sky coverage on the station model is fairly intuitive, because the circle in the station model serves as a "pie chart" showing the cloud coverage: generally, the greater the cloud coverage, the larger the portion of the circle that is filled in. In the sample station model on the right, the circle is 75 percent filled in, corresponding to a "mostly cloudy" sky with 6/8 cloud coverage.

I also strongly recommend practicing with the interactive station model tool below. The tool defaults to 6/8 sky coverage, but change the sky coverage in the appropriate pull-down menu located in the Current Conditions panel and observe the change in the station model. Make sure you explore how fractions like 3/8 and 5/8 cloud coverage are depicted (as they might not be quite what you were expecting). Finally, when "sky obscured" is the observation, what does the station model look like? The "X" in the sky coverage circle is the formal designation that the sky is obscured, meaning that near-surface conditions (such as those discussed earlier on this page) prevent the weather observer from observing the sky coverage. Make sure you become fluent in reading the sky coverage "pie chart" on the station model!


Probing Pressure

Prioritize...

After completing this section, you should be able to describe atmospheric pressure, the typical units of pressure that meteorologists use, and the typical range of sea-level pressures observed on earth. By applying this knowledge, along with the guidance in this section, you should be able to decode sea-level pressure from the station model.

Read...

"Pressure...pushing down on me, pressing down on you..."

Those lyrics come from the song "Under Pressure (opens in a new window)" by Queen (featuring David Bowie) from 1981. As we start our investigation of pressure, we have to start with the basics: what is pressure? On that matter, Queen basically nailed it. It's a force that pushes down on me and you (and everything else), although pressure isn't easy to "feel" with our human senses except under certain circumstances. You've probably noticed the effects of pressure if your ears have popped while driving up or down a mountain, or if you've felt discomfort as air pressure decreases when a storm approaches (as many folks with arthritis or bursitis do).

Meteorologists are concerned about atmospheric pressure, which is the pressure exerted by air molecules, and you may recall from a high school science class that pressure is defined as a force per unit area. In a more practical sense, the pressure exerted by air molecules at a weather station is approximately the weight of the air in a column that extends from a fixed area on the ground to the top of the atmosphere. At sea level, the weight of a column of air on one square inch of area is roughly 14.7 pounds, resulting in an air pressure of 14.7 pounds per square inch. For perspective, that amounts to a total force of more than two tons on just the area covered by a single base on a baseball field (an 18-inch by 18-inch area). Surprised?

Meteorologists typically don't work with pressure in pounds per square inch, however. Many home barometers (opens in a new window) (instruments for measuring atmospheric pressure) display pressure in inches of mercury, a unit based on the mercury barometer (opens in a new window). In a mercury barometer, air is evacuated from a glass tube and the open end of the tube is immersed in a reservoir of mercury, allowing air pressure to force mercury up the tube. At sea level, the standard height of the mercury column is 29.92 inches (76 centimeters). More commonly, meteorologists work with pressure in units of millibars (abbreviated "mb"). For reference, an atmospheric pressure of 14.7 pounds per square inch (when the height of a mercury barometer would be 29.92 inches) equals about 1013 millibars.
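To tie the three units together, here's a quick sketch of the conversions. The conversion factors (1 inch of mercury ≈ 33.864 mb; 1 pound per square inch ≈ 68.948 mb) are standard values, and the function names are just for illustration:

```python
# Convert between the pressure units discussed above.
# 1 inch of mercury ~= 33.864 millibars; 1 psi ~= 68.948 millibars.
def inhg_to_mb(inches_hg):
    return inches_hg * 33.864

def psi_to_mb(psi):
    return psi * 68.948

# The standard sea-level values all land near 1013 mb:
print(round(inhg_to_mb(29.92), 1))  # ~1013 mb
print(round(psi_to_mb(14.7), 1))    # ~1013 mb
```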

A satellite image that illustrates the relationship between clouds and surface pressure.

On this image from space, a large shield of clouds marks the domain of a moderately strong low-pressure system off the Middle Atlantic Seaboard, while high pressure fosters mainly clear skies over the Gulf States.
Credit: NOAA

The connection between surface pressure and the weight of a column of air that extends above the surface has many important consequences. For starters, processes that reduce the weight of an air column also act to decrease the surface pressure. On the other hand, processes that add weight to air columns act to increase surface pressure. Evolving horizontal patterns of air pressure are crucial to weather forecasting, which is one of the reasons why forecasters pay such close attention to centers of highest and lowest pressure on weather maps (typically marked by a blue "H" and a red "L", respectively). In a very general sense, low-pressure systems tend to bring inclement weather (clouds and precipitation), while high pressure systems tend to bring "fair" weather (sunshine and relatively calm conditions).

The bottom line here is that when you hear meteorologists refer to a "low-pressure system," they are really talking about a "lightweight." In other words, the air column above the center of a low weighs less than any of the surrounding air columns. On the flip side, a high-pressure system is a "heavyweight" because the air column above the center of the high weighs more than any of the surrounding air columns. Now, I should point out that the difference in pressure between a run-of-the-mill high-pressure system and a pretty strong low-pressure system is only about five percent. In the image on the right, for example, the difference between the labeled high and low is only 32 millibars (1018 millibars - 986 millibars), or roughly three percent. Still, these differences have very important consequences for the weather, as you'll learn!

To give you an idea of the range of sea-level pressures across the world, the average sea-level pressure (computed over the entire earth over a long period of time) is roughly 1013 mb. A very strong high pressure system in the winter may measure around 1050 millibars. On the other hand, a representative value for sea-level pressure at the center of a fierce low-pressure system that can cause, for example, heavy snow during winter might be in the neighborhood of 960 to 980 mb.

An artificial barograph trace showing typical and extreme sea-level pressure values.

This artificial trace of sea-level pressure (formally called a barograph trace) gives you a sense of the range in sea-level pressure readings associated with notorious low- and high-pressure systems. For sake of comparison, the barograph trace includes markers for average sea-level pressure and typical values for generically strong high- and low-pressure systems. In case you're wondering, a barograph is a recording aneroid barometer (opens in a new window) invented by Lucien Vidie, a French engineer, in 1843. Check out a photograph (opens in a new window) of a barograph in action.
Credit: David Babb

As a general guideline, nearly all sea-level pressures lie between 950 millibars and 1050 millibars, with most pressure readings falling between 980 and 1040 millibars. Narrowing down the field even further, sea-level pressures often tend to cluster closer to 1013 mb.

There are exceptions, of course. The bottom of the observed range of sea-level pressures is populated by the "kings" of all low-pressure systems on our planet -- hurricanes (called "typhoons" in some parts of the world). Very intense hurricanes can have sea-level pressures down near 900 millibars. In 2017, for example, at its peak intensity, Hurricane Maria (opens in a new window) had a minimum sea-level pressure of 908 millibars. The storm later went on to devastate Puerto Rico, and its fierce winds completely destroyed the island's NEXRAD Doppler radar (this short video highlights Maria's damage to Puerto Rico (opens in a new window), and includes some stunning images of the damage to the radar, if you're interested). A handful of hurricanes and typhoons globally have even had sea-level pressures drop a bit below 900 millibars. On the other extreme, the kings of high-pressure systems that occasionally form over Siberia during the throes of Arctic winter can attain maximum sea-level pressure readings above 1050 or even 1060 millibars.

Ultimately, the pressures associated with very intense hurricanes and very strong high-pressure systems in the winter are pretty rare, so we can use the general guideline above (that nearly all sea-level pressures lie between 950 millibars and 1050 millibars) to help us interpret pressure data from various maps. With that in mind, let's turn to this section's Key Skill -- decoding sea-level pressure on the station model.

Key Skill...


A sample station model with sea-level pressure and the three-hour pressure tendency highlighted.
Credit: David Babb

Because air pressure plays such an important role in determining the type of weather we might experience, it's no surprise that it has a place on the station model. But, interpreting pressure on a station model is not quite as straightforward as the other variables we've covered. To see the pressure information displayed on a station model, check out the image on the right. The three digits listed in the upper right on the station model represent the sea-level pressure, while the two digits below represent the three-hour pressure tendency (change in pressure over the previous three hours), which is not always reported. For now, we're going to focus on the sea-level pressure value in the upper right (we'll deal with pressure tendency later on).

The three digits in the upper-right-hand corner of the station model represent the last three digits of the station's sea-level pressure, expressed to the nearest tenth of a millibar. Thus, to decode the pressure reading, you must first add a decimal in front of the right-most digit. Then you need to place either a "9" or a "10" in front of the three digits. How do you decide whether a "9" or a "10" should go in front of the three digits? This is where knowing the typical range of sea-level pressures is helpful. Remember that nearly all values of sea-level pressure are between 950 millibars and 1050 millibars (unless you're dealing with an intense hurricane or an extremely strong Arctic high in winter). So, in the example on the right, we need a "10" in front of the "046" to give 1004.6 millibars (opens in a new window). Placing a "9" in front would have given 904.6 millibars, which wouldn't make sense (unless an extremely intense hurricane was right near the station).

Based on statistical distributions of sea-level pressure, if the three digits you see on the station model are less than "500," you'll typically place a "10" in front of them, while if the three digits are greater than "500," you'll typically place a "9" in front of them. In most cases, you want to choose whichever will give you a sea-level pressure between 950 mb and 1050 mb. As mentioned above, some exceptions exist, but the exceptions are rare. Still, if you are dealing with a strong hurricane or a burly high-pressure system from the Arctic, these guidelines might break down, so forecasters must be aware of the general weather pattern when decoding pressure.
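The decoding rule above can be sketched as a short function. This is a hypothetical helper for practice, not part of any official tool, and it applies only the "typical range" guideline (so it would mislead you in the rare hurricane or extreme-Arctic-high cases discussed above):

```python
# Decode the 3-digit station-model pressure group into millibars.
# Rule of thumb: prepend "10" if the coded value is below 500, else
# prepend "9", so the result lands between 950 and 1050 mb.
def decode_slp(code):
    value = int(code) / 10.0          # "046" -> 4.6 (tenths of a millibar)
    if int(code) < 500:
        return round(1000.0 + value, 1)   # "046" -> 1004.6
    return round(900.0 + value, 1)        # "753" -> 975.3

print(decode_slp("046"))  # 1004.6
print(decode_slp("753"))  # 975.3
print(decode_slp("395"))  # 1039.5
```

Try it on the practice values from the text: "953" decodes to 995.3 mb and "069" to 1006.9 mb.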

I recommend practicing with the interactive station model tool below. The tool defaults to a sea-level pressure of 1004.6 millibars ("046"), but you can change the value in the "Current Conditions" panel on the right. For example, type in pressures of 999.6 mb, 986.2 mb, and 1028.9 mb and see how they appear on the station model. Practice decoding some random 3-digit coded pressures (decode "953", "069", and "395", for example) and check your answers with the tool by typing your answer into the "Current Conditions" panel and see if the station model displays the 3-digit code that you started with.

Quiz Yourself...

Ready to check your skill at decoding pressures from a station model? Use the quiz below to practice. If you can get at least 9 out of 10 on the quiz, you've likely got the hang of it! Make sure to note if "special circumstances" apply in each question, and good luck! You're welcome to try as many times as you would like.

Explore Further...

In our discussion of pressure, I repeatedly referred to "sea-level pressure," even though most land areas on earth do not lie at sea level. Why make that distinction? Well, in order to analyze the horizontal patterns of surface air pressure that govern weather, meteorologists require a "level playing field," and that's why they're interested in "sea-level pressure."

The skyline of Denver, Colorado, with the Rocky Mountains in the background.

Given that Denver, Colorado, lies at an altitude of roughly 5300 feet, the surface pressure often flirts with 850 mb, even on days when skies are clear.

What do I mean by that? To illustrate, I kept tabs on pressure readings with the barometer on my cellphone during a trip into the Rocky Mountains, just west of Denver, Colorado, including a trip up the highest paved road in North America to Mount Blue Sky (formerly Mount Evans) (opens in a new window). Upon reaching the summit, the barometer app on my phone read 613.07 hectopascals (opens in a new window) (equal to 613.07 millibars), and this wasn't a faulty observation! This chart of mean station pressure for the United States (opens in a new window) shows very low pressures in the Rocky Mountains (less than 780 millibars in some areas), on average. Is there some kind of monster low-pressure system permanently parked in the Rockies? Of course not! The station pressures are always low there because of the high elevations in the Rockies (we'll explore this relationship later). The dramatic variation in station pressure based on elevation makes it virtually impossible for meteorologists to use station pressure to track centers of high and low pressure. Regardless of the strength and position of various high- and low-pressure systems, the map of station pressure would always show the lowest pressures in the highest-elevation regions. So, in order to level the playing field, meteorologists adjust station pressure to sea level.

Meteorologists "correct" the station pressure to sea level by estimating the weight of an imaginary column of air that extends from the station down to sea level. The surface temperature at the location is used to compute a representative density for the imaginary column, which, combined with the station altitude, yields a column weight. In turn, this estimated weight of the imaginary air column is converted into a pressure adjustment that gets added to the observed station pressure, resulting in the adjusted sea-level pressure that you see displayed on the station model. This schematic of the adjustment process (opens in a new window) may help you visualize how it's done.
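For the curious, the spirit of this adjustment can be sketched with a simplified version of the hypsometric relation, assuming a single representative column temperature. Operational sea-level pressure reduction is more involved than this, so treat the sketch (and the made-up numbers in the example) as illustrative only:

```python
import math

# Simplified sea-level reduction: scale station pressure by the weight
# of a hypothetical isothermal air column between the station and sea level.
G = 9.81      # gravitational acceleration, m/s^2
R_D = 287.0   # gas constant for dry air, J/(kg K)

def station_to_sea_level(station_mb, altitude_m, column_temp_k):
    """Estimate sea-level pressure (mb) from station pressure (mb)."""
    return station_mb * math.exp(G * altitude_m / (R_D * column_temp_k))

# A hypothetical Denver-area station: ~850 mb observed near 1600 m altitude
# with a 10 degree C (283 K) column temperature reduces to roughly 1030 mb.
print(round(station_to_sea_level(850.0, 1600.0, 283.0), 1))
```

Notice that a colder (denser) column produces a larger adjustment, which is why the surface temperature enters the calculation.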


Watching the Wind

Prioritize...

When you've finished this section, you should be able to describe wind direction in both words (like "west," "southwest," etc.) and compass degrees, and determine the wind direction and speed on a station model (including proper units). Please note that wind direction in particular is an important concept that often gives students some trouble, so make sure that you don't leave this section without mastering this skill.

Read...

A wind vane and rotating cup anemometer.

A wind vane and anemometer (used to measure wind speed).
Credit: David Babb

Wind is a weather variable that's pretty easy to notice, from a gentle breeze on a summer day to whipping winds that can cause damage during a storm. Really, wind is just about everywhere -- even in music! Wind has captured the attention of songwriters for years, with numerous songs referencing "wind" in some way ("Blowin' in the Wind (opens in a new window)," "Candle in the Wind (opens in a new window)," "Summer Breeze (opens in a new window)," and "Dust in the Wind (opens in a new window)" are but a handful of examples).

But, just what is the wind? In short, wind is the horizontal movement of air. One of the most fundamental rules that you need to know is that the direction of the wind is always expressed as the direction FROM which the wind blows and NOT the direction toward which the wind blows. Make sure to commit that to memory! So, if the wind blows from the north toward the south, for example, you'll hear a meteorologist say that the wind is "northerly" (or there's a "north" wind), NOT a "southerly" or "south" wind. Meteorologists are always interested in where the air is coming from because it can help with weather forecasting. For example, if a wind is blowing from a region of warm air toward a region of colder air, a weather forecaster would want to know that! If you happen to own a weather vane, remembering this rule should be easy because a wind vane points into the wind and thus toward the direction FROM which the wind blows.

So, wind direction is always the direction from which the wind is blowing. While forecasters commonly brand the wind with a general direction (such as "north" or "southeast"), in practice, they routinely use standard compass angles to fine-tune the wind direction, as shown in the compass below. For example, a wind from the north blows from a direction of 0 degrees, a wind that blows from the east is a 90-degree wind, and a wind direction of 70 degrees corresponds to a wind that blows from the east-northeast.
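If you'd like to check your compass-angle reading, here is a small sketch that converts degrees to a 16-point compass name. The helper is hypothetical, but the convention (0 degrees = north, 90 degrees = east, direction the wind blows *from*) matches the text:

```python
# Convert a wind direction in compass degrees to a 16-point compass name.
# Points are listed clockwise starting from north (0 degrees).
POINTS = ["north", "north-northeast", "northeast", "east-northeast",
          "east", "east-southeast", "southeast", "south-southeast",
          "south", "south-southwest", "southwest", "west-southwest",
          "west", "west-northwest", "northwest", "north-northwest"]

def direction_name(degrees):
    # Each of the 16 points spans 22.5 degrees; round to the nearest one.
    return POINTS[round(degrees / 22.5) % 16]

print(direction_name(0))    # north
print(direction_name(90))   # east
print(direction_name(70))   # east-northeast
print(direction_name(270))  # west
```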

A compass showing standard angles used to describe wind direction.

Weather forecasters use standard compass angles to describe the specific wind direction. For example, a 270-degree wind would be blowing from the west (a "west" or "westerly" wind), while a 180-degree wind would blow from the south (a "south" or "southerly" wind).
Credit: David Babb

Wind speed is simply how fast the air is moving, and it is the sustained wind speed that is routinely included in weather observations. What is "sustained" wind speed? It's the wind speed averaged over a certain time period (usually 1 or 2 minutes). The rotating cup anemometer shown near the top of the page is a popular instrument for measuring sustained wind speed at home weather stations. To determine the sustained wind speed, the revolution rate of a rotating cup anemometer is typically averaged over a one- or two-minute time period and then mathematically converted to a speed.

The wind is sometimes unsteady, however, with brief, sudden increases in wind speed called gusts. As a general rule, gusts last less than 20 seconds. Weather observers typically only report gusts when the wind speed varies by greater than 10 knots (between the peaks and lulls), so wind gusts are only included in routine weather observations when they're noteworthy.

The units of "knots" may not be familiar to you; in the United States, we often talk about wind speed in miles per hour (just like automobile speed). But in routine weather observations, wind speed is actually expressed in units of knots (nautical miles per hour). For the record, 1 knot = 1.15 miles per hour, so to convert from knots to the more familiar "miles per hour," multiply knots by 1.15. You can find many wind speed converters (opens in a new window) online, but if you have to make the conversion in your head, it's much like leaving a 15 percent tip at a restaurant. Imagine your bill is $25. To leave a 15 percent tip, first take 10 percent of your bill, which gives you $2.50, then add on half of $2.50 (which is $1.25) to get your 15 percent tip of $3.75. Your total bill, then, is $25 + $3.75 = $28.75. Converting knots to miles per hour works the same way as computing your total bill: a 25-knot wind speed converts to 28.75 miles per hour (which we get by multiplying 25 by 1.15).
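The "restaurant tip" conversion reduces to one line of arithmetic (the function name is just for illustration):

```python
# Convert knots to miles per hour: 1 knot = 1.15 mph.
def knots_to_mph(knots):
    return round(knots * 1.15, 2)

print(knots_to_mph(25))  # 28.75, matching the tip example above
```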

Now that we've covered the basics of wind speed and direction, you might be wondering, "What if the wind is calm? What's the wind speed and direction?" Technically, if the wind is calm, then its speed is 2 knots or less (which gets reported as 0 knots) and it does not have a reported direction. Keep these ideas in mind as you concentrate on this section's Key Skill -- determining wind speed and direction from a station model.

Key Skill...


A sample of a station model with wind direction and wind speed labeled. In this case, winds were blowing from the southeast (or more precisely, 150 degrees) at 15 knots. The long wind barb represents 10 knots, while the short barb represents 5 knots for a total of 15 knots (17 miles per hour).
Credit: David Babb

Wind speed and direction are prominently displayed on the station model. To see the wind information displayed on a station model, check out the image on the right. On a station model, the thin, solid line (often referred to as the "flag") extends outward from the sky coverage symbol in the direction that the wind is blowing from. In this case, it's apparent that the wind is blowing from the southeast (we would say we have a "southeast" or a "southeasterly" wind). More precisely, we could say that winds were 150 degrees (you may want to refer to the image of standard compass angles (opens in a new window) to confirm).

What about wind speed? On station models, the speed of the wind is expressed as a series of notches, called "wind barbs," on the clockwise side of the line representing wind direction. Each longer wind barb counts as a tally of 10 knots (actually, each longer barb represents a speed of 8 to 12 knots, but weather forecasters operationally choose the middle value of 10 knots for simplicity). The shorter barbs count as a tally of five knots. So, to figure out the wind speed, you add the values associated with any long and short wind barbs present. In the sample station model on the right, there's one long barb (10 knots) and one short barb (5 knots), so we add 10 knots and 5 knots together to get our wind speed of 15 knots (which converts to 17 miles per hour).
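The tally is simple addition, which we can sketch as a tiny helper (a hypothetical function, including the 50-knot "pennant" that's introduced a bit later on this page):

```python
# Tally wind speed from station-model barbs: each pennant (triangle)
# counts 50 knots, each long barb 10 knots, each short barb 5 knots.
def barb_speed(pennants=0, long_barbs=0, short_barbs=0):
    return 50 * pennants + 10 * long_barbs + 5 * short_barbs

print(barb_speed(long_barbs=1, short_barbs=1))              # 15 knots
print(barb_speed(pennants=1, long_barbs=2, short_barbs=1))  # 75 knots
```

The first call matches the sample station model on the right; the second matches the 75-knot hurricane example in the video below.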

If the surface wind is calm, a larger circle is drawn around the circle that represents sky coverage, as shown in the example map of station models over the western United States below. The two stations I've highlighted (Havre and Glasgow, Montana) were both reporting calm winds.

A station model plot for the western United States, showing a few stations with calm winds.

Stations with calm winds have a larger circle drawn around the sky coverage circle, as shown at the two highlighted stations -- Havre and Glasgow, Montana.
Credit: NOAA

On the other hand, for very strong winds, a "triangular" barb counts as a tally of 50 knots. The use of the 50-knot symbol doesn't happen at the surface very often in most locations, however, because sustained winds rarely reach such speeds. Of course, wind gusts of 50 knots occur a little more frequently (severe thunderstorms, strong cold fronts, etc.). You're more likely to observe a sustained 50-knot wind near the Atlantic and Gulf Coasts with a hurricane nearby. For example, check out the sustained 50-knot wind at Apalachicola, Florida (opens in a new window) at 17Z on October 10, 2018. The culprit in this case was Hurricane Michael (opens in a new window), which was about to make landfall in the Florida Panhandle.

Want to see a few examples of interpreting wind direction and speed using the interactive station model tool? Check out the short video (2:25) below:

This short video should help reinforce conventions relating to wind speed and direction on station models.

For starters, always remember that wind direction is expressed as the direction that the wind is blowing from. So, on our compass here, a “west” or “westerly” wind would blow from 270 degrees like this. A “north” wind or “northerly” wind would blow from 0 degrees like this. A 130 degree wind blows from the southeast like this, and would be called a “southeast” or “southeasterly” wind.

Now let’s apply those ideas to the station model. We’ll assume that north is at the top of the image, south is at the bottom, west is on the left, and east is on the right. The tool defaults to a wind from 180 degrees, so the wind is blowing from the south.

We can change the wind direction to, say, 50 degrees. Now we have winds from the northeast to the southwest, and that's what it would look like on the station model. We would call this a northeast wind, or a northeasterly wind. Of course, we can also tell wind speed from a station model. The speed here is 25 knots, as indicated by the two long wind barbs and one short wind barb. Each long wind barb represents 10 knots, and the short wind barb represents 5 knots. So we sum those together, and we get a total of 25 knots.

If we had calm winds, or a wind speed of 0 knots, we would just have an extra circle around the sky coverage because the wind doesn't have a direction or speed.

Or, on the other hand, we could make it really windy, and have 75-knot sustained winds –say maybe a hurricane is making landfall nearby. The pennant, or triangular barb, represents 50 knots, the 2 long wind barbs represent 10 knots each, and the short barb represents 5 knots. Add those together, 50 + 10 + 10 + 5, to get our total of 75 knots.

Credit: Penn State

Finally, I highly recommend practicing with the interactive station model tool below. The tool defaults to a 180º (south) wind at 25 knots, but you can experiment with different wind directions by entering different compass directions into the "Current Conditions" field to see how they would be represented on the station model (remember the wind direction is represented by the flag stick). Try a 220-degree wind, a 90-degree wind, and a 340-degree wind for starters. You can also try out different wind speeds and examine the resulting group of wind barbs (remember that a long barb counts for 10 knots, a small barb for 5 knots, and a black triangle for 50 knots). Try a 10-knot wind, a 35-knot wind, and a 60-knot wind for starters. Don't forget to try an observation with calm winds, too!

One last thing to keep in mind. Remember that station models report sustained wind speeds. Reported wind gusts often do not appear on station models, but if they do, you might see something like "G28" near the wind barbs, which would indicate gusts to 28 knots (the interactive tool does not show gusts).

Quiz Yourself...

Think you have a good handle on wind speed and direction on a station model? Take the self-quiz below to see how you do. Begin by hitting the "Quiz me" button. Fill in the missing wind direction and speed, and then hit "Submit" to check your answer. Wind direction can be rounded to the nearest 10 degrees, and wind speed to the nearest 5 knots. You may also turn on some directional hint lines if you have trouble estimating angles. Since some visual estimating is involved with wind direction, if your answer is only 10 or 20 degrees off from the tool's answer, that's a reasonable estimation. If you can get five in a row, you've likely got the hang of it!

Explore Further...

Why do meteorologists bother detailing wind directions with compass degrees instead of just saying things like "northeast" winds? If the wind is from the northeast (or any other general direction), do the specifics really matter? They certainly can! Slight changes in the wind direction can translate into large changes in the weather forecast.

For example, suppose it's December along the Northeast Seaboard. At this time of year, sea-surface temperatures over the offshore waters of the Atlantic are typically in the 40s (Fahrenheit). Thus, the temperatures of the air overlying Atlantic waters are often higher than air temperatures over the colder land. Now suppose a storm system approaches New York City and the wind direction at Central Park is 20 degrees (depicted on the left below). Such a north-northeast wind would bring cold air into the Big Apple strictly via a land route, which, as you might guess, increases the chances of snow. If the wind direction were 70 degrees, however (meaning that the trajectory of the air comes into New York City from the Atlantic, depicted on the right below), milder air might make a change to rain more likely.

Two images showing the effect on wind direction for temperatures of Long Island, New York.

During winter, a wind with a trajectory over land heightens the risk of snow at New York City (left), while a trajectory over water favors a changeover to rain (right).
Credit: David Babb

As we go deeper into the course, the idea that meteorologists are interested in where the air is coming from will come up again and again, because it can have impacts on temperature, moisture, etc. So, keep "air trajectories" (where the air is coming from) in your mind going forward. They're an important part of forecasting!


Station Model Review

Prioritize...

This page provides a quick review of some major topics from the lesson -- primarily, how the atmospheric variables we covered in this lesson appear on the station model. You'll need to be able to decode all of these parts of the station model throughout the course.

Read...

We've covered the primary atmospheric variables that weather forecasters keep tabs on, as well as how they appear on station models. This short video (4:46) shows a couple of examples of decoding station models in very different extreme weather situations, and serves as a "one-stop shop" for the parts of the station model that we covered throughout the lesson -- temperature, dew point, winds, present weather, sea-level pressure, etc. The video includes a couple of time conversions from UTC for good measure, too.

Let’s take a tour of a complete station model using our interactive station model tool. I’ve set it up with data from a real weather observation in the midst of blizzard conditions occurring at Findlay, Ohio, on a December day at 1240Z. Ohio is in the Eastern Time Zone, so to get local standard time, we subtract 5 hours, meaning that this was an observation from 7:40 AM Eastern Standard Time on this date. And the conditions at this time were brutal.

The temperature, which is the number in the upper-left corner of the station model, was -6 degrees Fahrenheit. The dew point was -9 degrees Fahrenheit, so this was bitterly cold, dry air. Winds were whipping from the west-southwest, or 250 degrees, at 35 knots. To get the speed, we just add the 3 long barbs at 10 knots apiece and the one short barb, which is 5 knots. The four asterisks, or snowflakes, indicate that heavy snow was falling, and the combination of heavy snow and fierce winds had reduced visibility to ¼ of a mile, which is indicated by the number on the far left. Sky coverage is depicted in the circle, and the “X” here indicates that the sky was obscured, meaning that the state of the sky could not be observed. Given that heavy snow was occurring, we can guess that it was probably overcast, but since the state of the sky couldn’t be observed due to the wind-driven, heavy snow, the official observation is obscured.

Finally, our sea-level pressure is indicated by these three digits at the top right – 7-5-3. But that doesn’t indicate a pressure of 753 millibars, which would be much lower than any sea-level pressure ever observed. The value is expressed in tenths of a millibar, and we have to put either a 9 or a 10 in front to get the proper sea-level pressure. Since our number here is greater than 500, we’ll put a 9 in front, and when we place a decimal in front of the 3, that gives us a pressure of 975.3 millibars – a pressure consistent with what we might find in a strong winter storm.
Putting a 10 in front wouldn’t have made sense because it would have given us a pressure of 1075.3 millibars, which would be one of the highest values ever recorded. Remember that the vast majority of sea-level pressure values fall between 950 and 1050 millibars, with very intense hurricanes and extremely strong high pressure systems being exceptions.
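The prepend-a-9-or-10 rule lends itself to a couple of lines of code. Here is a minimal sketch in Python (the function name and the 500 cutoff simply follow the convention described above):

```python
def decode_slp(code):
    """Decode a station-model sea-level pressure code (tenths of a millibar).

    Codes of 500 or greater get a leading 9; smaller codes get a leading 10.
    This keeps the result inside the typical 950-1050 mb range.
    """
    value = int(code)
    if value >= 500:
        return (9000 + value) / 10.0   # e.g., "753" -> 975.3 mb
    return (10000 + value) / 10.0      # e.g., "142" -> 1014.2 mb

print(decode_slp("753"))  # the blizzard example: 975.3 mb
print(decode_slp("958"))  # 995.8 mb
```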

Now we’ll look at an example from a warmer time in Tallahassee, Florida as the center of a hurricane passed about 50 miles to its east. This observation was taken at 1353Z on an August day, so Daylight Saving Time was in effect. Tallahassee is in the Eastern Time Zone, so to convert to Eastern Daylight Time, we subtract 4 hours instead of 5 like we would if standard time were in effect. 1353Z – 4 hours gives us 9:53 AM Eastern Daylight Time. The temperature is 76 degrees Fahrenheit, marked in the upper-left, and the dew point is 73 degrees Fahrenheit, marked in the bottom left. The present weather is marked by these three dots, which indicate moderate rain, and the visibility was 7 miles at this time. Winds were blowing from the northwest, or 330 degrees, and the speed was approximately 25 knots, which we can get by adding the two long wind barbs, which indicate 10 knots each, and the short wind barb, which is 5 knots. Moving on to sea-level pressure, we see “958” in the upper-right of the station model. Remember that’s in tenths of a millibar and we have to choose either a 9 or a 10 to put on the front of the number. Since the number is greater than 500, we’ll choose a 9, for a pressure of 995.8 millibars after placing a decimal before the 8. Had we chosen a 10, that would have given us 1095.8 millibars, which would be higher than any sea-level pressure ever recorded on Earth, so that wouldn’t make sense. Only a 9 makes sense to give us a number that falls within our typical range of pressures. And, even though this was a hurricane case, the pressure was still within the typical range because only the most intense hurricanes have sea-level pressures that fall outside of the typical range, and Tallahassee didn’t experience the lowest pressures near the center of this hurricane anyway.
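The time conversions in both examples follow the same pattern: add the (negative) UTC offset to the Zulu time. Here is a hedged sketch; the helper name and the hard-coded offsets are mine, and real code would use a time-zone database to handle Daylight Saving Time automatically:

```python
from datetime import datetime, timedelta

def zulu_to_local(zulu_hhmm, utc_offset_hours):
    """Convert a Zulu (UTC) time like '1353' to a local clock time string."""
    t = datetime.strptime(zulu_hhmm, "%H%M")
    local = t + timedelta(hours=utc_offset_hours)
    return local.strftime("%I:%M %p").lstrip("0")  # drop any leading zero

print(zulu_to_local("1240", -5))  # 7:40 AM Eastern Standard Time
print(zulu_to_local("1353", -4))  # 9:53 AM Eastern Daylight Time
```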

Credit: Penn State

This wraps up the required part of our lesson. The remaining section is optional, and takes a look at the raw observation code used for transmitting weather observations (it's where the data plotted on station models comes from). If you're interested, check it out!


METARs (Optional)


Prioritize...

This section is optional! Learning to read coded weather observations is a key skill for aspiring meteorologists and folks who are going to be working with real-time weather observations, so if you are planning on continuing with other meteorology courses in the future, I recommend at least familiarizing yourself with what a METAR observation is, what they look like, and what type of observation data they contain.

You will not be assessed on any of the information presented on this page.

Explore Further...

If you are ever going to be working with real-time weather data (perhaps to see how your forecast is faring), you are going to need to decode METAR observations. For the record, METAR is a French acronym that loosely translates to "aviation routine weather report" and is an internationally coded weather observation at an airport. Because of their coded nature, METARs require a bit of practice to read, but by looking at raw METARs, you can glean much more information than the standard decoded observations show on a station model. For reference, I refer you to Chapter 12 of the Federal Meteorological Handbook (opens in a new window) that serves as the "bible" of encoding METARs.

Okay, let's get right to it and decode the METAR below. While "METAR" marks a routine scheduled report, you may see a report beginning with "SPECI," which indicates a special (unscheduled) report.

METAR KCON 131151Z AUTO 09009KT 1 3/4SM +RA BR OVC010 09/07 A3005 RMK AO2 CIG 007V013 SLP177 P0015 60056 70066 T00890072 10094 20089 53018

KCON is the four-character ICAO (International Civil Aviation Organization (opens in a new window)) identifier for Concord, New Hampshire. You can use the station list (opens in a new window) at the National Center for Atmospheric Research to help you decipher any identifier.

131151Z - the observation was taken on the 13th (May, 2006) at 1151Z (you always determine the month and year in the context of real time).

AUTO indicates a fully automated report with no human intervention. If an observer takes or augments observations, this tag does not appear. Sometimes you might see COR, which indicates a corrected observation.

09009KT indicates that the wind blew from 90 degrees (an easterly wind) at 9 knots. What happens if winds are gusty? Let's look at a METAR from Mount Washington in New Hampshire (see photograph below), taken at the same time as the first METAR from Concord:

KMWN 131147Z 13043G58KT 1/16SM FZRA PL FZFG VV001 M01/M01 RMK PLB40 VRY LGT GICG 60074 70148 931000 10017 21013

A distant view of the meteorological observatory located atop Mount Washington.

Mount Washington, New Hampshire, in December, 2005. If you look closely, you can see the Mount Washington Observatory (opens in a new window) (here's a close-up aerial view (opens in a new window) of the Observatory).
Credit: Mount Washington Observatory Photo

Winds were blowing from 130 degrees sustained at 43 knots and gusting to 58 knots. That's just a ho-hum "breeze" compared to the world-record setting 231 miles an hour (opens in a new window) clocked at the summit on April 12, 1934. Yes, Mount Washington (opens in a new window) is a windy place indeed.

In stark contrast to windy Mount Washington, a METAR entry of 00000KT represents a calm wind. When the wind is light (a speed of six knots or less) and it varies in direction with time, the data encoded on a METAR might look like VRB04KT (variable direction blowing at four knots). If the wind speed is greater than six knots and the wind direction varies, the data encoded on a METAR might look like "32014KT 290V350". Translation: the wind direction was 320 degrees and the wind speed was 14 knots, but the direction varied from 290 to 350 degrees. Such a varying wind direction might occur in the immediate wake of a cold front. Variable wind directions are always encoded in the clockwise direction (just for the record).
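The wind-group formats described above (direction plus speed, gusts flagged with "G," calm, and variable direction) are regular enough to parse with a short regular expression. A sketch, assuming only the forms shown in this section:

```python
import re

# Matches a direction (3 digits or VRB), a 2-3 digit speed, and an
# optional gust group, e.g. "09009KT", "13043G58KT", "VRB04KT".
WIND_RE = re.compile(r"^(VRB|\d{3})(\d{2,3})(?:G(\d{2,3}))?KT$")

def decode_wind(group):
    m = WIND_RE.match(group)
    if not m:
        raise ValueError("not a wind group: " + group)
    direction, speed, gust = m.groups()
    return {
        "direction_deg": None if direction == "VRB" else int(direction),
        "speed_kt": int(speed),                  # 00000KT decodes as calm
        "gust_kt": int(gust) if gust else None,
    }

print(decode_wind("09009KT"))     # easterly at 9 knots
print(decode_wind("13043G58KT"))  # Mount Washington: 43 kt gusting to 58 kt
```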

Okay, back to decoding the Concord METAR. 1 3/4SM translates to a horizontal visibility of one and three-fourths statute miles (opens in a new window). Visibilities below one-fourth of a mile appear as M1/4SM in METARs from automated stations.

+RA BR is the present weather in this case. "+RA" represents heavy rain, while "BR" is the METAR code for mist. You should become familiar with the other codes for precipitation and restrictions to visibility.

The various codes for reporting present weather on METARs:

Column 1 -- Qualifier: Intensity or Proximity
"-" Light; (no symbol) Moderate2; "+" Heavy; VC In the Vicinity3

Column 2 -- Qualifier: Descriptor
MI Shallow; PR Partial; BC Patches; DR Low Drifting; BL Blowing; SH Shower(s); TS Thunderstorm; FZ Freezing

Column 3 -- Weather Phenomena: Precipitation
DZ Drizzle; RA Rain; SN Snow; SG Snow Grains; IC Ice Crystals; PE Ice Pellets; GR Hail; GS Small Hail and/or Snow Pellets; UP Unknown Precipitation

Column 4 -- Weather Phenomena: Obscuration
BR Mist; FG Fog; FU Smoke; VA Volcanic Ash; DU Widespread Dust; SA Sand; HZ Haze; PY Spray

Column 5 -- Weather Phenomena: Other
PO Well-Developed Dust/Sand Whirls; SQ Squalls; FC Funnel Cloud, Tornado, or Waterspout4; SS Sandstorm; DS Duststorm
  1. The weather groups shall be constructed by considering columns 1 to 5 in the table above in sequence, i.e. intensity, followed by description, followed by weather phenomena, e.g. heavy rain shower(s) is coded as +SHRA
  2. To denote moderate intensity no entry or symbol is used.
  3. See paragraph 8.4.1.a.(2), 8.5, and 8.5.1 for vicinity definitions.
  4. Tornados and waterspouts shall be coded as +FC.
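Footnote 1's construction rule (intensity, then descriptor, then phenomena) makes these groups easy to decode mechanically. A minimal sketch, covering only a handful of the codes from the table above:

```python
INTENSITY = {"-": "light", "+": "heavy"}

# A small subset of the two-letter codes from the table; a complete
# decoder would include every code in all five columns.
CODES = {
    "VC": "in the vicinity", "MI": "shallow", "SH": "shower(s)",
    "TS": "thunderstorm", "FZ": "freezing", "BL": "blowing",
    "DZ": "drizzle", "RA": "rain", "SN": "snow", "GR": "hail",
    "BR": "mist", "FG": "fog", "FU": "smoke", "HZ": "haze",
    "SQ": "squalls", "FC": "funnel cloud",
}

def decode_wx(group):
    """Decode a present-weather group: optional intensity symbol,
    then a sequence of two-letter codes (no symbol means moderate)."""
    words = []
    if group[0] in INTENSITY:
        words.append(INTENSITY[group[0]])
        group = group[1:]
    for i in range(0, len(group), 2):
        words.append(CODES[group[i:i + 2]])
    return " ".join(words)

print(decode_wx("+RA"))    # heavy rain
print(decode_wx("BR"))     # mist
print(decode_wx("+SHRA"))  # heavy shower(s) rain
```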

OVC010 represents the current sky condition, which, at this time, was overcast at 1000 feet (the three-digit code corresponds to the ceiling (or cloud base) in hundreds of feet). In general, please note that METARs can list data about more than one layer of clouds. Moreover, when the sky is obscured, METARs should include the vertical visibility in hundreds of feet. For example, VV004 corresponds to an obscured sky with a vertical visibility of 400 feet.

A3005 is the altimeter (opens in a new window) setting - in this case, 30.05 inches of mercury.

09/07 represents the temperature and the dew point reported to the nearest degree Celsius (more precise data sometimes appear near the end of METARs - I will showcase the "T group" in just a moment or two). In this observation, the temperature was 9 degrees Celsius and the dew point was 7 degrees Celsius.

RMK stands for "Remarks." There are a multitude of remarks (see heading 12.7.1 of the Federal Handbook (opens in a new window)). In this case, AO2 indicates that the automated station has a precipitation sensor (AO1 means that the automated station does not have a precipitation sensor).

CIG 007V013. When the ceiling (as measured by a ceilometer (opens in a new window)) is less than 3000 feet and variable, this group typically appears in METARs. In this case, the ceiling was variable between 700 and 1300 feet.

SLP177 indicates the sea-level pressure in millibars using the same convention as on a standard station model (1017.7 mb, in this case).

P0015 is the hourly liquid precipitation (in hundredths of an inch). In this case, 0.15 inches of rain fell in the hour ending at 12Z.

60056 represents the three- or six-hour liquid precipitation (in hundredths of an inch). In this case, 0.56 inches of rain fell in the six-hour period ending at 12Z. For the record, six-hour totals appear at 00Z, 06Z, 12Z and 18Z. Three-hour totals appear at 03Z, 09Z, 15Z and 21Z. 60000 translates to a trace of liquid precipitation during the three- or six-hour period.

70066 indicates the total 24-hour liquid precipitation ending at 12Z (in hundredths of an inch). In this case, 0.66 inches fell at Concord from 12Z on May 12 to 12Z on May 13.
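The three precipitation groups share one convention: a leading type character followed by an amount in hundredths of an inch. A sketch (the period labels are mine):

```python
def decode_precip(group):
    """Decode METAR precipitation groups, e.g. 'P0015', '60056', '70066'.

    'P' marks the hourly amount, '6' the 3- or 6-hour amount, and
    '7' the 24-hour amount; the digits are hundredths of an inch.
    """
    periods = {"P": "hourly", "6": "3- or 6-hour", "7": "24-hour"}
    return periods[group[0]], int(group[1:]) / 100.0

print(decode_precip("P0015"))  # ('hourly', 0.15)
print(decode_precip("60056"))  # ('3- or 6-hour', 0.56)
print(decode_precip("70066"))  # ('24-hour', 0.66)
```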

T00890072 indicates the hourly temperature and dew point to the nearest tenth of a degree Celsius. You will likely want to follow this group as you monitor your forecasts (note the differences between these actual 12Z observations and the 09/07 temperature / dew-point group). The "0" following the "T" indicates that the temperature is above 0 degrees Celsius (a "1" in that position marks a temperature below 0 degrees Celsius, and the same sign convention applies to the dew point's leading digit). In this case, the 12Z temperature at Concord was 8.9 degrees Celsius and the dew point was 7.2 degrees Celsius.
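The T group follows a simple sign-and-tenths encoding, so a short function covers it (the function name is mine):

```python
def decode_t_group(group):
    """Decode a METAR T group, e.g. 'T00890072' -> (8.9, 7.2) degrees Celsius.

    Each value is a sign digit (0 = at/above zero, 1 = below zero)
    followed by three digits in tenths of a degree Celsius.
    """
    def value(digits):
        sign = -1 if digits[0] == "1" else 1
        return sign * int(digits[1:]) / 10.0
    return value(group[1:5]), value(group[5:9])

print(decode_t_group("T00890072"))  # (8.9, 7.2): the 12Z Concord values
print(decode_t_group("T10171022"))  # (-1.7, -2.2): a below-freezing example
```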

10094 represents the highest temperature, in tenths of a degree Celsius, during the six-hour period ending at 12Z (in this case). If the digit following the "1" is a "0," then the temperature is higher than 0 degrees Celsius (a "1" following the "1" indicates that the temperature is less than 0 degrees Celsius). So the highest temperature at Concord between 06Z and 12Z on May 13, 2006, was 9.4 degrees Celsius. For the record, the "1" group is reported at 00Z, 06Z, 12Z and 18Z.

20089 indicates the lowest temperature during the six-hour period ending at 12Z (in this case). If the digit following the "2" is a "0," then the temperature is higher than 0 degrees Celsius (a "1" following the "2" indicates that the temperature is less than 0 degrees Celsius). So the lowest temperature at Concord between 06Z and 12Z on May 13, 2006, was 8.9 degrees Celsius. Like the "1" group, the "2" group is reported at 00Z, 06Z, 12Z and 18Z.

53018 indicates the pressure tendency (the "5 group"). The digit following the "5," which can vary from 0 to 8, describes the behavior of the pressure over the past three hours (for guidance, consult the table below). The last three digits represent the amount of pressure change in tenths of a millibar. Thus, the pressure at Concord increased 1.8 mb in the three-hour period ending at 12Z on May 13, 2006.

Descriptions of the behavior of pressure over the past three hours and the corresponding METAR code figure:

Pressure now HIGHER than 3 hours ago:
0 -- Increasing, then decreasing
1 -- Increasing, then steady; or increasing, then increasing more slowly
2 -- Increasing steadily or unsteadily
3 -- Decreasing or steady, then increasing; or increasing, then increasing more rapidly

Pressure now the SAME as 3 hours ago:
0 -- Increasing, then decreasing
4 -- Steady
5 -- Decreasing, then increasing

Pressure now LOWER than 3 hours ago:
5 -- Decreasing, then increasing
6 -- Decreasing, then steady; or decreasing, then decreasing more slowly
7 -- Decreasing steadily or unsteadily
8 -- Steady or increasing, then decreasing; or decreasing, then decreasing more rapidly
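Putting the table to work, here is a sketch that decodes a 5 group into a tendency description and a pressure change; the short description strings paraphrase the table:

```python
# Code figures 0-8 from the pressure-tendency table (paraphrased).
TENDENCY = {
    0: "increasing, then decreasing",
    1: "increasing, then steady or more slowly",
    2: "increasing steadily or unsteadily",
    3: "decreasing or steady, then increasing; or increasing more rapidly",
    4: "steady",
    5: "decreasing, then increasing",
    6: "decreasing, then steady or more slowly",
    7: "decreasing steadily or unsteadily",
    8: "steady or increasing, then decreasing; or decreasing more rapidly",
}

def decode_5_group(group):
    """Decode a pressure-tendency group like '53018'."""
    character = int(group[1])           # the code figure from the table
    change_mb = int(group[2:5]) / 10.0  # change over 3 hours, in tenths of a mb
    return TENDENCY[character], change_mb

print(decode_5_group("53018"))  # a 1.8 mb rise in the past three hours
```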

I realize that translating one METAR hardly qualifies as an entire lesson, but at least you now know the general guidelines and where to find information in case you run across a METAR that gives you pause. I encourage you to expand your aptitude for decoding METARs. They hold a lot of information! A good place to view raw METARs and their decoded counterparts is the surface section of the Real-Time Weather Data page at NCAR (opens in a new window). Notice that by entering the 4-letter ICAO identifier for any station, you can get a series of raw or translated METARs, which can be a great way to practice your skills!


Lesson 3. Remote Sensing of the Atmosphere


Motivate...

By this point in the course, you've already encountered many different weather observations (temperature, dew point, wind, etc.). But, the observations we've learned about so far have something in common: They're collected by a sensor in direct contact with the medium being measured (called in situ measurements). Obviously, such measurements aren't possible over the entire breadth and depth of the atmosphere. We can't have weather stations covering every single point on Earth and throughout the atmosphere!

To help fill the many gaps between our direct measurements, we need to measure the atmosphere from afar, or "remotely." Remote sensing is just that -- taking a measurement without having a sensor in direct contact with the medium being measured. As an example, your body contains both in situ sensors (your skin) and remote sensors (your eyes). You don't have to physically touch a red-hot stove element (opens in a new window) to know that it is hot. Your eyes can sense the light coming from the heating coil, and you then make an interpretation that the burner must be hot.

So what types of remote sensing instruments do meteorologists use? I'm sure that you are very familiar with satellite and radar images shown on TV weathercasts and available online or on your favorite weather app. These come from two very important types of remote sensing observations, and we will cover them in depth in this lesson. In addition to radar and common satellite images, many more types of remote sensing data exist, which measure a vast array of atmospheric properties. Although many of these data lie beyond the scope of this course, they all have something in common: All remote sensing data is based on measurements of electromagnetic radiation.

A collage of remote-sensing images.

Meteorologists use a vast array of remote sensing instruments to measure the atmosphere. The key to properly interpreting each data set is to understand the advantages and limitations of the instrument. To aid in this understanding, you must first familiarize yourself with the properties and laws of electromagnetic radiation.
Credit: David Babb

Though the word "radiation" generally carries the tone of dire consequences for much of the public, meteorologists routinely and harmlessly harness part of the broad spectrum of electromagnetic radiation to help them diagnose the present state of the atmosphere and then make predictions. One of the most important things to keep in mind when using remote sensing data is that no perfect, one-size-fits-all remote sensors exist. All remote sensing instruments have limitations! Each type of remote sensing instrument is designed to measure something specific, and often it's not actually what you're interested in observing! The measurements taken by remote sensors only become useful when interpreted or converted into the observations that you really desire, but to make this conversion, we have to make assumptions. As in any aspect of life, sometimes assumptions can lead us astray, and ignoring the limitations of remote sensing data is a sure invitation for making mistakes.

Before we get into how to use satellite and radar imagery in weather forecasting, we have to start with the basics of radiation. Though this topic may seem more like physics than meteorology to you, I'd argue that good weather forecasters must understand the underlying science behind satellite and radar imagery in order to effectively and correctly use them. Let's get started!

Lesson Objectives

After completing this lesson, you should be able to:

  • explain what is meant by the electromagnetic spectrum and list what portions of the EM spectrum are used in meteorological remote sensing. (2)
  • describe the four key laws of radiation: Planck's, Wien's, Stefan-Boltzmann, and Kirchhoff's Laws.(2)
  • explain the three fundamental processes that can occur when radiation encounters a medium: transmission, absorption, and scattering.(2)
  • list and explain the major classifications of clouds typically observed in the atmosphere, as well as identify these cloud types from photographs.(1)
  • distinguish between the two basic types of meteorological satellites.(2)
  • explain the process of creating a visible satellite image and correctly interpret visible satellite images.(1,2)
  • explain the process of creating an infrared satellite image and correctly interpret infrared satellite images.(1,2)
  • explain the process of creating a water vapor satellite image and correctly interpret water vapor satellite images.(1,2)
  • explain how radar imagery is created, interpret radar imagery, and explain some meteorological factors that can affect the interpretation of radar imagery.(1,2)
  • distinguish between various types of remote sensing imagery, taking care to only interpret attributes of the atmosphere provided by each image type.(1)

(Numbers denote mapping to course objectives)


Shedding Light on the Electromagnetic Spectrum


Prioritize...

At the completion of this section, you should be able to describe what is meant by "electromagnetic radiation" and how it is generated. You should also be able to explain the various types of electromagnetic radiation, specifically the portions of the electromagnetic spectrum that meteorologists use to observe the atmosphere.

Read...

If we're going to talk about remote sensing, we have to start by talking about radiation. While the mention of "radiation" may conjure up thoughts about nuclear reactors or nuclear bombs, it turns out that the scientific use of the term "radiation" is considerably more broad. Radiation is defined as the emission and transfer of energy via high-energy particles (photons) or electromagnetic waves. In fact, the vast majority of radiation that you encounter on a daily basis has nothing to do with nuclear radiation at all. From an everyday light bulb, to the microwave that heats your frozen lunch, to the mobile phone that you use daily, you're surrounded by devices that make use of radiation. Even light from the sun is a form of radiation, so radiation is occurring all around you!

A boy at the edge of a pond making ripples with his hand.

Just as moving your hand back and forth creates ripples on a pond, an oscillating electron creates electromagnetic waves that propagate away from the source.
Credit: Andrew and the Pond more / Ethan Fox / CC BY-NC 2.0 (opens in a new window)

At some point in a science class, you probably studied the electromagnetic ("EM") spectrum of radiation, but how is this electromagnetic spectrum created? To begin with, you probably know that the building blocks of all matter are atoms and molecules. Within these atoms and molecules are smaller particles which have positive and negative charges -- protons and electrons, respectively. These charged particles tend to oscillate or vibrate (especially electrons). Without getting into the details, physics tells us that any charged particle like an electron has an electrical field surrounding it (electrical charges and electrical fields go hand-in-hand). Furthermore, moving charges also possess magnetic fields. Thus, when an electron oscillates, its surrounding electric and magnetic fields change. Like moving your hand rapidly back and forth in a pool of water, oscillating electrons send out ripples of energy (that is, "waves") that have both electrical and magnetic properties (hence, electromagnetic radiation).

So, how is it that different kinds of electromagnetic waves exist to create an entire spectrum? The wavelength of any wave is simply the distance between two consecutive similar points on the wave (for example from wave crest to wave crest). Now think about our pond analogy above. If you move your hand slowly in the water, you will create a few waves with long wavelengths. However, if you move your hand rapidly in the water, you create lots of waves with very short wavelengths. The same is true for an oscillating electron. If the oscillation is very quick (we say the oscillation has a high frequency), then the EM radiation produced will have a short wavelength. If the oscillation is slower (having a lower frequency) then the electromagnetic waves will have long wavelengths.
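The trade-off between frequency and wavelength is exact, because all electromagnetic waves travel at the same speed: c = wavelength x frequency. A quick numeric sketch:

```python
C = 2.998e8  # speed of light in a vacuum, m/s

def wavelength_m(frequency_hz):
    """Wavelength of an EM wave from its frequency, via c = lambda * nu."""
    return C / frequency_hz

# Higher frequency means shorter wavelength:
print(wavelength_m(1.0e9))    # ~0.3 m: a 1 GHz radio wave
print(wavelength_m(5.45e14))  # ~0.55 microns: green visible light
```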

Now, the frequency at which electrons oscillate is essentially set by the temperature of the matter in which the electron resides (remember, we defined an object's temperature as the average kinetic energy of its atoms or molecules). The higher the temperature, the higher the frequency of oscillation. So, when temperature increases, the wavelength of the electromagnetic radiation emitted by the electron decreases. Conversely, as temperature decreases, the frequency of oscillation slows and the wavelength of the emitted electromagnetic radiation increases. For a visual, check out the short video below (0:57) demonstrating the relationship between oscillation frequency and wavelength.

PRESENTER: Let’s explore a simple model of how oscillation frequency is tied to the wavelength of electromagnetic radiation.

The frequency at which electrons oscillate is essentially set by the temperature of the matter in which the electron resides. Lower temperatures yield lower frequencies of oscillation. Here, we’ve set our temperature on the low side, and you can see the molecule oscillating fairly slowly, or in other words, at a low frequency. The wavelength of the emitted radiation is also relatively long.

But, when temperature increases, the oscillations get faster, which makes for a higher oscillation frequency. This high frequency means that the emitted electromagnetic radiation has a relatively short wavelength. For comparison again, we can decrease our temperature to watch the oscillation frequency slow, and the wavelength of the emitted radiation increase.

Before leaving this discussion, let me add a quick caveat: We have discussed the generation of EM radiation by a single oscillating charged molecule. In reality, matter exists as a system of charged particles, which means that the resulting electromagnetic radiation field is much more complex than I have outlined here. We defined temperature by the average motion of the molecules because the motion of individual molecules varies and not all molecules have the same energy state. This means that a spectrum of electromagnetic radiation is generated from any system of matter that contains many charged particles, all oscillating at different frequencies. I should also note that the vibrating molecule model for electromagnetic emission only explains the existence of low-energy waves (those having lower frequencies than visible light). High-frequency EM emissions are still generated by moving charges, but require a different mechanism to generate the high-energy waves (there are more details in the Explore Further section below if you are interested).

With that caveat out of the way, let's now look at the entire spectrum of electromagnetic radiation. First, note that the range in wavelengths for different types of electromagnetic radiation is staggering -- from hundreds of meters to the size of an atom's nucleus. Also note that visible light does indeed qualify as electromagnetic radiation, despite taking up only a tiny sliver of the entire spectrum. This means that human eyes are completely blind to almost all electromagnetic radiation (most wavelengths are invisible to the naked eye).

A chart of the various types of EM radiation along with a comparison of wavelengths to common objects.

The spectrum of electromagnetic radiation. In the long-wave portion of the spectrum, radio and microwaves with wavelengths of hundreds of meters to a few millimeters dominate. As wavelengths decrease to a range of tens of microns to 1/100th of a micron (the size of a bacterium or virus), we label these emissions as infrared, visible, and ultraviolet light. Finally, in the very short-wave portion of the spectrum, with wavelengths of less than a nanometer (smaller than individual molecules and atoms), X-ray and gamma ray emissions can be found.
Credit: David Babb

For atmospheric remote sensing, we use electromagnetic radiation in the microwave, infrared, and visible bands. Perhaps most familiar to you is the visible portion of the electromagnetic spectrum. Indeed, wavelengths of EM radiation that span from approximately four tenths of a micron (a micron is one-millionth of a meter) to a little more than seven tenths of a micron compose the part of the spectrum that meteorologists use to generate "visible" satellite images (which we'll cover later in the lesson).

Beyond the longest wavelengths associated with visible light lies the infrared ("beyond red") band of the electromagnetic spectrum. A majority of the infrared spectrum, spanning from approximately 3 to 100 microns, essentially constitutes "terrestrial radiation" because the oscillating charges that emit at these wavelengths are consistent with temperatures commonly observed on this planet as well as the Earth's atmosphere. Thus, terrestrial radiation lends itself to be used in infrared satellite imagery (of which there are several applications we'll study soon).

Microwaves are next in line in the electromagnetic spectrum's hierarchy, with wavelengths spanning from 100 microns to about 30 centimeters. Most radar imagery used in weather forecasting employs artificially produced microwaves ranging in wavelength from 3 to 10 centimeters (more on radar later in the lesson).
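To tie the three meteorologically useful bands together, here is an illustrative classifier using the approximate boundaries quoted above. Real band edges are fuzzy, and for simplicity I've lumped the 0.7-3 micron near-infrared in with the infrared:

```python
def em_band(wavelength_microns):
    """Rough EM band for a wavelength, per the boundaries in the text."""
    if wavelength_microns < 0.4:
        return "ultraviolet or shorter"
    if wavelength_microns <= 0.7:
        return "visible"
    if wavelength_microns < 100:
        return "infrared"
    if wavelength_microns <= 300000:  # 100 microns up to ~30 cm
        return "microwave"
    return "radio"

print(em_band(0.55))    # visible: used for visible satellite imagery
print(em_band(11))      # infrared: terrestrial radiation, IR satellite imagery
print(em_band(100000))  # microwave: ~10 cm, a typical weather-radar wavelength
```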

Now that you know the terminology behind the different regions of the electromagnetic spectrum, we need to discuss the properties by which objects emit radiation. These properties have been grouped into what I call the "four laws of radiation." Read on.

Explore Further...

As I mentioned previously, the discussion in this section focused on the generation of low-energy electromagnetic waves (those with lower frequencies than visible light). If you want to explore further than what I present here, many online sources discuss the various regions of the electromagnetic spectrum. For starters, check out: the Wikipedia page on the electromagnetic spectrum (opens in a new window).

Above the visible portion of the electromagnetic spectrum is the very short wavelength region that includes gamma rays, X-rays, and ultraviolet light. The shortest wavelengths belong to gamma rays, which have wavelengths that are as short as one trillionth of a meter (unimaginably small). It turns out that the energy required for matter to emit electromagnetic radiation with wavelengths on the order of a few microns (or less) surpasses that which can be generated by an oscillating molecule. In fact, at such energies, the molecular and atomic bonds may break down completely, leaving only single atoms (or even single electrons!). Therefore, a few new mechanisms are needed to explain very short-wave EM emissions.

Perhaps you remember the Bohr model of an atom (opens in a new window) from high school chemistry that shows the electrons orbiting a nucleus of protons and neutrons (like a mini solar system). Suffice to say, things are a bit more complicated than that, but I'll stick with this model for simplicity. In an unenergized state (called the base state), an atom has a number of electrons in various orbits (or shells) around the nucleus. However, if sufficient energy is added to the atom, one or more of its electrons will be ejected into higher orbits around the nucleus (because they have more energy, they can better overcome the pull of the nucleus). Then, when these electrons fall back down to their original orbit, they must jettison the extra energy. They emit this energy in the form of a photon (a small packet of EM radiation), that has a frequency which corresponds to the energy released. Such photons are typically found in the near-IR, visible, and ultraviolet portions of the EM spectrum.
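The energy released in one of these orbital jumps is carried by a single photon whose energy is proportional to its frequency: E = h x frequency, or equivalently E = hc / wavelength. A quick sketch (constants rounded):

```python
H = 6.626e-34  # Planck's constant, joule-seconds
C = 2.998e8    # speed of light, m/s

def photon_energy_j(wavelength_m):
    """Energy of a single photon, via E = h * c / lambda."""
    return H * C / wavelength_m

# Shorter wavelength means a more energetic photon:
print(photon_energy_j(550e-9))  # green visible light: ~3.6e-19 J
print(photon_energy_j(1e-10))   # an X-ray photon: ~2.0e-15 J
```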

At even higher temperatures, the electrons may even break their bonds with the atomic nucleus itself, forming what is known as a plasma. Plasmas are a fourth state of matter (not a solid, liquid, or gas) that consist of positive ions (left over atomic nuclei) and free electrons. In a plasma, electromagnetic radiation is generated when the speed or direction of an electron is altered by a positive ion or another electron. Because of the unrestrained nature of electrons within a plasma, they can travel at tremendous speeds and thus can generate very high-energy photons. Ultraviolet waves, X-rays, and gamma rays are typically generated by plasmas.

Although such high-energy radiation can be generated artificially (the medical use of X-rays, for example), most of the sources for natural high-energy EM emission originate in space. The plasma of our sun emits copious amounts of X-rays and ultraviolet radiation, as well as gamma rays during eruptions of solar flares. Furthermore, the most prodigious gamma-ray bursts come from interstellar events such as supernovae, black holes, and quasars. Check out the image below, which shows gamma ray emission from the entire sky. Note that the strongest gamma ray emissions are concentrated along the disk of the Milky Way Galaxy.

A depiction of the night sky in the gamma ray portion of the EM spectrum.

This all-sky view from the Gamma-ray Large Area Space Telescope (GLAST) reveals bright emission in the plane of the Milky Way (center), bright pulsars and super-massive black holes.
Credit: NASA/DOE/International LAT Team

The Four Laws of Radiation


Prioritize...

After completing this section, you should be able to recite and explain the four laws of radiation. Your explanations should contain specific examples because you will be required to apply these laws in your understanding of atmospheric remote sensing.

Read...

In order to best make use of the of information that comes to us via the electromagnetic spectrum, we need to understand some basic properties of radiation. A complete treatment on the subject of radiation theory would take an entire course at least (indeed, folks pursuing a degree in meteorology are usually required to take a Radiative Transfer course). Instead, you just need to know the fundamental principles describing the electromagnetic radiation that originates from an object and how that radiation travels through space (discussed in the next section).

For electromagnetic radiation, there are four "laws" that describe the type and amount of energy being emitted by an object. In science, a law is used to describe a body of observations. At the time the law is established, no exceptions have been found that contradict it. The difference between a law and a theory is that a law simply describes something, while a theory tries to explain "why" something occurs. As you read through the laws below, think about observations from everyday life that might support the existence of each law.

Planck's Law

Planck's Law can be generalized as follows: Every object emits radiation at all times and at all wavelengths. Does that surprise you? We know that the sun emits visible light (below left), infrared waves (opens in a new window), and ultraviolet waves (below right), but did you know that the sun also emits microwaves, radio waves, and X-rays (opens in a new window)? Of course, the sun is a big nuclear furnace, so it makes sense that it emits all sorts of electromagnetic radiation. However, Planck's Law states that every object emits over the entire electromagnetic spectrum. That means that you emit radiation at all wavelengths, and so does everything around you!

A view of the sun in the visible and ultraviolet portions of the spectrum.

Two images of the sun taken at different wavelengths of the electromagnetic spectrum. The left image shows the sun's emission at a wavelength in the visible range. The right image is the ultraviolet emission of the sun. Note: colors in these images and the ones above are deceptive. There is no sense of "color" in spectral regions other than visible light. The use of color in these "false-color" images is only used as an aid to show radiation intensity at one particular wavelength.
Credit: NASA/JPL

Now, before you dismiss this statement out of hand, let me say that you are not emitting X-rays in any measurable amount (thank goodness!). The mathematics behind Planck's Law hinge on the fact that there is a wide distribution of vibration speeds for the molecules in a substance. This means that it is possible for matter to emit radiation at any wavelength, and in fact it does, but the amount of X-rays you're currently emitting, for example, is unimaginably small.
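To get a feel for just how small, it helps to look at the exponential term in Planck's formula, hc/(λkT), which controls how strongly emission is suppressed at short wavelengths. The sketch below is illustrative only; the two wavelengths (10-micron infrared, 1-nanometer X-ray) are values chosen here for comparison, not from the text:

```python
# Sketch: compare the Planck suppression exponent hc/(lambda*k*T)
# for infrared versus X-ray emission from a human body (~310 K).
h = 6.626e-34   # Planck constant (J s)
c = 2.998e8     # speed of light (m/s)
k = 1.381e-23   # Boltzmann constant (J/K)

def planck_exponent(wavelength_m, temp_k):
    """Dimensionless exponent hc/(lambda*k*T) in the Planck function.
    For large x, emission is suppressed by roughly a factor of exp(-x)."""
    return (h * c) / (wavelength_m * k * temp_k)

body_temp = 310.0  # kelvins
print(f"infrared (10 microns): {planck_exponent(10e-6, body_temp):.1f}")
print(f"X-ray (1 nanometer):   {planck_exponent(1e-9, body_temp):.0f}")
```

The infrared exponent comes out to only a few units, so a body emits infrared readily; the X-ray exponent is in the tens of thousands, so the emission is suppressed by a factor of roughly e raised to that power: not zero, but unimaginably small, just as Planck's Law says.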

Another common misconception that Planck's Law dispels is that matter selectively emits radiation. Consider what happens when you turn off a light bulb. Is it still emitting radiation? You might be tempted to say "no" because the light is off. However, Planck's Law tells us that while the light bulb may no longer be emitting radiation that we can see, it is still emitting at all wavelengths (most likely, it is emitting copious amounts of infrared radiation). Another example that you hear occasionally on TV weathercasts goes something like this: "When the sun sets, the ground begins to emit infrared radiation..." That's just not how it works. The ground doesn't "start" emitting when the sun sets. Planck's Law tells us that the ground is always emitting infrared radiation (and radiation at other wavelengths), a fact that we'll explore later on in this lesson.

Wien's Law

So, Planck's Law tells us that all matter emits radiation at all wavelengths all the time, but there's a catch: Matter does not emit radiation at all wavelengths equally. This is where the next radiation law comes in. Wien's Law states that the wavelength of peak emission is inversely proportional to the temperature of the emitting object. Put another way, the hotter the object, the shorter the wavelength of maximum emission. You have probably observed this law in action all the time without even realizing it. Want to know what I mean? Check out this steel bar (opens in a new window). Which end might you pick up? Certainly not the right end... it looks hot. Why does it "look hot"?

Well, for starters, the peak emission for the steel bar (even the part that looks really hot) is in the infrared part of the spectrum. But, the right side of the bar is hotter than the left, and therefore the right side has a shorter wavelength of peak emission compared to the left side. You see this shift in the peak emission wavelength as a color change from red to orange to yellow as the metal's temperature increases. In fact, the right side is hot enough that its peak emission is pretty close to the visible part of the spectrum (which has shorter wavelengths than infrared); therefore, a significant amount of visible light is also being emitted from the steel.

Judging by the look of this photograph, the steel has a temperature of roughly 1500 kelvins, resulting in a peak emission wavelength of about 2 microns (remember, visible light spans 0.4-0.7 microns). Here is a chart showing how I estimated the steel's temperature (opens in a new window). To the left of the visibly red metal, the bar is likely still at a temperature of several hundred degrees Celsius. However, in this section of the bar, the peak emission wavelength is far into the infrared portion of the spectrum, and no visible light emission is discernible to the human eye.
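Wien's Law can be written quantitatively as λ_max = b / T, where b ≈ 2898 micron-kelvins is Wien's displacement constant and T is in kelvins. A quick sanity check of the steel-bar estimate (the 1500 K figure is the rough value read from the photograph):

```python
# Sketch: Wien's Law sanity check for the glowing steel bar.
WIEN_CONSTANT = 2898.0  # Wien's displacement constant, in micron-kelvins

def peak_wavelength_microns(temp_kelvin):
    """Wien's Law: wavelength of peak emission (microns) for a given temperature."""
    return WIEN_CONSTANT / temp_kelvin

print(f"{peak_wavelength_microns(1500):.1f} microns")  # ~1.9, just shy of 2 microns
```

That lands in the near-infrared, close to (but shorter than) the peak for the cooler end of the bar, which is exactly the shift Wien's Law describes.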

So, now that we've established Wien's Law, how do we apply it to the emission sources that affect the atmosphere? Consider the chart below, showing the emission curves (called Planck functions) for both the sun and the earth.

A graph of the energy output of the sun versus the earth as a function of wavelength.

The emission spectrum of the sun (orange curve) compared to the earth's emission (dark red curve). The x-axis shows wavelength in factors of 10 (called a "log scale"). The y-axis is the amount of energy per unit area per unit time per unit wavelength. I have kept the units arbitrary because they are quite messy. The important message is that the sun's emission spectrum peaks in the visible spectrum, while the earth's emission spectrum peaks in the infrared (because of Wien's Law).
Credit: David Babb

Note the idealized spectrum for the earth's emission (dark red line) of electromagnetic radiation compared to the sun's electromagnetic spectrum (orange line). The radiating temperature of the sun is nearly 6,000 degrees Celsius compared to the earth's measly 15 degrees Celsius. This means that given its high radiating temperature, the sun's peak emission occurs in the visible light portion of the spectrum, near 0.5 microns (toward the short-wave end of the EM spectrum). That wavelength is also the reason why we see the sun as having a yellow hue. Meanwhile, the earth's peak emission is located in the infrared portion of the electromagnetic spectrum (having longer wavelengths, by comparison).
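The same Wien's Law arithmetic (λ_max = 2898 micron-kelvins / T) reproduces both peaks, provided the Celsius temperatures above are first converted to kelvins:

```python
# Sketch: Wien's Law applied to the sun and the earth, starting from
# the radiating temperatures in degrees Celsius quoted in the text.
WIEN_CONSTANT = 2898.0  # Wien's displacement constant, in micron-kelvins

def peak_wavelength_microns(temp_celsius):
    """Wien's Law, with temperature converted from Celsius to kelvins first."""
    return WIEN_CONSTANT / (temp_celsius + 273.15)

print(f"sun:   {peak_wavelength_microns(6000):.2f} microns")  # near 0.5: visible light
print(f"earth: {peak_wavelength_microns(15):.1f} microns")    # ~10: infrared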

By the way, even though we see the sun as having a yellow quality because of its peak emission near 0.5 microns, other stars can take on a different look. Some stars in our galaxy are somewhat cooler and exhibit a reddish hue, while others are much hotter and appear blue. The constellation Orion contains the red supergiant Betelgeuse and several blue supergiants, the largest being Rigel and Bellatrix. Can you spot them in this photograph of Orion (opens in a new window)?

Stefan–Boltzmann Law

Look again at the graph of the sun's emission curve versus the earth's emission curve (above), and take note of the energy values on the left axis (for the sun) and right axis (for the earth). The first thing to notice is that the energy values are given in powers of 10 (that is, 10⁶ is equal to 1,000,000). This means that if we compare the peak emissions from the earth and the sun, we see that the sun at its peak wavelength emits nearly 3,000,000 times more energy than the earth at its peak. In fact, if we add up the total energy emitted by each body (by adding the energy contribution at each wavelength), the sun emits over 180,000 times more energy per unit area than the earth!

I calculated the number above using the third radiation law that you need to know, the Stefan-Boltzmann Law. The Stefan-Boltzmann Law states that the total amount of energy per unit area emitted by an object is proportional to the 4th power of the temperature. You won't need to do any specific calculations with the Stefan-Boltzmann Law, but you should understand that as temperature increases, so does the total amount of energy per unit area emitted by an object (hotter objects emit more total energy per unit area than colder objects). This relationship is particularly useful when we want to understand how much energy the earth's surface emits in the form of infrared radiation. It will also come in handy when we study the interpretation of satellite observations of the earth, later on.
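Because the law involves the 4th power of temperature (in kelvins), the enormous ratio above falls out of a one-line calculation. The round-number radiating temperatures below (roughly 6000 K for the sun and 288 K for the earth) are assumptions on my part; the exact ratio depends on the precise temperatures used:

```python
# Sketch: Stefan-Boltzmann comparison of total emission per unit area.
def total_emission_ratio(temp_hot_k, temp_cold_k):
    """Stefan-Boltzmann Law: emitted energy per unit area scales as T^4 (kelvins),
    so the ratio of two emitters is simply the temperature ratio to the 4th power."""
    return (temp_hot_k / temp_cold_k) ** 4

# Assumed round-number radiating temperatures, in kelvins
ratio = total_emission_ratio(6000.0, 288.0)
print(f"{ratio:,.0f}")  # roughly 190,000 -- consistent with "over 180,000 times"
```

Note that the temperatures must be in kelvins; plugging in Celsius values would give a wildly wrong answer because the T⁴ relationship only holds on an absolute temperature scale.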

Kirchhoff's Law

In the preceding radiation laws, we have been talking about the ideal amount of radiation that an object can emit. This theoretical limit is called "black body radiation." However, the actual radiation emitted by an object can be much less than the ideal, especially at certain wavelengths. Kirchhoff's Law describes the linkage between an object's ability to emit at a particular wavelength and its ability to absorb ("take in") radiation at that same wavelength. In plain language, Kirchhoff's Law states that an object at constant temperature that absorbs radiation efficiently at a particular wavelength will also emit radiation efficiently at that wavelength. One implication of Kirchhoff's Law is that if we want to measure a particular constituent in the atmosphere, such as water vapor, we need to choose a wavelength that water vapor emits efficiently (otherwise, we wouldn't detect it). However, since water vapor readily emits at our chosen wavelength, it also readily absorbs radiation at this wavelength, which presents some challenges for our measurements!

We'll look at the implications of Kirchhoff's Law in a later section. For now, we need to wrap up our look at radiation by examining the possible fates of a "beam" of radiation as it passes through a medium. Read on.


The Roads Traveled Most by Radiation

Prioritize...

After completing this section, you should be able to describe absorption, transmission, and scattering as they pertain to electromagnetic radiation passing through a medium.

Read...

Unlike the traveler in Robert Frost's poem, The Road Not Taken (opens in a new window), electromagnetic radiation doesn't have much of a choice whenever it encounters objects in its direct path. Indeed, the fate of electromagnetic radiation depends on wavelength and the physical composition of the atoms and molecules in the medium that it is passing through. It is impractical (and impossible) to sort through each atom and molecule in a given object in order to judge its potential effect on the radiation that strikes it ("incident" radiation), so we will consider chunks of matter as whole objects in order to describe their overall effect on incident radiation.

When radiation first encounters some medium (whether it be a collection of gases, a liquid, or a solid), only three things can happen to that radiation:

  • transmission -- the radiation passes through the medium unaffected
  • absorption -- the radiation "beam" gets extinguished within the medium
  • scattering -- the radiation interacts with the medium such that its direction of "travel" changes

In most cases, all three processes can and do occur to some degree. To help you visualize these potential outcomes, check out the brief video (1:59) below:

When radiation encounters some medium, three things can happen to that radiation. One possibility is that the radiation could pass right through the medium unaffected, which is called transmission. Now, 100 percent perfect transmission is pretty rare, except within the vacuum of space. Almost always, there’s at least a little energy that isn’t transmitted through unaffected. An example of a medium with a high transmission value is window glass. Visible light passes through a thin sheet of glass mostly undisturbed, which is why we can see objects clearly on the other side. We call such mediums “transparent” while mediums having low transmission values are called “opaque.” I should point out that the transmission properties of a medium depend on wavelength. An object that is transparent at visible wavelengths might be opaque at infrared wavelengths, for example.

The next possibility is called absorption. That’s when the radiation effectively gets extinguished within the medium. When absorption occurs, the radiation is taken up by the matter (typically by the electrons of the atoms) and converted to other forms of energy like heat energy. As with transmission, the amount of energy that an object absorbs depends on the wavelength of the radiation and the physical make-up of the object. For example, freshly fallen snow absorbs little direct sunlight, but snow readily absorbs infrared radiation.

The final possibility is called scattering. That’s when radiation interacts with matter in a way that changes its direction of travel. Scattering can occur in all directions, although some directions are preferred, depending on the size and composition of the particles involved in the scattering event. If the radiation encounters a scattering event and continues on in a forward direction, the event is called "forward-scattering." Likewise, objects can also back-scatter radiation, meaning that they redirect the radiation in all directions back toward the source.

Credit: Penn State

I should point out that I'll sometimes use the word "reflection" as a loose substitute for the "back-scattering" (scattering back toward the radiation source) described in the video, but there's a big difference between this loose use of "reflection" and the classic, pure interpretation of "reflection." Pure reflection means that the angle at which radiation strikes an object must equal the angle at which the radiation is redirected from the object (think about how a billiard ball bounces off a bumper on a pool table). Furthermore, in some rare cases, the scattered radiation may retain the exact same direction that it initially had before the scattering event. When this occurs, the scattered light is counted in the "transmission" category (because it seemingly emerged unchanged from the medium).

Now let's see these processes (particularly absorption and scattering) in action in the atmosphere. First, the atmosphere, like snow (as mentioned in the video), is a highly discriminating absorber (it only absorbs certain wavelengths of the electromagnetic spectrum). The plot of absorption spectra by various gases (below) indicates how efficiently certain gases and the atmosphere, taken as a whole, absorb various wavelengths of electromagnetic radiation. To interpret the graph, note the "0 to 1" scale on the left of the plot, indicating zero percent absorption and 100 percent absorption, respectively. At any specific wavelength, the upward reach of the color shading indicates the percentage of absorption by a particular gas (or the atmosphere, taken as a whole).

A plot of the absorption spectra of various gases in the atmosphere.

The absorption spectra of various gases in the atmosphere, and of the atmosphere as a whole. The upward reach of each color shading depicts the percentage of absorption by a particular gas (or the atmosphere as a whole).
Credit: David Babb

For example, focus your attention on the row for oxygen and ozone, labeled "O2 and O3." Note, to the left of this label, that nearly 100 percent of the radiation emitted at wavelengths ranging from 0.1 to about 0.3 microns is absorbed. Recall that these wavelengths correspond to potentially dangerous ultraviolet radiation emitted by the sun. Ozone, a gas composed of three oxygen atoms (O3), absorbs much of the incoming ultraviolet radiation. Most of this absorption takes place in the stratosphere, which is a layer that spans from 10 to 30 miles above the Earth's surface. Thank goodness for ozone in the stratosphere! Otherwise, cases of skin cancer and other afflictions associated with overexposure to the sun would likely be much more rampant in our society than they actually are.

Pocket laser with the beam visible because of dust in the air.

You can see this laser beam only because light is being scattered by small dust particles in the air. If no scattering were taking place, all of the light would continue on in its original direction (and would thus not reach the camera lens).

Scattering, on the other hand, makes things look the way they do. You can't see objects if visible light isn't scattered to your eyes. Check out the great example of scattering on the right. A laser produces a highly focused beam of light waves, all traveling in the same direction. However, since you can see the beam, you know that some of the light is being scattered out of the beam towards the camera lens. This scattering is likely produced by small particles of dust in the air.

I should point out that scattering doesn't have to be a one-time event. Often, radiation will enter an object and undergo many scattering events (hundreds or thousands) before emerging. This is what makes clouds appear white on top and darker on the bottom (cue the obligatory storm photo (opens in a new window)). It's also what makes snow, salt, sugar, and milk appear white. Furthermore, multiple scattering increases the time that the radiation resides in the medium (as it bounces around, unable to escape). This longer residence time increases the chance that the radiation will also be absorbed by the medium. A great example is the blue hue that ice can take on. Water (even in frozen form) tends to absorb red light at a faster rate than blue light, so with multiple scattering events, more blue light is scattered to our eyes (see below)!

An ice cave in a glacier in which the ice is giving off a blue hue.

Ice cave in Glacier Gray, Torres del Paine National Park, Chilean Patagonia. Multiple scattering and selective absorption within the glacial ice causes the dramatic blue tint.

Now that we have covered the behavior of the spectrum of electromagnetic radiation and how it travels through space, we need to shift gears and focus on something we ultimately want to measure via remote sensing -- clouds. The detection of clouds by satellites plays a crucial role in weather forecasting. In the next section, we will discuss the four different genres of clouds. By knowing the physical features of these clouds, you will be better prepared to identify specific types of clouds using satellite imagery. Read on.


Clouds from Bottom to Top

Prioritize...

At the completion of this section, you should be able to identify and describe the eleven major cloud types. They are: 3 high-level clouds (cirrus, cirrostratus, and cirrocumulus), 2 mid-level clouds (altostratus and altocumulus), 3 low-level clouds (stratus, stratocumulus, and nimbostratus), and 3 vertically developed clouds (fair-weather cumulus, cumulus congestus, and cumulonimbus).

Read...

Weather forecasters regularly look at clouds from above via satellite imagery, but before we interpret clouds on satellite images we need to learn how to classify specific clouds by observing them from the bottom, as we see them from the ground.

From the perspective of an observer standing on the Earth's surface, clouds can be classified by their physical appearance. Accordingly, there are essentially three basic cloud types:

  • Cirrus, which is synonymous with a "streak cloud" (detached filaments of clouds that literally streak across the blue sky).
  • Stratus, which, derived from Latin, translates to a "layered cloud."
  • Cumulus, which means "heap cloud."

As you learned in a previous lesson, meteorologists further classify clouds according to the height of their bases above the earth's surface.

Four Major Cloud Classifications

A wispy high cloud

High clouds observed over the middle latitudes typically reside at altitudes near and above 20,000 feet. At such rarefied altitudes, high clouds are composed of ice crystals.

 
Middle level clouds that look like cotton balls.

Middle clouds reside at an average altitude of ~10,000 feet. Keep in mind that middle clouds can form several thousand feet above or below the 10,000-foot marker. Middle clouds are composed of water droplets and/or ice crystals.

 
A foggy, rainy day at a lake.

Low clouds can form anywhere from the ground to an altitude of approximately 6,000 feet. For the record, fog is simply a low cloud in contact with the earth's surface.

 
A developing thunderstorm cloud.

Clouds of vertical development cannot be classified as high, middle, or low because they typically occupy more than one of the above three altitude markers. For example, the base of a tall cumulonimbus cloud often forms below 6,000 feet and then builds upward to an altitude far above 20,000 feet.

Just by knowing the three basic cloud types (cirrus, stratus, cumulus) and the four classifications (high, middle, low, and clouds of vertical development), along with their corresponding prefixes and suffixes, we can name lots of different types of clouds.

  • High clouds can either be "plain" cirrus, or we can add the prefix "cirro" to a suffix that describes their appearance (cirrostratus for high-altitude, layered clouds; cirrocumulus for high-altitude, "heap" clouds).
  • Middle clouds carry the prefix "alto" and also a suffix that describes their appearance (altostratus for mid-level, layered clouds; altocumulus for mid-level, "heap" clouds).
  • Clouds of vertical development always include the word "cumulus" or the prefix "cumulo," but can have various suffixes or other descriptive modifiers (like "fair-weather cumulus").
  • The names of low clouds have more variation. Low clouds can be referred to as plain "stratus" (if they're smooth and layered) or "stratocumulus" if they have both layered and heap-like characteristics, for example. If low, layered clouds are precipitating, they're called nimbostratus. The prefix "nimbo" comes from "nimbus," which means that this low cloud produces precipitation (note that nimbus can also be used as a suffix, as in cumulonimbus when a cumulus cloud is producing precipitation).
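The naming scheme above is systematic enough to sketch as a small lookup table. This is a toy illustration only; the groupings simply restate the eleven types listed in this section, not any official classification software:

```python
# Toy table of the eleven major cloud types, grouped by altitude class.
# Purely illustrative -- groupings follow the naming scheme described above.
CLOUD_TYPES = {
    "high": ["cirrus", "cirrostratus", "cirrocumulus"],
    "middle": ["altostratus", "altocumulus"],
    "low": ["stratus", "stratocumulus", "nimbostratus"],
    "vertical": ["fair-weather cumulus", "cumulus congestus", "cumulonimbus"],
}

def classify(cloud_name):
    """Return the altitude class ('high', 'middle', 'low', 'vertical') for a cloud."""
    for level, names in CLOUD_TYPES.items():
        if cloud_name in names:
            return level
    raise ValueError(f"unknown cloud type: {cloud_name}")

print(classify("altocumulus"))   # middle
print(classify("cumulonimbus"))  # vertical
```

Notice how the prefixes do the work: "cirro-" marks the high group, "alto-" the middle group, and "cumulo-"/"cumulus" flags vertical development.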

Learning to identify and describe the major cloud types is an important practical skill for any weather forecaster (see the Key Skill and Quiz Yourself sections below). Once you've spent ample time with those tools and are accustomed to looking at clouds from the bottom side, you're ready to look at clouds from the top side and tackle the principles of interpreting clouds on satellite imagery.

Key Skill...

Learning to identify the major cloud types can be a bit daunting. However, with some practice, you'll get the hang of it. To get started, spend some quality time right now going through the following interactive cloud atlas. It has everything you ever wanted to know about the names and descriptions of the eleven major cloud types that you should be familiar with in this course. Move your mouse over each red pin to see an example photo and description of that particular cloud type.

Quiz Yourself...

Feeling confident in your cloud identification skills? Take this quiz to see how you do.

Explore Further...

If you want to explore cloud identification further (or just look at some pretty cloud pictures), check out these online cloud atlases. I should point out that these sites delve into the details of cloud naming, which you are not required to know. Also, while I have explored these sites and found them to be accurate, you may find slight discrepancies in descriptions, etc. In such cases, please defer to descriptions listed in the course text rather than on these sites.

Cloud Atlas hosted by Penn State (opens in a new window): This atlas was created from images in the Karlsruhe Wolkenatlas (opens in a new window) (used with permission from Bernhard Mühr).

UCAR - Cloud Classifications (opens in a new window): This is a fairly exhaustive site on cloud classification.


Observing Weather from Space

Prioritize...

At the end of this section, you should be able to distinguish between geostationary and polar-orbiting satellites. You should also be able to describe their differences and roles in observing the earth, and be able to identify a satellite image as being collected by a geostationary satellite or a polar-orbiting satellite.

Read...

Today, meteorologists have an ever-increasing number of sophisticated, computerized tools for weather analysis and forecasting. But, before 1960, meteorologists drew all their weather maps by hand, and no useful computer models existed. Seems like the dark ages, right? Furthermore, before 1960, forecasters did not have weather satellites to afford them a bird's-eye view of cloud patterns. The dark ages ended after NASA launched TIROS-1 on April 1, 1960.

An early view of the earth from space taken by the Tiros satellite (pictured at right).

(Left) The first televised image from space captured by the TIROS-1 satellite (pictured right) on April 1, 1960.
Credit: NASA

Though the unrefined, fuzzy appearance of this image may seem crude and almost prehistoric, it was an eye-opener for weather forecasters, paving the way for new discoveries in meteorology (not to mention improved forecasts). Today, satellite imagery with high spatial resolution (opens in a new window) allows meteorologists to see fine details in cloud structures. For example, check out this close-up loop of the eye of Hurricane Ian making landfall in Florida in 2022 (opens in a new window). We've come a long way, wouldn't you agree?

Two types of flagships exist in the select fleet of weather satellites that routinely beam back images of Earth and the atmosphere -- geostationary satellites and polar-orbiting satellites.

Geostationary Satellites

Artist's rendering of GOES-16 in orbit.

An artist's rendering of GOES-16 in orbit.
Credit: NASA

Geostationary satellites orbit approximately 35,785 kilometers (22,236 miles) above the equator, completing one orbit every 24 hours. Thus, their orbit is synchronized with the rotation of the Earth about its axis, essentially fixing their position above the same point on the equator (hence the name "geostationary"). In the United States, the National Oceanic and Atmospheric Administration's (NOAA) geostationary satellites go by the name of "GOES" (Geostationary Operational Environmental Satellite) followed by a number. To get an idea of what a geostationary satellite looks like, check out the artist's rendering of GOES-16 on the right.
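That particular altitude isn't arbitrary: it's the one altitude at which the orbital period, given by Kepler's third law (T = 2π√(a³/μ)), matches the earth's rotation. A quick check, using the standard values for the earth's gravitational parameter μ and mean radius (both are my assumed constants, not figures from the text):

```python
import math

MU_EARTH = 3.986e14      # earth's gravitational parameter (m^3/s^2)
EARTH_RADIUS_KM = 6371   # mean earth radius (km)

def orbital_period_hours(altitude_km):
    """Kepler's third law for a circular orbit: T = 2*pi*sqrt(a^3/mu)."""
    a = (EARTH_RADIUS_KM + altitude_km) * 1000.0  # semi-major axis, in meters
    return 2 * math.pi * math.sqrt(a**3 / MU_EARTH) / 3600.0

print(f"{orbital_period_hours(35785):.1f} hours")  # ~23.9 -- one earth rotation
```

At 35,785 km the period works out to about 24 hours, so the satellite keeps pace with the rotating earth below; at any lower altitude it would orbit faster and drift out of position.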

Two operational geostationary satellites currently orbit over the equator at 75 and 135 degrees west longitude and go by the generic names "GOES-East" and "GOES-West," respectively. GOES-East is in a good spot to keenly observe Atlantic hurricanes as well as weather systems over the eastern half of the United States. GOES-West is in a better position to observe the eastern Pacific and the western half of the United States. If you are interested in learning more about the current condition of any particular GOES satellite, you can check out the GOES Spacecraft Status (opens in a new window) page run by NOAA's Office of Satellite Operations.

From their extremely high vantage point in space, GOES-East and GOES-West can effectively scan about one-third of the Earth's surface. Their broad, fixed views of North America and adjacent oceans make our fleet of geostationary satellites very effective tools for operational weather forecasters, providing constant surveillance of atmospheric "triggers" that can spark thunderstorms, flash floods, snowstorms and hurricanes (among other things). Once threatening conditions develop, the broad, fixed view of geostationary satellites is especially handy because we can create loops of geostationary satellite imagery, which allow forecasters to monitor the movement of weather systems and other atmospheric features. For example, this loop of GOES satellite images (opens in a new window) from the afternoon of April 8, 2024 shows the movement of clouds across the United States. The dark spot that moves across the image is the shadow cast by a total solar eclipse (opens in a new window) (a rare feature to find on satellite imagery)!

Geostationary satellites are far from perfect, however. Because they're centered over the equator, they don't have a very good view of high latitudes. Clouds at high latitudes appear highly distorted, and poleward of approximately 70 degrees, geostationary satellites become essentially useless.

I don't want to leave you with the impression that the GOES program is unique, however. Other countries also own and operate geostationary weather satellites. For more on these satellite programs, check out the Explore Further section below.

Summary: Geostationary satellites provide fixed views of large areas of the earth's surface (a large portion of an entire hemisphere (opens in a new window), for example). The fact that their view is fixed over the same point on earth means that sequences of their images can be created to help forecasters track the movement and intensity of weather systems. The primary limitation of geostationary satellites is that they have a poor viewing angle for high latitudes and are essentially useless poleward of 70 degrees latitude.

Polar-Orbiting Satellites

Polar-orbiting satellites pick up the high-latitude slack left by geostationary satellites. In the figure below, note that the track of a polar orbiter runs nearly north-south above the earth and passes close to both poles, allowing these satellites to observe, for example, large polar storms (opens in a new window) and large Antarctic icebergs (opens in a new window). Polar-orbiting satellites orbit at an average altitude of 850 kilometers (about 500 miles), which is much, much lower than geostationary satellites.

Each polar orbiter has a track that is essentially fixed in space, and it completes about 14 orbits every day while the Earth rotates beneath it. So, polar orbiters get a worldly view, but not all at once! Like making back-and-forth passes while mowing the lawn, these low-flying satellites scan the Earth in swaths (opens in a new window) roughly 2,500 to 3,000 kilometers wide, covering the entire earth twice every 24 hours.
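The "14 orbits per day" figure follows from the same Kepler relation used for geostationary orbits, just evaluated at a much lower altitude. A sketch, using the same standard constants for the earth's gravitational parameter and radius (my assumed values) and the 850 km average altitude quoted above:

```python
import math

MU_EARTH = 3.986e14      # earth's gravitational parameter (m^3/s^2)
EARTH_RADIUS_KM = 6371   # mean earth radius (km)

def orbits_per_day(altitude_km):
    """Orbits completed in 24 hours, via Kepler's third law T = 2*pi*sqrt(a^3/mu)."""
    a = (EARTH_RADIUS_KM + altitude_km) * 1000.0  # semi-major axis, in meters
    period_s = 2 * math.pi * math.sqrt(a**3 / MU_EARTH)
    return 86400.0 / period_s

print(f"{orbits_per_day(850):.1f}")  # ~14.1 orbits each day
```

Each orbit takes only about 100 minutes at this altitude, which is why a polar orbiter can lace the whole globe with swaths twice a day.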

A scaled drawing of earth, encircled by polar orbiting and geostationary satellites.

The orbits of geostationary and polar-orbiting satellites (drawn to scale).
Credit: David Babb

The appearance of a "lawn-mowing-like" swath against a data-void, dark background on a satellite image is a dead give-away that it came from a polar orbiter, as illustrated by this image from a polar-orbiter of Hurricane Michael in the Gulf of Mexico (opens in a new window) (credit: Johns Hopkins University (opens in a new window)) in early October, 2018. But, sometimes it's harder to tell whether an image came from a polar orbiter because some images are zoomed in enough that the swath can't be seen, like this image from a polar-orbiter of Hurricane Idalia in the Gulf of Mexico in late August, 2023 (opens in a new window). Polar orbiters are invaluable tools for tropical weather forecasters, providing a variety of specialized images to forecasters at the National Hurricane Center (opens in a new window) in Miami, Florida that they use to analyze storms during hurricane season.

NOAA operates polar-orbiting satellites through its Joint Polar Satellite System (JPSS). NOAA currently classifies the newest satellite as its "operational" polar orbiter, while slightly older satellites that continue to transmit data are classified as "secondary" or "backup" satellites. As a counterpart to the GOES satellites, the NOAA Office of Satellite Operations operates a JPSS Spacecraft Status (opens in a new window) page as well. NASA and the U.S. Department of Defense also operate many polar orbiters. All in all, thousands of polar-orbiting satellites are circling the earth in "low-earth orbit" sending back valuable data for everything from weather observation to communications applications to space-oriented research.

Summary: Polar-orbiting satellites orbit at a much lower altitude than geostationary satellites, and don't have a fixed view since the earth rotates beneath their paths. The benefit of polar-orbiters is that they can give us highly-detailed images, even at high latitudes. The main drawback is that they have a limited scanning width, and don't provide continuous coverage for any given area (like geostationary satellites do). A single image from a polar orbiter will often show a swath with sharply defined edges (opens in a new window) that mark the boundaries of what the satellite could see on a particular pass.

Data from satellites has truly revolutionized weather analysis and forecasting. Satellites can measure atmospheric temperatures, moisture, and winds, among other things. Roughly 80 percent of all data used to run computer forecast models comes from polar-orbiting satellites alone, so satellites are a critical part of weather forecast operations around the globe! Now that you have some background about the different types of satellites providing crucial weather data, we'll turn our attention to interpreting basic types of satellite images.

Explore Further...

As I mentioned above, the GOES program is not unique, and other countries also own and operate geostationary weather satellites (check out this international perspective on geostationary weather satellites (opens in a new window)). But, geostationary satellites don't just cover weather. More than 600 geostationary satellites hover above the equator around the world! With the number of communications satellites increasing, the "geostationary parking lot" is getting pretty crowded. If you look at the time-lapse photograph below, which was taken by a telescope atop Kitt Peak in Arizona between 0230Z and 11Z on March 19, 2007 and covers just 9 percent of the geostationary orbit, you can see many bright dots, which are geostationary satellites. Keep in mind that hundreds of geostationary satellites have been launched since this time-lapse photo was taken, so "geostationary parking spots" are starting to come at a premium!

Star trails on a long exposure photograph. Geostationary satellites are seen as points rather than streaks.

A time lapse of a small portion of the geostationary orbit taken from atop Kitt Peak in Arizona from 0230Z to 11Z on March 19, 2007. The lines represent star trails, while the bright dots mark the positions of geostationary satellites.
Credit: Dave Dooling, National Solar Observatory

How do I know those dots are geostationary satellites? Well, when photographers take time-lapse images of the nighttime sky, the stars leave "star trails" (check out this time-lapse photograph above Mauna Kea (opens in a new window) in Hawaii and note the awesome star trails; by the way, moonlight illuminated the mountain and sky). Of course, the stars don't move. Rather, the earth rotates about its axis and thus the stars appear to move. Now look closely at the time-lapse of the nighttime sky over Mauna Kea. Note that you don't see the stars themselves, only their trails. In other words, you don't see stars as fixed dots because the Earth rotates on its axis during the period of the time-lapse photography.

That means, of course, that the bright, fixed dots in the midst of the belt of star trails are in geosynchronous orbit with the earth (they obviously didn't move during the time-lapse photography). I emphasize here that there's no way that the light reflected by the geostationary satellites would be sufficiently bright to see them clearly on just a single snapshot, but the long exposure allows them to stand out on this time-lapse photograph.

Visible Satellite Imagery

Prioritize...

At the completion of this section, you should be able to describe how a satellite constructs an image in the visible spectrum (describe what's being measured) and how to interpret visible satellite images. Specifically, you should also be able to describe when it is appropriate to use visible satellite imagery and when it is not, and discern the relative thickness of various cloud types. After completing the sections on infrared imagery, water vapor imagery, and radar imagery, you should also be able to distinguish visible satellite imagery from these other types of images.

Read...

Perhaps you've heard a television weathercaster use the phrase "visible satellite image" before. Perhaps you also thought, "Of course it's visible if I can see it!" So, why make the distinction that a satellite image is "visible?" In short, visible satellite images make use of the visible portion of the electromagnetic spectrum. If you recall the absorptivity graphic (opens in a new window) that I introduced earlier, notice that from a little less than 0.4 microns to about 0.7 microns, there's very little absorption of radiation at these wavelengths by the atmosphere. In other words, the atmosphere transmits most of the sun's visible light all the way to the Earth's surface.

Along the way, of course, clouds can reflect (scatter) some of the visible light back toward space. Moreover, in cloudless regions, where transmitted sunlight reaches the Earth's surface, land, oceans, deserts, glaciers, etc. unequally reflect some of that visible light back toward space (with limited absorption along the way). You might say that visible light generally gets a free pass while it travels through the atmosphere.

An instrument on the satellite, called an imaging radiometer, measures the intensity (brightness) of the visible light scattered back to the satellite. I should note that, unlike our eyes, or even a standard camera, this radiometer is tuned to measure only very small wavelength intervals (called "bands"), so the instrument does not see all wavelengths of visible light. The shading of clouds, the Earth's surface (in cloudless areas) and other features, such as smoke from a large forest fire (opens in a new window), the plume of an erupting volcano (opens in a new window), or even chunks of ice floating on a lake (opens in a new window) can all be seen on a visible satellite image because of the sunlight they reflect.

What determines the brightness of the visible light reflected back to the satellite and thus the shading of objects on a visible satellite image? Well, to start with, we need to have some source of light. To see what I mean, check out this visible satellite loop of the United States (opens in a new window) spanning from roughly 10Z to 17Z on May 1, 2024. The United States is completely dark at the beginning because 10Z was still before sunrise, but gradually we start to see clouds appear on the image from east to west as the sun rose and the reflected sunlight reached the satellite. The bottom line is that standard visible satellite imagery is only useful during the local daytime because we are measuring the amount of sunlight being reflected from clouds and the surface. If there's no sunlight, there's no image.

Now, assuming that it's during the day, the brightness of the visible light reflected by an object back to the satellite largely depends on the object's albedo, which is simply the percentage of light striking an object which gets reflected. Since the nature of Earth's surface varies from place to place (paved streets, forests, farm fields, water, etc.), the surface's albedo varies from place to place.

A visible satellite image of Pennsylvania and surrounding states.

A visible satellite image from GOES-East on a mostly clear October day. Note that bodies of water, which have a very low albedo (about 8 percent), appear darkest on the image, while the appearance of the land surface varies depending on its albedo (forests have a lower albedo than vegetation / agricultural fields, etc.). Here's the full-sized annotated image (opens in a new window) for a closer look.
Credit: College of DuPage

For example, take a look at the visible satellite image showing Pennsylvania and surrounding states (above). For the full effect, I recommend opening the full-sized version of the image (opens in a new window) for a better look. This particular day was nearly cloudless over Pennsylvania, so it gives us a great opportunity to really see how albedo makes a difference in the appearance of an object on visible satellite imagery. The surface in Pennsylvania hardly looks uniform, and that's a result of differing albedos associated with different surfaces. For example, bare soil reflects back about 35 percent of the visible light that strikes it. Vegetation has an albedo around 15 percent. By the way, bodies of water, with a representative albedo of only 8 percent, typically appear darkest on visible satellite images. See how the labeled bodies of water all look darker than the land surfaces?
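To make those albedo comparisons concrete, here's a tiny Python sketch that maps the representative albedos quoted above to grayscale pixel values. The linear albedo-to-pixel mapping is a simplification I'm assuming purely for illustration; real imagery involves calibration and sun-angle effects:

```python
# Map representative albedos (percent of light reflected) to grayscale
# pixel values, mimicking how a visible image shades each surface.
# Albedo figures are the representative values quoted in the text; the
# linear 0-255 mapping is an assumed simplification for illustration.
ALBEDOS = {
    "water": 8,
    "vegetation": 15,
    "bare soil": 35,
    "thick cloud (cumulonimbus)": 90,
}

for surface, albedo_pct in sorted(ALBEDOS.items(), key=lambda kv: kv[1]):
    pixel = round(albedo_pct / 100 * 255)   # brighter = more reflective
    print(f"{surface:28s} albedo {albedo_pct:2d}%  ->  pixel {pixel:3d}")
```

Under this toy mapping, water lands near the black end of the scale while a thick cumulonimbus lands near the white end, which is exactly the contrast you see on the annotated image above.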

If you want another comparison point, check out the "true color" satellite view of Pennsylvania and surrounding states from Google (opens in a new window). Can you see how the heavily forested areas of northern Pennsylvania match up with the darker shaded areas I've highlighted above? Can you see how the largely agricultural valleys of southeastern Pennsylvania (with their higher albedo) appear a bit brighter on the image above? Of course, the brightest areas on the visible satellite image above correspond to clouds, which have a much higher albedo than the surface of the earth under most circumstances.

But, many different types of clouds exist, and they all have varying albedos, too! To see what I mean, let's perform an experiment. First, start with a tank of water (upper left in the photograph below). Now add just a tablespoon of milk (upper right), which increases the albedo a bit. By adding the milk, some of the radiation that is passing front-to-back through the tank is being scattered back towards the observer and the water-milk mixture takes on a whitish appearance. In frames #3 and #4 (lower-left and lower-right, respectively), we've added more milk. Now we see that the tiny globules of milk fat further increase albedo as more of the visible light is being scattered back toward the observer, while the transmission of light through the water-milk mixture decreases (that's why the word "SURFACE" is obscured).

A 4-panel photographic image that shows the scattering effect that diluted milk can have.

A series of images demonstrating the effect of scattering particles on albedo. The experiment starts with a tank of pure water (image 1). Next, milk is added in increasing amounts. Notice that as milk is added, albedo increases as more light is reflected back to the observer (and less light is transmitted through the water-milk mixture).
Credit: David Babb

Some key observations that you should note from this experiment:

  • It didn't take many globules of milk fat (1 tablespoon of milk in a 10-gallon fish tank) to begin noticeably decreasing transmission and increasing albedo.
  • A medium can very quickly become "optically thick" -- that is, nearly zero transmission and a high albedo (a large percentage of light is reflected back to the observer).
  • In frame #4, we had only added a total of three tablespoons of milk to the tank (so the tank is still mostly filled with water), yet the transmission of light through the tank is minimal and the albedo is fairly high. Even if we switched to a tank filled with pure milk, the albedo would only increase marginally (maybe another 20 or 30 percent).

This last point is true of clouds as well; once a cloud becomes "thick enough," additional growth will not change its albedo (and appearance on visible satellite imagery) appreciably. The bottom line is that thick clouds, like cumulonimbus (which are associated with showers and thunderstorms), are like tall glasses of milk in the sky; they contain lots of light-scattering water droplets and/or ice crystals. Meteorologists say that such clouds have a "high-water (or ice) content" and can have albedos as high as 90 percent, which causes them to appear bright white on visible satellite imagery.

More subdued clouds, such as fog and stratus (opens in a new window), typically have a lower water content, and, in the spirit of the glass of water with just a little milk, a lower albedo. Indeed, the albedo for thin (shallow) fog and stratus can be as low as 40 percent. So, as a general rule, fog and stratus have a duller white appearance compared to thicker, brighter cumulus clouds. Here's an example of valley fog (opens in a new window) over Pennsylvania and New York for reference. Wispy, thin cirrus clouds have the lowest albedo (low ice content), averaging about 30 percent. They appear almost grayish compared to the bright white of thick cumulonimbus clouds outlined on the satellite image below.

A visible satellite image highlighting how cirrus can appear.

A visible satellite image showing a line of cumulonimbus (squall line) with cirrus blowing east off the tops of the storms.
Credit: NOAA

As a general caveat to our discussion about determining shading on visible satellite images, I point out that brightness also depends on sun angle. For example, the brightness of the visible light reflected back to the satellite near sunset is limited, given the low sun angle and the relatively high position of the satellite. To see what I mean, check out this loop of visible satellite images (opens in a new window) showing severe thunderstorms, which erupted over Oklahoma and Kansas. The tall, thick cumulonimbus clouds that developed appear bright white initially, but as sunset approaches, the appearance of the clouds darkens. If you look closely at the images later in the loop, you'll be able to see tall cumulonimbus clouds casting shadows to the east. Pretty cool, eh?

One more quick point about interpreting visible images. Clouds aren't the only objects that can have very high albedos; therefore, they're not the only objects that can appear whitish. Indeed, cloudless, snow-covered regions can have albedos as high as 80 percent, and they also appear bright white on visible imagery. To see how to tell the difference between clouds and snow cover on standard visible imagery, check out the Case Study below, after reviewing the following summary highlighting the important characteristics of visible satellite imagery:

Visible satellite imagery...

  • is based on the albedo of objects (the fraction of incoming sunlight that is reflected to the satellite).
  • can tell you about the thickness of clouds (thicker clouds have higher albedos and appear brighter than thinner clouds, which have lower albedos), but supports only general inferences about a cloud's altitude.
  • can be used to distinguish between snow cover and clouds, given that surface features such as lakes and rivers can be observed (see Case Study below)
  • is not able to detect clouds (or anything else) during the satellite's local night (visible imagery requires sunlight).
  • is not useful for determining whether precipitation is present under the observed clouds.

Case Study...

Snow Cover or Clouds?

Since snow cover and clouds can have very similar albedos, distinguishing between them on visible satellite imagery can sometimes be tricky. Check out the short video below (3:04), which demonstrates some ways to tell the difference.

PRESENTER: Both clouds and snow cover have a high albedo, and can appear in similar shades of white on visible satellite imagery, so let’s go over some ways to distinguish between the two. For starters, regions of snow cover often reveal details of the local terrain, which appear somewhat darker.

On this visible satellite image, we can see this swath of white shading from Ohio through northern Pennsylvania and into New York and New England to the north of this line, but the fact that we can pick out some surface features indicates that this is snow cover, not cloud cover. We can see the unfrozen Finger Lakes in New York, which have a much lower albedo since snow did not accumulate on the water. Lakes Erie and Ontario were largely unfrozen, too, and that gives a nice contrast between the low albedo of the water, which appears dark, next to the higher albedo of the snow cover on the ground, which appears brighter.

We can also pick out heavily forested regions because deciduous and coniferous forests also appear dark on visible imagery. Regions with dense forests mask the high albedo of the underlying snowpack because trees often lose the snow that accumulates on their limbs fairly quickly, so the satellite sees the canopy of trees instead of the snowpack on the ground. The heavily forested Adirondack Mountains in New York really stick out, as do some forested areas in northern Pennsylvania. Farther to the west into northeastern Ohio, the more agricultural landscape appears brighter because there are fewer trees and the satellite sees the high-albedo snowpack better.

Of course, if you have a loop of visible satellite images, distinguishing snow cover from clouds is even easier because snow cover doesn’t move, but clouds do. If we look at this loop which spans from about 14Z to 1630Z, you can see clouds streaming over Ohio and Michigan into western Pennsylvania and New York. The leading edge of this cloud cover looks pretty wispy and not very bright, and we can still make out some of the snow cover beneath it, suggesting that these are thin cirrus clouds. If you look closely, you can even see some linear features within the cirrus, indicative of airplane contrails. The clouds entering the left side at the end of the loop into northwest Ohio appear brighter and have a higher albedo, indicating that they are thicker than the cirrus streaming ahead of them.

Visible satellite imagery is a great tool for discerning cloud thickness, and identifying areas of snow cover when clouds aren’t too prevalent. I hope this video helps you with your interpretations of visible satellite imagery.

Ultimately, by carefully studying the visual cues of terrain features or watching the movement of clouds on a loop, you can usually successfully discern clouds from snow cover on visible imagery. But, by utilizing more wavelengths of the electromagnetic spectrum, we can really change the "look" of clouds and snow cover. For more details, along with a list of useful resources for accessing satellite images, check out the Explore Further section below.

Explore Further...

Key Data Resources

Studying satellite images should be an integral part of any forecaster's daily routine, so if you're interested in starting to explore satellite images online, I recommend the resources below. Just keep in mind that you'll encounter a lot of different types of satellite images on these pages. We'll learn about some of them soon. Others are beyond the scope of this course, but you're welcome to investigate on your own!

Snow Cover and Clouds on Multi-Channel Imagery

When information collected at multiple wavelengths of the electromagnetic spectrum is combined into a single image (a "multi-channel" or "multi-spectral" approach), forecasters can sometimes gain more insight than they can by looking at a satellite image created using a single wavelength. The short video below (2:20) shows an example of using three wavelengths to more easily discern clouds from snow cover. If you're interested in learning more about this satellite product, check out this "Quick Guide (opens in a new window)" detailing how it's created and how to interpret it.

PRESENTER: Discerning between high-albedo surfaces like clouds and snow cover can sometimes be tricky with standard visible imagery. We’re left to track the movement of clouds on loops or identify snow cover by picking out surface features with lower albedo like unfrozen bodies of water or heavily forested areas.

Now let’s take a different, more colorful look at this loop. This loop was created by expanding beyond just the visible portion of the electromagnetic spectrum. This particular satellite product is created by combining data collected at 3 different wavelengths – one in the visible portion of the spectrum, one just outside the visible portion in the near-infrared, and one in the infrared. By assigning different colors to the information gathered at each wavelength, snow cover and clouds appear differently, which makes it easier to discern between them.

On this particular image, the visible channel is detecting the albedo of various objects, but instead of white, it’s displayed in a green shading. The near-infrared channel is shaded blue and is useful for distinguishing clouds composed mainly of liquid drops from those composed of ice crystals. Finally, the infrared channel is shaded red, and relates to temperature of the object being detected.

When combining all of this information into one image, areas of snow coverage show up in green, while clouds tend to show up in various other shades, depending on how cold their tops are and whether they’re composed mainly of liquid or ice. These cirrus clouds advancing into Pennsylvania and West Virginia from the west appear sort of pinkish because they’re very high and cold, and are composed of ice crystals. Meanwhile, most of these clouds out over the Atlantic are cyan colored because they are lower and composed of liquid drops.

While exact shadings can vary based on several factors, using multiple wavelengths can give us more insights than just using one channel, and this type of imagery has a number of applications in addition to just distinguishing between snow cover and clouds. It can be useful for studying growing cumulus clouds as they become increasingly composed of ice, and can be used to track heavy snow squalls in areas that have poor radar coverage, among other things.
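For the programmatically curious, the channel-to-color assignment described in the video can be sketched with NumPy. The tiny 2x2 "images" and their values below are made up purely for illustration; real multi-channel products involve calibration and nonlinear scaling that this sketch ignores:

```python
import numpy as np

# Toy sketch of a three-channel composite, following the assignments
# described in the video: visible -> green, near-infrared -> blue,
# infrared -> red. All pixel values are invented and scaled to 0..1.
visible = np.array([[0.9, 0.1],
                    [0.8, 0.2]])   # high albedo: snow or cloud
near_ir = np.array([[0.1, 0.1],
                    [0.7, 0.2]])   # liquid clouds reflect; snow/ice absorb
infrared = np.array([[0.3, 0.5],
                     [0.4, 0.5]])  # scaled so warmer = larger

# Stack the three channels into one RGB image: shape (rows, cols, 3)
rgb = np.dstack([infrared, visible, near_ir])

print(rgb.shape)   # (2, 2, 3)
print(rgb[0, 0])   # green-dominated pixel: bright in visible, dark in
                   # near-IR -> consistent with snow cover
```

The point of the stacking is that each pixel's color now encodes three measurements at once, which is why snow, liquid clouds, and ice clouds separate into distinct shades.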

Infrared Satellite Imagery

Prioritize...

After reading this section, you should be able to describe what is displayed on infrared satellite imagery, and describe the connection between cloud-top temperature retrieved by satellite and cloud-top height. You should also be able to discuss the key assumption about vertical temperature variation in the atmosphere that meteorologists make when interpreting infrared imagery. Finally, it is important that you be able to differentiate an IR image from visible, water vapor, and radar imagery. This skill involves knowing what clues distinguish one type of imagery from another.

Read...

Visible satellite imagery is of great use to meteorologists, and for the most part, its interpretation is fairly intuitive. After all, the interpretation of visible imagery somewhat mimics what human eyes would see if they had a personal view of the earth from space. But, visible satellite imagery also has its limitations: It's not very useful at night, and it only tells us about how thick (or thin) clouds are.

By limiting our "vision" only to the visible part of the spectrum, we diminish our ability to describe the atmosphere accurately. Consider the images below. The image on the left shows a photo (which uses the visible portion of the spectrum) of a man holding a black plastic trash bag. On the right is an infrared (IR) image of that same man. Notice that switching to infrared radiation gives us more information (we can see his hands) than we had just using visible light. Furthermore, the fact that the shading in the infrared image is very different from the visible image suggests that perhaps we can gain different information from this new "look."

Two photos of a man, one using visible light, and one using infrared emissions.

Looking at the same image in both the visible and infrared portions of the electromagnetic spectrum provides insights that a single image cannot. Likewise with remote sensing of the atmosphere. By gathering data at multiple wavelengths, we gain a more complete picture of the state of the atmosphere.
Credit: NASA/JPL-Caltech/R. Hurt (SSC)

Before we delve into what we can learn from infrared satellite imagery, we need to discuss what an infrared satellite image is actually displaying. Just like visible images, infrared images are captured by a radiometer tuned to a specific wavelength. Returning to our atmospheric absorption chart (opens in a new window), we see that between roughly 10 microns and 13 microns, there's very little absorption of infrared radiation by the atmosphere. In other words, infrared radiation at these wavelengths emitted by the earth's surface, or by other objects like clouds, gets transmitted to the satellite with very little absorption along the way.

You may recall from our previous lesson on radiation that the amount of radiation an object emits is tied to its temperature. Warmer objects emit more radiation than colder objects. So, using the mathematics behind the laws of radiation (namely Kirchhoff's Law and Planck's Law), computers can convert the amount of infrared radiation received by the satellite to a temperature (formally called a "brightness temperature" even though it has nothing to do with how bright an object looks to human eyes). Finally, these temperatures are converted to a shade of gray or white (or a color, as you're about to see), to create an infrared satellite image. Conventionally, lower temperatures (colder objects) are represented by brighter shades of gray and white, while higher temperatures (warmer objects) are represented by darker shades of gray.
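If you're curious what that radiance-to-temperature conversion looks like in practice, here's a minimal Python sketch of inverting Planck's law. The 10.7-micron wavelength is a typical infrared window channel, the 280 K scene is a made-up example, and real satellite processing includes calibration steps this sketch ignores:

```python
import math

# Invert Planck's law to turn a measured spectral radiance into a
# "brightness temperature," as described above. All scene values are
# invented purely to demonstrate the round trip.
H = 6.626e-34   # Planck constant (J s)
C = 2.998e8     # speed of light (m/s)
K = 1.381e-23   # Boltzmann constant (J/K)

def planck_radiance(wavelength_m, temp_k):
    """Spectral radiance B(lambda, T) in W / (m^2 sr m)."""
    return (2 * H * C**2 / wavelength_m**5) / (
        math.expm1(H * C / (wavelength_m * K * temp_k)))

def brightness_temperature(wavelength_m, radiance):
    """Invert Planck's law: the temperature a blackbody would need
    in order to emit the measured radiance at this wavelength."""
    return (H * C / (wavelength_m * K)) / math.log1p(
        2 * H * C**2 / (wavelength_m**5 * radiance))

wl = 10.7e-6                      # 10.7 microns, in the IR window
b = planck_radiance(wl, 280.0)    # simulate radiance from a 280 K scene
print(round(brightness_temperature(wl, b), 1))   # round-trips to 280.0
```

The key idea is that because the IR window is nearly transparent, the radiance reaching the satellite really does reflect the emitting object's temperature, so this inversion is meaningful.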

One challenge of working with infrared images is that they can "look" very different, even if they're displaying the exact same data. Some infrared images use grayscale so that they resemble visible images (like the first example in the slideshow below), while others include all the colors of the rainbow! Infrared images that contain different color schemes are usually called enhanced infrared images, not because they are "better," but because the color scheme highlights a particular feature on the image (usually very low temperatures). Click through the slideshow below to see a few examples. All four images in the slideshow display the exact same data; there's really no fundamental difference between a "regular" (grayscale) infrared image and an enhanced infrared image even though different color schemes change the look of the image. The key with any IR image is to locate the temperature-color scale (opens in a new window) (usually along the top, side, or bottom of the image) and match the shading to whatever feature you're looking at.

Four corresponding infrared satellite images with differing color schemes. The "traditional" infrared image is shown first. Toggle through the other images to see various "enhanced" infrared images which contain colors that mark certain key temperature ranges (in this case very low temperatures).
Credit: University of Wisconsin / SSEC

So, we know that an infrared radiometer aboard a satellite measures the intensity of radiation and converts it to a temperature, but what temperature are we measuring? Well, because atmospheric gases don't absorb much radiation between about 10 microns and 13 microns, infrared radiation at these wavelengths mostly gets a "free pass" through the clear air. This means that for a cloudless sky, we are simply seeing the temperature of the earth's surface. To see what I mean, check out this loop of infrared images of the Sahara Desert (opens in a new window) in northern Africa. Note the very dramatic changes in ground temperatures from night (light gray ground) to day (black ground) and back to night again. Such dramatic diurnal changes in ground temperatures (opens in a new window) are common over deserts, where the broiling sun bakes the earth's surface by day and the desert floor cools off rapidly after sunset.

Of course, sometimes clouds block the satellite's view of the surface; so what's being displayed in cloudy areas? Well, while atmospheric gases absorb very little infrared radiation at these wavelengths (and thus emit very little by Kirchhoff's Law), that's not the case for liquid water and ice, which emit very efficiently at these wavelengths. Therefore, any clouds that are in the view of the satellite will be emitting infrared radiation consistent with their temperatures. Furthermore, infrared radiation emitted by the earth's surface is completely absorbed by the clouds above it (opens in a new window). So, even though there is plenty of IR radiation coming from below the cloud and even from within the cloud itself, the only radiation that reaches the satellite is from the cloud top. Therefore, IR imagery is the display of either cloud-top temperatures or the Earth's surface temperature (if no clouds are present).

A lush field with a snow-capped mountain in the background.

The backdrop of snow-capped Mauna Kea (which means "White Mountain" in the Hawaiian language) against the lush, grazing grass removes any doubt about the validity of the observation that temperature usually decreases with increasing altitude.
Credit: Karyl-Ann Ah Hee

So, infrared imagery can tell us the temperature of the cloud tops, but how is that useful? Well, if we make the simple assumption that temperature decreases with increasing height in the lower atmosphere (that is, the troposphere), then we can equate cloud-top temperatures to cloud-top heights. In other words, clouds with very cold tops have high-altitude cloud tops (for example: cirrostratus, cirrocumulus, cumulonimbus). Clouds with warmer tops (such as stratus, stratocumulus, or cumulus) reside at lower altitudes.
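That temperature-to-height reasoning can be sketched in a few lines of Python, assuming the standard-atmosphere average lapse rate of 6.5 degrees Celsius per kilometer. The surface temperature and cloud-top temperatures below are made-up illustrative values:

```python
# Sketch of the cloud-top temperature -> cloud-top height reasoning,
# assuming temperature falls off at the standard-atmosphere average
# lapse rate of 6.5 C per kilometer. All temperatures are invented
# for illustration; the real lapse rate varies from day to day.
LAPSE_RATE_C_PER_KM = 6.5
surface_temp_c = 20.0

def cloud_top_height_km(cloud_top_temp_c):
    """Height at which the assumed lapse rate reaches the observed
    cloud-top temperature."""
    return (surface_temp_c - cloud_top_temp_c) / LAPSE_RATE_C_PER_KM

for name, top_temp in [("stratus", 10.0), ("cumulonimbus", -58.5)]:
    print(f"{name:13s} top at {top_temp:6.1f} C "
          f"-> roughly {cloud_top_height_km(top_temp):4.1f} km")
```

Under these assumed numbers, a warm stratus top sits only a kilometer or two up, while a very cold cumulonimbus top lands near the top of the troposphere, which is exactly the inference forecasters draw from bright shading on IR imagery.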

Given that infrared imagery can tell us about the altitude of cloud tops, and visible imagery can tell us about the thickness of clouds, meteorologists use both types of images in tandem. Using them together makes for a powerful combination that helps to specifically identify types of clouds. Let's apply this quick summary to a real case so I can drive home this point using the short video below (2:39).

PRESENTER: Let’s use these side-by-side visible and infrared images to see how weather forecasters use both types of images to diagnose cloud types. Even though these images look pretty similar at first glance, they’re displaying very different things. Visible satellite imagery is most like what we see with our eyes. It’s based on the amount of visible light that gets reflected back to the satellite. But, it’s critical to realize that infrared imagery is different. It’s showing us temperature, either of cloud tops or the earth’s surface. Note that even though no temperature scale is shown on the infrared image, brighter shades of gray and white correspond to lower temperatures, as is typically the case.

Let’s start by looking at Point A, which is located in the line of bright white clouds extending from the Outer Banks of North Carolina down into Florida. Their brightness on visible imagery indicates that these are thick clouds. These clouds also appear bright on infrared imagery, so they have cold tops, indicating that the tops are high in the troposphere. Thus, given that these clouds are thick and have cold tops, we can assume that they are cumulonimbus, which can have tops reaching altitudes upwards of 60,000 feet.

Now let’s look at Point B, located in the area of "feathery" clouds over the Atlantic. Obviously, these feathery clouds are not as bright as the area of cumulonimbus on visible imagery, which means the clouds at Point B are much thinner. On the infrared image, these thin clouds appear bright white, meaning that they have cold tops, which are high in the troposphere. Therefore, they must be cirrus clouds, which are high and thin. I should add the caveat that sometimes when clouds have very thin spots, infrared radiation from the earth's surface can leak through holes in the clouds and reach the satellite. That bit of extra radiation from the warm earth can make the tops of very thin clouds appear a little warmer and lower than they really are.

Finally, let’s turn our attention to Point C, which is located in the region of clouds over the Great Lakes and upper Ohio Valley. The darker grayish appearance on infrared imagery tells us that they're low clouds with warm tops. These clouds are fairly bright on the visible image, meaning that they must be moderately thick. Given the somewhat "cellular" nature and breaks in between blobs of clouds, these are likely stratocumulus clouds, although farther north in the Great Lakes there's likely a more solid deck of stratus.

The lesson learned here is that both visible and infrared imagery can be used together to identify cloud types during the daytime.

While both visible and infrared imagery can be used together to identify cloud types during the daytime, at night, routine visible imagery is not feasible, so weather forecasters must rely almost exclusively on infrared imagery. Though infrared imagery is indispensable at night, it has some drawbacks. Detecting nighttime low clouds and fog can be next to impossible because the radiating temperatures of the tops of low clouds and fog are often nearly the same as those of nearby ground where stratus clouds haven't formed.

The Challenges of Infrared Images

To learn more about the shortcomings of IR images at night, and to review what you've already learned in this section, check out this short video (2:22) showing an infrared satellite simulator (opens in a new window) (video transcript (opens in a new window)). As the video demonstrates, in cases where our assumption about temperatures decreasing with increasing height breaks down, the appearance of infrared images might not be what we expect. By the way, I encourage you to give the infrared imagery simulator (opens in a new window) a try for yourself. I suggest trying a few different hypothetical situations, as in the video, to see how they might look on infrared imagery, which can help you see what factors affect the appearance of infrared satellite images.

One of the scenarios shown in the video is something that you might encounter at night or early in the morning: The ground in cloud-free areas can sometimes actually be colder than the tops of nearby low clouds, which can cause IR images to look a bit strange. Take a look at the image below, collected at 1315Z on a February morning. Keep in mind that 1315Z is 7:15 AM Central Time in February (right around sunrise). Focus your attention on the slightly darker patch that's circled. Given that it's darker (and warmer), we must be looking at bare ground, right? Now toggle the slideshow to the visible image from about one hour later (when there was enough sunlight for a visible image).
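For the record, converting a "Z" (UTC) time stamp to local time is just a matter of applying the right UTC offset, which Python's zoneinfo module can do for us. A minimal sketch follows; the specific date is made up for illustration, but any late-February date falls in U.S. Central Standard Time (UTC minus 6 hours):

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo  # Python 3.9+

# 1315Z on an illustrative late-February morning (the exact date is made up).
obs_utc = datetime(2023, 2, 21, 13, 15, tzinfo=timezone.utc)

# Convert to U.S. Central Time; zoneinfo applies the correct UTC offset
# (including daylight saving time, when it's in effect).
obs_local = obs_utc.astimezone(ZoneInfo("America/Chicago"))

print(obs_local.strftime("%H:%M %Z"))  # 07:15 CST
```

The same conversion works for any time zone name from the tz database (for example, "America/New_York" for Eastern Time).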

An infrared satellite image collected at 1315Z on a late February day. The dark patch over northern Texas and Oklahoma (circled on the IR image) represents low clouds and fog, as is evident from the visible image from one hour later (toggle the slideshow to see the visible image). The surrounding lighter areas on the infrared image are characteristic of ground which has cooled to below the temperature of the low cloud tops.
Credit: NCAR

The visible image shows a bank of low clouds and fog where the darker shading was located on the infrared image. So, why did those low clouds and fog appear darker than their surroundings on the infrared image? Their tops were actually warmer than the surrounding bare ground in areas with clear skies. The map of regional station models from 1343Z (opens in a new window) shows that it was very chilly in the area of the Texas and Oklahoma panhandles where skies were clear. In other words, this situation violated our assumption that temperatures decrease with increasing height in the troposphere. We'll explore the reasons why these exceptions exist later in the course, but ground temperatures overnight in the cold season are often colder than the overlying air. The time of the infrared image, 1315Z (right around sunrise), is near the time when ground temperatures are often at their lowest (and when it's most likely for surrounding ground to be colder than nearby low cloud tops).

On the other hand, it can also be easy to assume that colors equating to low temperatures must mean we're looking at high, cold cloud tops. While that's usually the case, take a look at this enhanced infrared image from 13Z on December 23, 2022 (opens in a new window). The entire northern United States is awash in colors indicating temperatures of -20 degrees Celsius (-4 degrees Fahrenheit) or lower. So, do all the colors represent high, cold cloud tops? Nope! You're looking at very cold ground in much of the north-central U.S. and into the Midwest. The clue that the colored area isn't all clouds is that we can see surface features (opens in a new window) -- the unfrozen Missouri and Illinois Rivers appear warmer than their surroundings, as do several cities, such as Madison, Wisconsin. An outbreak of frigid air caused the ground to be so cold that it met the threshold to be colorized on this particular image!

The bottom line here is that you have to be careful when examining IR imagery, especially in cases where you're dealing with low clouds and/or the ground is very cold. While the assumption that temperatures decrease with increasing height in the troposphere is usually correct, exceptions do exist! Just remember that you are looking at temperatures and that lighter gray or coloring doesn't always mean cloudy skies. There are methods for detecting low clouds, which involve subtracting data collected at different IR wavelengths to extract only the low cloud field (if you're interested in seeing an example, check out the Explore Further section below).

This concludes our discussion of infrared satellite imagery. Now it's time to tackle water vapor imagery. But first, review the key points from this section.

Infrared satellite imagery...

  • is based on the fact that measuring an object's infrared emission tells you something about its temperature.
  • displays the temperature of either cloud tops or the earth's surface (if the sky is clear).
  • can be combined with the assumption that temperature decreases with increasing height to allow cloud-top heights to be determined. Lower temperatures typically mean higher cloud tops.
  • is not able to give any direct indication of cloud thickness or the presence of precipitation (although inferences can be made in some cases).
  • should not be confused with radar imagery. Inexperienced forecasters sometimes confuse enhanced infrared satellite images (opens in a new window) with similarly colored radar images (opens in a new window). If you are uncertain, look at the color key (an infrared image will always have units of temperature).

Explore Further...

As you learned in this section, one of infrared imagery's main advantages is that it's useful at night, but one of the challenges of interpreting IR images at night is that the tops of low clouds or fog can sometimes have temperatures similar to that of the earth's surface in surrounding areas where it's not cloudy. In these situations, it can be difficult or impossible to pick out the areas of low clouds or fog with conventional infrared imagery, but subtracting data at different infrared wavelengths can help us with this problem. For an example, check out the short video below (2:30). If you're interested in learning more about the satellite product featured in this video, called the "Nighttime Microphysics RGB," check out this quick guide (opens in a new window).

PRESENTER: Detection of low clouds and fog using infrared imagery can sometimes be tricky at night and early in the morning because one of the main assumptions that forecasters use when interpreting infrared images – that temperatures decrease with increasing height – isn’t always true.

Take this enhanced infrared image as an example. Assuming that temperatures decrease with increasing height might lead us to believe that this dark area has clear skies, meaning that the satellite is seeing emissions from the relatively warm ground, while the lighter shaded areas, which are colder, represent cloud cover.

But, that’s not the case at all. The brighter gray shaded areas actually have clear skies, and they appear colder on this enhanced infrared image because the ground is colder than the tops of the low clouds and fog in this area. For the record, these very brightly colored areas actually do represent very cold cloud tops, which are high in the troposphere.

Difficulty in distinguishing between low clouds or fog and clear skies on enhanced infrared imagery at night or early in the morning isn’t all that uncommon, because the tops of low clouds can be warmer than, or have temperatures similar to, the ground in surrounding areas with clear skies.

But, using multiple wavelengths of the electromagnetic spectrum gives forecasters another tool for more easily identifying low clouds or fog at night. This image was created by using multiple wavelengths from the infrared portion of the electromagnetic spectrum, differencing their contributions in order to better identify cloud thickness, composition, and temperature, and then applying different colors. Using this approach causes low clouds and fog to appear much more intuitively – we can see the area of low clouds across southeast Texas over into Louisiana and Arkansas in this whitish tan shading. The really high clouds to the northwest here now appear very dark, while the slice of cold ground in between appears pink.

Finally, once the sun rose on this particular day, traditional visible imagery confirmed our interpretation of the multi-channel approach – with a thick area of low clouds and fog, surrounded by clear skies. So, the multi-channel approach at night really made the interpretation of low clouds and fog much more intuitive compared to traditional infrared imagery.


Water Vapor Imagery

Prioritize...

Water vapor imagery can be a challenging topic! At the completion of this section, you should be able to...

  • describe what is displayed on water vapor satellite imagery and correctly interpret water vapor images.
  • explain the difference between using wavelengths between roughly 6 and 7 microns versus 10 and 13 microns.
  • explain what is meant by the term "effective layer" and discuss the implications of a warm versus cold effective layer.
  • explain what information is not obtainable from a water vapor image and what features are almost never observed on such images.

As with the other sections on satellite imagery, it is important that you be able to differentiate a water vapor image from visible, traditional IR, and radar imagery. You should be able to point to certain clues that tell you that you are looking at a water vapor image and not one of the other types.

Read...

Our look at visible and infrared imagery has hopefully shown you that using a variety of wavelengths in remote sensing is helpful because this approach gives us a more complete picture of the state of the atmosphere. Meteorologists can use visible and infrared imagery to look at the structure and movement of clouds because these types of images are created using wavelengths at which the atmosphere absorbs very little radiation (so radiation reflected or emitted from clouds passes through the clear air to the satellite without much absorption). Now, what if we took the opposite approach? What if we looked at a portion of the infrared spectrum where atmospheric gases (namely water vapor) absorbed nearly all of the terrestrial radiation? Water vapor imagery uses this exact approach.

In case you didn't catch it in the paragraph above, let me be clear: Water vapor imagery is another form of infrared imagery, but instead of using wavelengths that pass through the atmosphere with little absorption (like traditional infrared imagery, which utilizes wavelengths between roughly 10 and 13 microns), water vapor imagery makes use of slightly shorter wavelengths between about 6 and 7 microns. As you can tell from our familiar atmospheric absorption chart (opens in a new window), these wavelengths are mostly absorbed by the atmosphere, and by water vapor in particular. Therefore, water vapor strongly emits at these wavelengths as well (according to Kirchhoff's Law). Thus, even though water vapor is an invisible gas at visible wavelengths (our eyes can't see it) and at longer infrared wavelengths, the fact that it emits so readily between roughly 6 and 7 microns means the radiometer aboard the satellite can "see" it.

This fact makes the interpretation of water vapor imagery different from that of traditional infrared imagery (which is mainly used to identify and track clouds). Unlike clouds, water vapor is everywhere, so you will very rarely see the surface of the earth in a water vapor image (except perhaps during a very dry, very cold Arctic outbreak). Also unlike clouds, water vapor doesn't have a hard upper boundary (like cloud tops). Water vapor is most highly concentrated in the lower atmosphere (due to gravity and proximity to source regions like large bodies of water), but its concentration tapers off gradually at higher altitudes.

The fact that water vapor readily absorbs radiation between roughly 6 and 7 microns also raises an interesting question: Just where does the radiation that ultimately reaches the satellite originate from? The answer to that question is the effective layer, which is the highest altitude where there's appreciable water vapor. Above the effective layer, there is not enough water vapor to absorb the radiation emitted from below, nor is there enough emission of infrared radiation to be detected by the satellite. Any radiation emitted below the effective layer is simply absorbed by the water vapor above it.

In our previous discussion of traditional infrared imagery, you may not have realized that the radiation detected by the satellite comes from only one distinct level in the atmosphere at a given point. If the column is clear, the satellite detects the surface; if the column contains clouds, only the top-most layer of clouds is observed. The surfaces that emit the radiation that the satellite "sees" (the highest cloud tops, or the ground, in the case of traditional IR imagery) are the "effective layers." A universal property of an effective layer is that only emissions from this layer are observed by the satellite. For a visual, consider emissions at a representative wavelength useful for traditional infrared imagery (10.7 microns, for example) from a cloudy atmospheric column (toward the left on the schematic below).

Schematic comparing traditional IR imagery to water vapor imagery.

At traditional infrared wavelengths (like 10.7 microns), the satellite either sees radiation from the ground or the tops of clouds (left). The level from which the satellite observation is derived is called the effective layer. For water vapor imagery (right), the effective layer is defined as the highest level of appreciable water vapor whose radiation can be detected by the satellite. As with traditional IR imagery, all radiation emitted below the effective layer is absorbed and does not reach the satellite.
Credit: David Babb

In the column with clouds, radiation emitted from the top of the cloud reaches the satellite because no appreciable liquid water or ice exists above the cloud, giving the radiation a "free pass" to the satellite. Below the observed cloud layer (that is, the effective layer), any emissions from liquid water and ice are absorbed by the cloud layer that lies above them. Of course, if the air column is free of clouds, then the ground is the effective layer at longer infrared wavelengths, because the emissions that the satellite radiometer sees are coming from the ground (column farthest to the left in the graphic above).

Now let's carry this idea over to water vapor imagery (refer to the right portion of the above schematic). At the wavelengths used for water vapor imagery (between roughly 6 and 7 microns), water vapor very effectively absorbs and emits radiation. Another way to think about it is that at a wavelength like 6.7 microns (the sample wavelength used in the schematic), water vapor radiates just like liquid water and ice do at 10.7 microns. So, water vapor is an invisible gas at visible wavelengths and longer infrared wavelengths, but it "glows" at wavelengths around 6 to 7 microns.

The bottom line is that the effective layer is the source region for the radiation detected by the satellite. It's the highest layer of appreciable water vapor, and above the effective layer, there is not enough water vapor to generate a signal the satellite can observe. As with clouds in the traditional IR example, any radiation emitted below the effective layer is simply absorbed by the water vapor above it. Therefore, the satellite measures the radiation coming only from the effective layer, and as with traditional infrared imagery, this radiation intensity is converted to a temperature. In other words, water vapor imagery displays the temperature of the effective layer of water vapor, although not all images you'll find online will contain a specific color temperature scale. Commonly, water vapor imagery uses shades of gray, with warmer (lower) effective layers shown as dark and colder (higher) effective layers shown in white. Many sites add color enhancements to identify key temperatures, as with traditional infrared imagery, but color schemes vary from website to website.
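To make the "radiation intensity is converted to a temperature" step concrete, here's a minimal Python sketch that inverts the Planck function to recover a brightness temperature from a measured radiance. The 6.7-micron wavelength and the 240 K value are purely illustrative, not taken from an actual satellite product:

```python
import math

# Physical constants (SI units).
H = 6.626e-34  # Planck constant, J s
C = 2.998e8    # speed of light, m/s
K = 1.381e-23  # Boltzmann constant, J/K

def planck_radiance(temp_k, wavelength_m):
    """Blackbody spectral radiance (W m^-2 sr^-1 m^-1) at a given temperature."""
    c1 = 2.0 * H * C**2 / wavelength_m**5
    c2 = H * C / (wavelength_m * K)
    return c1 / math.expm1(c2 / temp_k)

def brightness_temperature(radiance, wavelength_m):
    """Invert the Planck function: convert a radiance back to a temperature (K)."""
    c1 = 2.0 * H * C**2 / wavelength_m**5
    c2 = H * C / (wavelength_m * K)
    return c2 / math.log1p(c1 / radiance)

# Round trip at 6.7 microns: a 240 K effective layer emits a radiance that
# converts right back to a 240 K brightness temperature.
wavelength = 6.7e-6
radiance = planck_radiance(240.0, wavelength)
print(brightness_temperature(radiance, wavelength))
```

Real satellite processing involves calibrated sensor response functions rather than a single wavelength, but the underlying idea is this same inversion.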

You may hear on television or see other online explanations that suggest water vapor imagery measures the water vapor content of the atmosphere, but that's not really true. We can infer certain things about the moisture profile of the atmosphere based on the temperature of the effective layer, but the satellite isn't actually measuring the amount of water vapor present in order to create water vapor images, and it tells us nothing about water vapor below the effective layer. So, what can we infer by knowing the temperature of the effective layer? Check out the short video (2:43) below:

PRESENTER: We have here a color-enhanced water vapor image, and we’re going to see how to interpret this image. First, let’s get our bearings with the color scale along the bottom. Lower temperatures are color coded in pinks, blues, greens, and purples. Meanwhile, higher temperatures are either in shades of gray or in orange or red for the highest temperatures on this particular image – color schemes can vary, though, from website to website.

If we make the same assumption we did with traditional infrared imagery – that temperature decreases with increasing height in the troposphere – then we can make meaning out of these temperatures. Basically, a colder effective layer means the effective layer is higher in the troposphere, and if we know the height of the effective layer, we can infer the depth of the dry air above it. With water vapor imagery, we can’t assume anything about what lies below the effective layer because all of the emissions from below are being absorbed by the effective layer.

So, let’s start with one of the warmer effective layers on this map – over eastern Texas in the dark gray shading. Our color scale tells us that the temperature of the effective layer is approaching -20 degrees Celsius. Using another tool, I looked up the temperature profile in this region at the time, and this temperature corresponded to a height a little above 20,000 feet, which is in the middle part of the troposphere. So, we can infer that the upper troposphere was dry here because all the meaningful water vapor was roughly 20,000 feet and below.

Now let’s pick a point here in eastern Kansas, where there’s more of a grayish white shading, which corresponds to about -35 degrees Celsius. Again, looking up the temperature profile, this temperature corresponded to a height of almost 30,000 feet, which is the upper troposphere, so we can conclude that there was more water vapor in the upper troposphere over eastern Kansas than there was over east Texas.

This area near the Kansas / Nebraska border has some of the lowest temperatures on the map – a very cold effective layer of around -60 degrees Celsius. On this date, that temperature was up near 40,000 feet, at the very top of the troposphere. Such a cold, high effective layer can only be caused by high ice clouds typical of the tops of cumulonimbus clouds. I should point out that at such low temperatures very little water exists in the vapor phase. However, ice crystals also have a fairly strong emission signature between 6 and 7 microns, so if you see such cold effective layers (say less than about -45 degrees Celsius or so), you are most likely looking at ice clouds (like cirrus, cirrostratus, or cumulonimbus tops) rather than at just water vapor. And, in fact in this case, this was an area of budding thunderstorms.

Credit: Penn State

In the video, did you notice that the highest effective layer we observed was at the top of the troposphere, near 40,000 feet, and was most likely emissions from ice crystals (ice crystals also emit very effectively between 6 and 7 microns) in the tops of cumulonimbus clouds? Meanwhile, the lowest effective layer that we observed was near 20,000 feet. That's not uncommon. Because emissions from water vapor near the earth's surface are absorbed by water vapor higher up, it's often impossible to detect features at very low altitudes. In other words, low clouds (stratus, stratocumulus, nimbostratus, and fair-weather cumulus) are rarely observable on water vapor imagery.
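If you're curious how a temperature can be translated into a rough height, the sketch below uses the standard-atmosphere lapse rate (6.5 degrees Celsius per kilometer, starting from 15 degrees Celsius at sea level). This is only a ballpark conversion; the heights quoted in the video came from looking up actual temperature profiles, which can differ noticeably from the standard atmosphere:

```python
# Ballpark height estimate from an effective-layer (or cloud-top) temperature,
# assuming the standard atmosphere: 15 degrees Celsius at sea level and a
# lapse rate of 6.5 degrees Celsius per kilometer. Real soundings vary.
M_TO_FT = 3.28084  # meters to feet

def height_from_temperature(temp_c, surface_temp_c=15.0, lapse_rate_c_per_km=6.5):
    """Return an approximate height (in feet) where the given temperature occurs."""
    height_km = (surface_temp_c - temp_c) / lapse_rate_c_per_km
    return height_km * 1000.0 * M_TO_FT

# A -60 degree Celsius effective layer works out to roughly 38,000 feet,
# near the top of the troposphere -- consistent with cumulonimbus tops.
print(round(height_from_temperature(-60)))
```

Try -20 and -35 degrees Celsius as well; the estimates land in the middle and upper troposphere, broadly in line with the video's sounding-based heights.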

To see what I mean, check out the pair of satellite images below (infrared on the left, water vapor on the right). The yellow dot represents Corpus Christi, Texas, which was shrouded in low clouds (gray shading on the infrared image -- check out the meteogram for Corpus Christi (opens in a new window)). Now examine the water vapor image. This image uses traditional grayscale, so the dark shading on the water vapor image indicates a warm effective layer located in the middle troposphere. However, we can't see even a hint of low clouds! In this case, the effective layer (located above the low clouds) absorbed all of the radiation emitted from below, rendering the low clouds undetectable on the water vapor image. For another example of low clouds not appearing on water vapor imagery, check out the Case Study section below.

A comparison of water vapor and IR images for a location along the Texas coast.

An infrared image (left) shows a blob of low clouds (in gray) over the western Gulf of Mexico and the Texas Seaboard. But there are seemingly no clouds evident in the water vapor image (right). The dark shading on the water vapor image indicates that the effective layer lies in the mid-troposphere (above the low clouds); therefore, radiation emitted by liquid water and water vapor in the tops of the low clouds was absorbed by water vapor higher up and never reached the satellite.
Credit: NOAA

How Low Can Water Vapor Imagery Go?

If you look back carefully at our familiar atmospheric absorption spectrum (opens in a new window), notice that absorption (and therefore emission) by water vapor isn't uniform across the range of wavelengths used for water vapor images (roughly 6 to 7 microns). Indeed, toward the higher end of the range, absorption is less than 100 percent, and using the different absorption and emission properties of water vapor near 7 microns allows satellites to "see" effective layers at different levels of the troposphere. Therefore, you'll sometimes find water vapor images labeled "upper-level," "mid-level," or "lower-level." While the altitude of the effective layer on any of these images varies based on the amount of water vapor in an air column (and how it's distributed), make sure that you're not fooled by these names. Even "lower-level" water vapor imagery typically detects effective layers between roughly 7,500 feet and 18,000 feet. In other words, most often, you're looking at emissions from effective layers of water vapor in the middle troposphere, even on so-called "lower-level" water vapor imagery.

Therefore, even "lower-level" water vapor imagery often can't detect surface water vapor or the presence of low clouds. For example, check out this side-by-side comparison of a visible image and lower-level water vapor image (opens in a new window). On this water vapor image, shades of yellow and orange mark regions with a warmer effective layer. Note that the lower-level water vapor image provides no indication of the presence of low clouds whatsoever (especially notable over Illinois and Indiana), because their tops were located below the effective layer at this time (their emissions were absorbed by water vapor higher up). The bottom line is that even on "lower-level" water vapor images, you cannot see near-surface water vapor, fog, or low clouds, unless the atmosphere is extremely dry higher up (which is only possible in very cold, dry Arctic air).

Smoke streaming away from an extinguished candle.

Much like smoke from an extinguished candle, water vapor imagery helps forecasters trace mid- or upper-level winds.

Now that we've discussed how to interpret water vapor imagery, what might we use it for? Forecasters most often use water vapor imagery to visualize upper-level circulations in the absence of clouds. This is because water vapor is transported horizontally by high-altitude winds and thus can act like a tracer, much like smoke from an extinguished candle (as in the photo on the right). Consider this enhanced IR satellite loop (opens in a new window) and focus your attention on the Southwest. Since there are no clouds present, we can't really tell how the air is moving over this region. Now, check out the corresponding loop of water vapor images (opens in a new window) and focus your attention on the same area. What do you see? Do you notice the ever-so-slight counter-clockwise circulation of the air off the California coast? Such upper-level circulations are in fact important, as we will learn later in this course. The lesson learned here is that we were able to identify this circulation only with the aid of water vapor imagery.

Water vapor imagery's ability to trace air motions ultimately allows forecasters to visualize upper-level winds, and computers can use water vapor imagery to approximate the entire upper-level wind field. Here's an example of such "satellite-derived winds (opens in a new window)" in the middle and upper atmosphere at 12Z on September 28, 2022 (toward the left side of the image, you can see Hurricane Ian about to make landfall in Florida). Having such observations over the data-sparse oceans is extremely valuable to forecasters, and much of this information gets put into computer models so that they better simulate the initial state of the atmosphere, which leads to better forecasts than if we didn't have these observations.

This concludes our look at the three most common types of satellite imagery. Before moving on to radar imagery, take a moment to review the key points about water vapor imagery as well as the Case Study below.

Water Vapor satellite imagery...

  • uses infrared radiation, but unlike traditional infrared imagery, it uses wavelengths at which water vapor strongly emits and absorbs infrared radiation.
  • displays the temperature of the effective layer of water vapor. Warm effective layers mean that the upper troposphere (and possibly part of the middle troposphere) is "dry" (it contains very little water vapor). By comparison, colder effective layers indicate a higher concentration of water vapor and/or ice clouds in the upper troposphere.
  • is not able to give any measure of the atmospheric water vapor content below the effective layer.
  • usually does not show the presence of low clouds or water vapor near the surface. These almost always lie below the effective layer.
  • is used to trace air motions in the middle and upper troposphere, even in areas with no clouds.

Note that you may find water vapor images that lack a color temperature scale, or that use a color scale with general references to moist and dry (opens in a new window). These references typically apply to the upper troposphere, since the "dry" areas have a lower (warmer) effective layer that resides somewhere in the middle troposphere.

Case Study...

You saw some cases above showing that water vapor imagery typically does not show the presence of low clouds or water vapor near the surface. Check out the short video below (2:03) for another example -- this time in an extremely moist low-level environment.

PRESENTER: It’s important to remember that water vapor imagery very rarely gives us insights about surface or near-surface moisture. For example, check out this water vapor image of North America and the western Atlantic Ocean on the left, and focus in on the Caribbean Sea. Note the general dark shading in the region, indicating a relatively warm effective layer and a dry upper atmosphere. The zoomed-in version on the right, focusing on Puerto Rico, Hispaniola, and much of the Caribbean Sea, gives us a better look at exactly where the dark shading is located. It certainly includes Puerto Rico and Hispaniola.

But, don’t let the dark shading cause you to conclude that the entire air column is dry. Adding surface station models to the water vapor image shows surface dew points of 72 degrees at these stations in the Dominican Republic and Puerto Rico. So, concentrations of water vapor near the surface are quite high – the low-level air mass is moist, but you would never know it from the appearance of the water vapor image because radiation from the large amounts of water vapor near the surface is absorbed by water vapor higher up in the middle regions of the atmosphere.

Furthermore, the station models indicate varying degrees of partly cloudy skies. The clouds that were present were fair-weather cumulus clouds – shallow puffy clouds that often dot the tropical sky. They usually have tops that are only several thousand feet above the ground, and radiation from the tops of these clouds was being absorbed by water vapor above, which cloaks these low-topped clouds from the satellite radiometer’s view.

Rare exceptions do occur, when water vapor from the lower troposphere does appear on water vapor images. That can happen when a column of air is extremely dry, and there’s not enough water vapor in the middle or upper troposphere to absorb emissions from water vapor near the surface or from the tops of low clouds. Typically, though, indications of water vapor near the surface or low-topped clouds do not appear on water vapor images.


Radar, Part 1: How Radar Works

Prioritize...

After reading this section, you should be able to describe how a radar works and which portion of the electromagnetic spectrum modern radars use. You should also be able to define the term "reflectivity" as well as its units. Furthermore, you should be able to explain how a radar locates a particular signal and describe concepts such as beam elevation and ground clutter. Finally, after completing the other sections detailing the various types of satellite imagery, you should be able to distinguish between radar imagery and satellite imagery (especially similarly colored infrared images).

Read...

The ancestry of modern radar can be traced all the way back to the late 1800s and German physicist Heinrich Hertz's work on radio waves (radar is actually an acronym for RAdio Detection And Ranging). History buffs may be interested in this tracing of the family tree of radar (opens in a new window), but the use of radar to detect precipitation began early in World War II. The United States, in a joint effort with Great Britain, advanced the design of radar by using microwaves, which, as you may recall, have a shorter wavelength than radio waves.

This shift to shorter wavelengths provided more precision in detecting and locating objects relative to the microwave transmitter. Without its designers realizing it, the shift from radio waves to microwaves paved the way for using radar to detect the presence and range of not only enemy aircraft, but also squadrons of airborne raindrops, ice pellets, hailstones, or snowflakes. Like generations on a family tree, the patriarch World War II radars, which were used to detect precipitation as a wartime afterthought, were the forefathers of the WSR-57 radars utilized by the National Weather Service (WSR stands for "Weather Surveillance Radar" and the "57" refers to 1957, the first year they became operational). This image, taken from a WSR-57 radar (opens in a new window), which looks rather crude by modern standards, shows the pattern of precipitation in Hurricane Carla near the Texas Coast on September 10, 1961. The yellow arrow in the northeast quadrant of the storm points to the location where a tornado occurred near Kaplan, Louisiana.

The next generation of radars, appropriately tagged with the acronym NEXRAD (for NEXt Generation RADar), became operational in 1988, and these radars are still in use today. Weather forecasters often refer to one of these radars as a WSR-88D. The "WSR" is short for "Weather Surveillance Radar," the "88" refers to the year this type of radar became operational, and the "D" stands for "Doppler," indicating the radar's capability of sensing horizontal wind speed and direction relative to the radar.

So, ultimately, how do radars work? Well, for starters, radar is an active remote sensor, unlike the satellite-based sensors we've just covered. While radiometers sit aboard satellites orbiting in space and passively accept the radiation that comes their way from Earth and the atmosphere, the antenna of a WSR-88D (opens in a new window), housed inside a dome, (opens in a new window) transmits pulses of microwaves at wavelengths near 10 centimeters. Once the radar transmits a pulse of microwaves, any airborne particle lying within the path of the transmitted microwaves (e.g. bugs, birds, raindrops, hailstones, snowflakes, ice pellets, etc.) scatters microwaves in all directions. Some of this microwave radiation is back-scattered or "reflected" back to the antenna, which "listens" for "echoes" of microwaves returning from airborne targets (see the animation below).

An animation showing a radar transmitting pulses of microwave energy that intercept airborne targets over the southern plains of the United States

Pulses of microwave energy transmitted by a Doppler radar intercept airborne "targets" (precipitation particles, birds, bugs, etc.). Some of the energy back-scatters to the radar receiver, where the strength of the return signal and the time it took the transmitted signal to return are then processed and used to create images of radar reflectivity.
Credit: David Babb

The radar's routine of transmitting a pulse of microwaves, listening for an echo, and then transmitting the next pulse happens faster than the blink of an eye. Indeed, the radar transmits and listens at least 1,000 times each second. But, like a friend who's a good listener, the radar spends most of its time listening for echoes of returning microwave energy. In one hour, the radar transmits pulses of microwaves for a grand total of only about seven seconds. It spends the other 59 minutes and 53 seconds listening for echoes from targets.
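If you're curious about the arithmetic behind those numbers, here's a quick sketch in Python. The pulse width and pulse rate used below are representative assumed values (not official WSR-88D specifications), but they reproduce the roughly seven seconds of transmitting per hour described above:

```python
# Rough duty-cycle arithmetic for a WSR-88D-style radar.
# The pulse width and pulse repetition frequency are assumed,
# representative values, chosen only for illustration.

PULSE_WIDTH_S = 1.57e-6   # assumed duration of one transmitted pulse (s)
PULSES_PER_SECOND = 1300  # assumed pulse repetition frequency (Hz)

transmit_time_per_hour = PULSE_WIDTH_S * PULSES_PER_SECOND * 3600
listen_time_per_hour = 3600 - transmit_time_per_hour

print(f"Transmitting: {transmit_time_per_hour:.1f} s per hour")  # about 7.3 s
print(f"Listening:    {listen_time_per_hour:.1f} s per hour")
```

Even at more than a thousand pulses per second, the pulses are so brief that the radar spends well over 99.9% of its time listening.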

The radar's antenna has to have a really "good ear." Indeed, by the time a radar pulse scatters back to the radar antenna, it's only a relative whisper: the returning power is typically less than a few milliwatts, even though the pulse was sent out with a peak power of 100-500 kilowatts. These units of power are a bit cumbersome to work with, so meteorologists convert the power of the returning radar signal to a logarithmic measure of echo intensity that's appropriately called reflectivity, with units of dBZ (short for "decibels of Z"; check out this Wikipedia article (opens in a new window) if you want to learn more about dBZ). Without getting into too much detail here, the bottom line is that the value of dBZ increases as the strength (power) of the signal returning to the radar increases.

To pinpoint the position of an echo relative to the radar site (within the circular range of the radar), the target's linear distance and compass bearing (opens in a new window) from the radar must be determined. First, realize that the transmitted and returning signals travel at the speed of light, so by measuring the time of the "round trip" of the radar signal (from the time of transmission to the time it returns), the distance that a given target lies from the radar can be determined. For example, it takes less than two milliseconds for microwaves to race out a distance of 230 kilometers (143 miles) and zip back to the radar antenna (143 miles represents the standard range of radars operated by the National Weather Service, although they can "see" farther than that with less detail).
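The range calculation itself is simple enough to sketch in a few lines of Python. The only physics involved is that the signal covers the distance to the target twice (out and back) at the speed of light:

```python
# Relate a target's range to the round-trip travel time of a radar pulse.
C = 299_792_458.0  # speed of light, m/s

def range_from_delay(delay_s: float) -> float:
    """Distance to the target: the signal covers the range twice."""
    return C * delay_s / 2

def delay_from_range(range_m: float) -> float:
    """Round-trip time for an echo from a target at the given range."""
    return 2 * range_m / C

# Round trip to the 230 km standard range:
t = delay_from_range(230_000)
print(f"{t * 1000:.2f} ms")   # about 1.53 ms -- under two milliseconds
```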

A representative image of radar reflectivity.

A representative image of radar reflectivity indicates the standard range (230 kilometers) of each of the single-site weather radars operated by the National Weather Service. Imagine the purplish line sweeping around and completing a circle such that each single-site image of radar reflectivity displays a "circle of echoes." The data for this image came from the radar at Oklahoma City, Oklahoma at 0045Z on May 7, 2024.
Credit: NCAR

How does the radar know the direction, or bearing, of the target relative to the radar? In order to "see" in all directions, the radar antenna rotates a full 360 degrees at a speed usually varying from 10 degrees to as much as 70 degrees per second. A computer keeps track of the direction that the antenna is pointing at all times, so when a signal is received, the computer calculates the reflectivity, figures out the angle and distance from the radar site, and plots a data point at the proper location on the map. Believe it or not, all of this happens in just a fraction of a second!
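Converting an echo's range and compass bearing into a position on the map is just trigonometry. Here's a minimal sketch in Python (the function name is mine, purely for illustration). Note that a compass bearing measures clockwise from north, unlike the counterclockwise-from-east angles you may remember from math class:

```python
import math

def echo_position(range_km: float, bearing_deg: float) -> tuple:
    """Map offset (east_km, north_km) of an echo from the radar site,
    given its range and compass bearing (0 = north, 90 = east)."""
    az = math.radians(bearing_deg)
    return (range_km * math.sin(az), range_km * math.cos(az))

east, north = echo_position(100, 90)   # an echo 100 km due east
print(east, north)                     # (100.0, ~0.0)
```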

To wrap up our discussion of how radar works, we need to talk about how high in the atmosphere radar signals come from. A common misconception is that all radar signals come from rain (and other targets) near the ground, but this is incorrect because the radar typically does not transmit its signal parallel to the ground. Indeed, the standard angle of elevation is just 0.5 degrees above a horizontal line through the radar's antenna (see the schematic below); however, some NEXRAD units can scan at even smaller angles of elevation if local terrain allows. Either way, the radar "beam" (signal) is initially not much higher above the ground than the radar itself, but with increasing distance from the radar, the beam gets progressively higher above the ground (and its width increases). Check out the diagram below. At a 0.5 degree scanning angle and at a distance of 120 km, the radar beam is over 1 km above the surface (nearly 3,300 ft). Near the maximum range of 230 km, the radar beam is at twice that altitude.

Graphic to show the height and width of a radar and how they increase with increasing distance from the radar site. See text for more information.

The height and width of a radar "beam" increase with increasing distance from a given radar site (assuming the Earth is flat). For a NEXRAD base elevation scan of 0.5 degrees, a close approximation for the variation in the height of beam (above ground) is a rise of one kilometer for every 120 kilometers in horizontal distance from the radar site.
Credit: David Babb

For simplicity, the calculations in the diagram above assume that the Earth is flat; when the curvature of the Earth is taken into account, the altitude of the radar beam at greater distances from the radar becomes even higher than those calculations suggest! What are the impacts of this increasing elevation with distance from the radar? First, you should realize that radar imagery often shows reflectivity from precipitation targets within a cloud, and not necessarily what is falling out of the cloud. If you don't realize this fact, you can sometimes get confused when looking at radar imagery. For example, when light precipitation falls into a layer of dry air below, it often evaporates entirely before reaching the ground. Yet, it may look like it's precipitating on a radar image because the radar "sees" the precipitation at the level of the cloud.
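If you'd like to see the effect of Earth's curvature for yourself, here's a short Python sketch using the standard "4/3 effective Earth radius" beam-height approximation from radar meteorology textbooks (it accounts for both the curvature of the Earth and the typical bending of the beam by the atmosphere):

```python
import math

EARTH_RADIUS_KM = 6371.0
R_EFF = (4.0 / 3.0) * EARTH_RADIUS_KM  # effective radius for a standard atmosphere

def beam_height_km(range_km: float, elev_deg: float) -> float:
    """Beam centerline height above the radar (km), via the common
    4/3-effective-earth-radius approximation."""
    e = math.radians(elev_deg)
    return math.sqrt(range_km**2 + R_EFF**2
                     + 2 * range_km * R_EFF * math.sin(e)) - R_EFF

flat = 120 * math.sin(math.radians(0.5))   # flat-earth estimate, km
curved = beam_height_km(120, 0.5)          # with curvature and refraction
print(f"flat-earth: {flat:.2f} km, with curvature: {curved:.2f} km")
```

At 120 km from the radar, curvature lifts the 0.5-degree beam to nearly twice the flat-earth estimate of roughly one kilometer, and the gap keeps growing toward the 230 km maximum range.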

Secondly, you should realize that radar signals are not typically obstructed by geography at distances more than, say, 25 miles from the radar (the beam is more than 1,100 feet off the ground at that point). The only exception to this rule is that there are certain locations, particularly in the western United States, where the tall mountains of the Rockies can block portions of the radar beam. Check out this image showing the coverage of the NEXRAD radars (opens in a new window) for the U.S. Note how some of the "circles of echoes" in the west look like somebody took a bite out of them. The irregular radar coverage over the western U.S. is a direct result of the mountainous terrain blocking some of the radar "beams."

At most sites, however, less than 25 miles from the radar site, a collection of stationary targets called "ground clutter" (buildings, hills, mountains, etc.) frequently intercepts and back-scatters microwaves to the radar. Computers routinely filter out the common ground clutter so that radar images don't give the impression that precipitation is always occurring around the radar site. To do this, forecasters use radar images from clear days, which pinpoint the surrounding buildings and hills and provide a precipitation-free template for filtering out regular ground clutter. Still, you'll sometimes find ground clutter on radar images. For example, note the stationary echoes on this radar loop (opens in a new window) from the NEXRAD near State College, PA. While areas of actual rain showers move during the loop, the stationary echoes come from a wind farm (opens in a new window) atop one of the ridges of Central Pennsylvania.

So, now that you know how radar works, what determines the strength of the returning radar signal? And, how do you interpret the rainbow of colors on radar images? We'll cover these questions in the next section. Before continuing, however, please review these key facts about radar imagery.

Radar imagery...

  • originates from ground-based sensors (not from satellites) that actively emit pulses of radiation.
  • uses the microwave part of the electromagnetic spectrum (not the infrared).
  • usually displays the variable "reflectivity" (units dBZ) which is the measure of the amount of signal returned to the radar from the original transmitted pulse.
  • can help forecasters identify areas of precipitation.
  • cannot tell you anything about cloud top temperature, cloud height, or cloud thickness.

Explore Further...

There are many flavors of radar data available on the Internet (as well as on your mobile devices). Despite this variety, you should understand that the "raw" data all primarily comes from the same place -- the network of NEXRAD radars operated by the National Weather Service. Here are some websites to get you started...

NOAA/National Weather Service: National Radar Mosaic (opens in a new window)

NCAR Realtime Weather: Single-site, National Mosaic and 5-day archive (opens in a new window)

College of DuPage Radar: Includes both a national mosaic (opens in a new window), and single-site images. In the menu on the left, you can switch from the national mosaic to single-site radars via "Dual Pol NEXRAD". The single-site interface allows you to choose your location and product, even including scans from other elevation angles. Many of the products are beyond the scope of this course, but you're welcome to explore.

NEXRAD Data Inventory Search: If you're a real "data-hound" and want access to the full suite of archived radar data (opens in a new window), this site is for you! This site is not for the technical faint of heart, but you can retrieve all of the Level-2 and Level-3 data produced by the NEXRAD system. Needless to say, much of the data is beyond the scope of this course, but you're welcome to play with it. Note that you will also need to download/install NOAA's Weather and Climate Toolkit (opens in a new window) to view the files.


Radar, Part 2: Interpreting Radar Images


Prioritize...

At the completion of this section, you should be able to list and describe the three precipitation factors that affect radar reflectivity, and use them to interpret radar images. You should be able to explain why hail causes very large reflectivity values while snow tends to be underestimated by radar. You should also be able to explain the difference between "base reflectivity" and "composite reflectivity."

Read...

Now that you know how a radar works, we need to discuss how to properly interpret the returned radar signal. As with any remote sensing tool, we have to understand what factors influence the amount of radiation that is received by the instrument. As you recall, radar works via transmitted and returned microwave energy. The radar transmits a burst of microwaves and when this energy strikes an object, the energy is scattered in all directions. Some of that scattered energy returns to the radar and this returned energy is then converted to reflectivity (in dBZ). Ultimately, the intensity of the return echo (and therefore, reflectivity) depends on three main factors inside a volume of air probed by the radar "beam":

  • the size of the targets
  • the number of targets
  • the composition of the targets (raindrops, snowflakes, ice pellets, etc.)

Allow me to elaborate a bit on each of these factors impacting radar reflectivity. For starters, the size of the precipitation targets always matters: the larger the targets (raindrops, snowflakes, etc.), the higher the reflectivity. By way of example, consider that raindrops, by virtue of their larger size, have a much higher radar reflectivity than drizzle drops (the tiny drops of water that appear to be more of a mist than rain). Secondly, the power returning from a sample volume of air with a large number of raindrops is greater than the power returning from an equal sample volume containing fewer raindrops (assuming, of course, that both sample volumes have the same sized drops). The saying that "there's power in numbers" certainly applies to radar imagery!
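These first two factors can be captured in a single quantity. At the radar's roughly 10-centimeter wavelength, the Rayleigh scattering approximation applies, and the reflectivity factor Z is the sum of each drop's diameter raised to the sixth power per unit volume (in mm^6/m^3), with dBZ being 10 times its base-10 logarithm. The drop populations below are made-up illustrative values, but the sketch shows why a modest increase in drop size overwhelms everything else:

```python
import math

def reflectivity_dbz(drop_diameters_mm, volume_m3=1.0):
    """Z = sum of D^6 over the sample volume (Rayleigh approximation);
    dBZ = 10 * log10(Z in mm^6 / m^3)."""
    z = sum(d**6 for d in drop_diameters_mm) / volume_m3
    return 10 * math.log10(z)

drizzle = [0.3] * 1000   # a thousand tiny drizzle drops per cubic meter
rain = [2.0] * 1000      # the same number of much larger raindrops

print(f"{reflectivity_dbz(drizzle):.1f} dBZ")   # about -1.4 dBZ
print(f"{reflectivity_dbz(rain):.1f} dBZ")      # about 48 dBZ
```

Because of that sixth-power dependence, raindrops only about seven times wider than drizzle drops return a signal nearly fifty decibels stronger, even at identical concentrations.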

To see how the size and number of targets impact reflectivity, consider this example. Thunderstorms often show high reflectivity on radar images, with vivid colors like deep reds marking areas within the storm containing a large number of sizable raindrops. A large number of sizable raindrops falling from a cumulonimbus cloud also typically leads to high rainfall rates at the ground. Thus, high radar reflectivities are usually associated with heavy rain.

Radar reflectivity image from 1351Z on June 1, 2012.

The line of high reflectivity values approaching State College, PA denotes large numbers of large rain drops (often characteristic of thunderstorms).
Credit: NOAA

The radar image above shows a line of strong thunderstorms (called a "squall line") approaching State College, Pennsylvania from the northwest, with radar reflectivity exceeding 55 dBZ in some areas. Such high reflectivities are typically associated with very heavy rainfall, but inferring specific rainfall rates from radar images can be tricky business. A given reflectivity can translate to different rainfall rates, depending on, for example, whether there are a lot of small drops versus fewer large drops.
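To get a feel for the numbers, forecasters often fall back on an empirical "Z-R relationship." The classic Marshall-Palmer version, Z = 200 * R^1.6, is only one of several relations used operationally, which is exactly why a given reflectivity can map to different rainfall rates. A quick Python sketch:

```python
def rain_rate_mm_per_hr(dbz: float, a: float = 200.0, b: float = 1.6) -> float:
    """Invert a Z-R power law, Z = a * R**b. The default a and b are the
    classic Marshall-Palmer values; operational radars use a variety of
    Z-R relations depending on the precipitation regime."""
    z = 10 ** (dbz / 10)        # convert dBZ back to linear Z (mm^6 / m^3)
    return (z / a) ** (1 / b)

print(f"{rain_rate_mm_per_hr(55):.0f} mm/hr")   # roughly 100 mm/hr
print(f"{rain_rate_mm_per_hr(20):.1f} mm/hr")   # light rain, under 1 mm/hr
```

Under this particular relation, the 55 dBZ cores in the squall line above imply rainfall rates on the order of 100 millimeters (about 4 inches) per hour, though the true rate depends on the actual mix of drop sizes.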

The presence of large hail (opens in a new window) in thunderstorms can further complicate the issue of inferring rainfall rates from radar reflectivity. Typically, radar reflectivity from such a thunderstorm is greatest in the middle levels of the storm because large hailstones have started to melt as they fall earthward into air with temperatures greater than 0 degrees Celsius (the melting point of ice). Covered with a film of melt-water, these large hailstones look like giant raindrops to the radar and can have reflectivity values higher than 70 dBZ. The bottom line is that higher reflectivity usually corresponds to higher rainfall rates, but the connection is not always neat and tidy.

Okay, let's move on to the final controller of radar reflectivity -- composition. The intensity of the return signal from raindrops is approximately five times greater than the return from snowflakes of comparable size. Snowflakes have inherently low reflectivity compared to raindrops, so it's easy to underestimate the areal coverage and intensity of snowstorms if you're unaware of this fact. It might be snowing quite heavily, yet radar reflectivity from the heavy snow might be less than from a nearby area of rain (even if the rainfall isn't as heavy) because the return signal from raindrops is more intense.
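That factor-of-five difference in returned power is easy to express on the logarithmic dBZ scale:

```python
import math

# A snowflake returning one-fifth the power of a comparably sized
# raindrop reads lower on the dBZ scale by:
power_ratio = 5.0
db_difference = 10 * math.log10(power_ratio)
print(f"{db_difference:.0f} dB")   # about 7 dB
```

So, all else being equal, a snow echo registers roughly 7 dBZ weaker than a rain echo from particles of comparable size -- enough to shift the display several color categories toward "lighter" precipitation.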

There's another way that moderate to heavy snow falling within the range of the radar can be camouflaged. Indeed, precipitating stratiform clouds are often shallow (not very tall), which means that the radar beam will sometimes overshoot snow-bearing clouds (opens in a new window) located relatively far away from the radar site. To see what I mean, check out the short video (1:40) below.

PRESENTER: Let’s look at an example of how radar imagery can sometimes be misleading when snow is falling. This is a reflectivity image from the radar located in Cleveland, Ohio, and from this image, it might be tempting to think that heavy snow might be limited to here east of Cleveland where reflectivities are around 35 dBZ. At greater distances from the radar, reflectivity decreases to less than 10 dBZ at places like Toledo and Findlay.

But, because the precipitating stratiform clouds that produce snow are often shallow, the radar beam, which is increasing in elevation as it gets farther from the radar site, can sometimes overshoot snow-bearing clouds partially or entirely when they are located relatively far from the radar site. In other words, the radar scans the very tops of snow bearing clouds, where there are relatively few precipitation targets, or it misses them entirely, leading to either low reflectivity or no reflectivity at all.

Our radar image was from 12Z, and the meteogram from Findlay, Ohio showed that 12Z fell during a period of heavy snow in Findlay. So, it was snowing heavily at the time of our radar image.

Yet, our radar image showed reflectivity of less than 10 dBZ at Findlay. Findlay is located about 100 miles from the radar’s location in Cleveland, so it was far enough away that the radar beam was mostly overshooting the snow-bearing clouds, leading to conditions at the ground – heavy snow – that didn’t match our expectations from radar reflectivity.

Credit: Penn State

The fact that radar sometimes overshoots snow-bearing clouds can really challenge forecasters (sometimes with deadly consequences), as this short segment from Penn State's Weather World program (opens in a new window) illustrates (check it out if you're interested). To further complicate interpreting radar images, I point out that partially melted snowflakes present a completely different problem to weather forecasters during winter. When snowflakes melt, they melt at their edges first. With water distributed along the edges of the "arms" of melting flakes, partially melted snowflakes appear like large raindrops to the radar. Thus, partially melted snowflakes have unexpectedly high reflectivity. For much the same reason, wet or melting ice pellets (sleet) also have a relatively high reflectivity.

Therefore, during winter, radar images sometimes show a blob of high reflectivity embedded in an area of generally lower reflectivity. Often, this renegade echo of high reflectivity is partially melted snow or sleet, and it's a good idea to check surface observations to see whether the relatively intense echo is indeed partially melted snow or sleet, or an area of moderate to heavy rain. For example, check out this band of high reflectivity just south of St. Louis, Missouri (opens in a new window). Nearby Scott Air Force Base in Belleville, Illinois ("BLV" on the map) was in the midst of this band, and at the time of the radar image the Belleville meteogram (opens in a new window) showed a rather unusual current weather symbol, representing "snow pellets" (partially melted snowflakes that have refrozen). The bottom line is that forecasters must be careful interpreting radar images when snow might be falling.

Base Versus Composite Reflectivity

For a powerful thunderstorm that erupts fairly close to the radar, a scan at 0.5 degrees would likely intercept the storm below the level where the most intense reflectivity occurs. Such a single, shallow scan falls way short of painting a proper picture of the storm's potential. As a routine counter-measure, the radar tilts upward at increasingly large angles of elevation, scanning the entire thunderstorm like a diagnostic, full-body MRI.

The radar can tilt upward to angles of elevation as large as 19.5 degrees, as indicated in the figure below, which shows the elevation scans in a common "general surveillance" radar mode. But, the series of elevation scans shown below isn't the only option that National Weather Service NEXRAD units have; they are programmed with multiple scanning strategies to give forecasters the most useful data depending on the weather situation. A complete scan like the one shown below takes about 6 minutes, which means that under normal circumstances, forecasters must wait about 6 minutes to get a look at the newest radar scan at each elevation. But, during severe weather, forecasters desire more frequent low-elevation scans to better see what's happening in the lower parts of thunderstorms. So, the radar can be switched into "SAILS" mode (opens in a new window), which causes the radar to interrupt its scanning progression to give more low-level scans, providing forecasters with more frequent updates on the lowest elevation scan.

The elevation scans of the WSR-88D (NEXRAD) radar. More explanation in text.

The elevation scans of the WSR-88D (NEXRAD) in general surveillance mode.
Credit: NOAA's Radar Operations Center

On the image above showing how the radar can tilt upward at increasingly large angles, the numbers at the top represent the standard angles included as part of the general surveillance scan. Also note the colorful "beams," which represent the approximate width and length of the radar scan as a function of distance from the radar site. Again, note how wide the "beam" becomes at great distances from the radar.

Meteorologists describe the radar reflectivity derived from a single scan as base reflectivity, and the most common base reflectivity corresponds to the scanning angle of 0.5 degrees. The National Weather Service also provides images of composite reflectivity, which represents the highest reflectivity gleaned from all of the individual scan angles.
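Conceptually, building composite reflectivity from the individual elevation scans is just a cell-by-cell maximum. Here's a toy Python sketch using made-up one-dimensional "grids" (real radar data live on polar grids with vastly more cells, of course):

```python
# Toy reflectivity values (dBZ) for the same four map cells
# seen at three different elevation scans:
scan_0p5 = [12, 35, 48, 20]   # base reflectivity, 0.5-degree tilt
scan_1p5 = [10, 40, 55, 15]   # 1.5-degree tilt
scan_2p4 = [ 5, 30, 60,  8]   # 2.4-degree tilt

# Composite reflectivity: the highest value found at any tilt, per cell.
composite = [max(cells) for cells in zip(scan_0p5, scan_1p5, scan_2p4)]
print(composite)   # [12, 40, 60, 20]
```

Notice that every composite value is at least as high as the base (0.5-degree) value for the same cell, which is why composite images tend to look more intense than base reflectivity images.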

To see how one scan angle can have a higher reflectivity than another, consider the case of a severe thunderstorm. The storm's updraft, which is a fast, rising current of moist air that sustains the thunderstorm, is usually strong enough (25 meters per second or faster) to suspend a large amount of rain (and hail) aloft (opens in a new window). Meteorologists call the suspension of precipitation high in a thunderstorm precipitation loading. At this stage of the storm, the reflectivity high in the cumulonimbus cloud is much greater than the reflectivity lower in the cloud. So, a radar image created from composite reflectivity will likely display the higher dBZ level (more intense colors) than a radar image of base reflectivity. Eventually, of course, the rain intensity at lower altitudes (and the surface) will increase as rain and hail fall from the cloud (this will occur once the updraft can no longer support the weight of suspended water and ice).

For example, check out the image below. This graphic shows radar reflectivity plots of a garden-variety thunderstorm at four different scan angles. First, note that the core radar reflectivity on the upper-right panel (scan angle of 1.5 degrees) was higher than the core base reflectivity at 0.5 degrees (upper-left panel). Comparing the two images, we conclude that the heaviest precipitation was higher up in the thunderstorm at this time.

The radar reflectivity of a garden-variety thunderstorm at four different scan angles. More explanation in text.

The radar reflectivity of a garden-variety thunderstorm at four different scan angles. The upper-left panel shows the radar reflectivity at a scan angle of 0.5 degrees, the upper-right displays the radar reflectivity at a scan angle of 1.5 degrees, while the lower-left and lower-right panels correspond to scan angles of 2.4 degrees and 3.4 degrees respectively.
Credit: Used by permission, Gibson Ridge Software / National Weather Service

Note that the radar reflectivity markedly decreased at a scan angle of 2.4 degrees (lower-left panel). When the scan angle was set to 3.4 degrees (lower-right panel), the reflectivity all but vanished, indicating that there weren't many precipitation particles near the top of the storm.

Here's one last example of how composite reflectivity can be higher than base reflectivity. On August 30, 2023, Hurricane Idalia (opens in a new window) made landfall in northern Florida. The 1206Z composite reflectivity (on the left below) generally shows larger areas of 35 dBZ or more (yellows, oranges, and reds) compared to the corresponding base reflectivity image on the right. Furthermore, the base reflectivity image showed an area with no reflectivity within the storm's circulation southeast of the radar site, while composite reflectivity was as high as 20 to 30 dBZ in the same area (which the radar had detected during a higher elevation scan).

The composite and base reflectivities of Hurricane Idalia just after landfall in 2023.

(Left) The composite reflectivity of Hurricane Idalia just after landfall at 1206Z on August 30, 2023 from the radar in Tallahassee, Florida. (Right) The base reflectivity at the same time. Note the much higher composite reflectivity in the area of the arrowhead.
Credit: National Weather Service

Composite reflectivity may not be representative of current precipitation rates at the ground, but it can show the potential if the precipitation causing the highest reflectivity (often well up into the cloud) falls to the surface. You might think that this discussion is too much "inside baseball," but composite reflectivity is the mode of choice on regional or national mosaics (opens in a new window) that you frequently see on the Web, in mobile apps, and on television. So, the bottom line is to make sure that you know which type of radar product you are looking at before performing any kind of analysis.

Now you know the basics of interpreting radar imagery, and we're just about ready to wrap-up our lesson. Before you finish up, however, test your knowledge of basic concepts from this section in the Quiz Yourself section below. You may also be interested in the Explore Further section below, where you can find out more about some common radar products (precipitation-type images and satellite-radar composites) that you'll commonly encounter on television and online.

Quiz Yourself...

Feeling confident in your basic knowledge of radar interpretation? Take this quiz to see how you do. You'll need to apply these concepts on various assignments.

Explore Further...

Commonly, regional or national radar mosaics visually distinguish areas of rain from snow and mixed precipitation (any combination of snow, sleet, freezing rain, and/or rain) using different color keys. Note that rain, mixed precipitation, and snow each has its own color key in the regional radar mosaic below. While the exact methods for creating such images vary, they all start with radar reflectivity and often incorporate other radar products (opens in a new window) along with surface temperature and other lower tropospheric observations to give a "best guess" of precipitation type.

A radar image showing color-coded precipitation type. Georgetown, DE is highlighted on the map, located in the pink area of mixed precipitation.

A regional radar mosaic with color-coded precipitation type. Georgetown, DE is located within the pink stripe marking mixed precipitation.
Credit: WSI Corporation

The methods used to formulate this "best guess" for precipitation type aren't perfect, and not surprisingly sometimes the actual observed weather doesn't match the precipitation type shown on the radar image. For example, I've marked Georgetown, Delaware on the map, located within the pink stripe on the radar image, indicating that mixed precipitation was falling. But, the surface observations tell a different story. The Georgetown meteogram (opens in a new window) shows that light rain was falling at 15Z (the time of the radar image above).

Another common product is a satellite-radar composite, or "sat-rad image" (see image below). For the record, sat-rad images are superimpositions of radar imagery onto satellite images. Before using or interpreting this type of image, make sure that you're aware of a few key things. First, the satellite and radar data come from two completely different sources, even though that might not be obvious from the "look" of the image. As you know, WSR-88D radars are located on the ground (not aboard geostationary satellites), which has some major implications for data coverage.

An example of a sat-rad image, which is radar imagery superimposed onto an infrared satellite image. The US is shown here.

The 1651Z infrared satellite image and the 1645Z radar mosaic on May 8, 2024.
Credit: Plymouth State Weather Center

Recall the range of the national array (opens in a new window) of WSR-88D radars? It does not extend very far out into the oceans, nor very far north into Canada, nor very far south into Mexico. Thus, to a novice user, sat-rad images can give the impression that some clouds are not producing precipitation when they really are. For example, note the area of clouds and precipitation over New England. According to this radar mosaic, the radar echoes ended very close to the New England Coast. Was it raining farther offshore? We can't tell from this image alone because anything farther away was beyond the range of U.S. radars, but this close-up sat-rad loop of the Northeast (opens in a new window) shows radar echoes suddenly disappearing offshore at seemingly circular boundaries in some cases -- a clear sign that the radar echoes weren't telling the full story because of the limited range of land-based radar.

These images can really be misleading to someone who's not fully aware of what they show, so make sure to use them with care!
