Why do forecasters try to predict the weather?

Each day the NWS takes in thousands of observations from surface stations, ships, weather buoys, aircraft, and balloons, plus millions more from satellites.

Other data, in countless bytes, arrive from instrument networks abroad. Yet all this isn't enough. The computer models that are the mainstay of current forecasting require even more data, in tidier form: readings from points on a uniform grid extending around the globe and up into the atmosphere, updated every hour or, better, every minute. That's unattainable in the real world. Satellites have trouble seeing through thick clouds and can't map winds in detail. Weather stations, balloons, aircraft, and ships aren't evenly spaced around the globe, and in many areas—vast swaths of poorer continents such as Africa—ground readings are sparse.

To fill out those observations and create a perfect starting point, meteorologists take their best recent picture of the atmosphere and project it forward in time. The result is a "forecast" of the present, which helps fill the data gaps, completing a snapshot of the current weather at every point on the imaginary global grid. It's generated with the same computer tools that allow meteorologists to look into the future—an approach called numerical modeling.
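
To make the gap-filling idea concrete, here is a minimal sketch in Python, assuming a toy one-dimensional grid and made-up error weights; it is not the NWS's actual analysis scheme, and the function name and numbers are invented for illustration.

```python
# A minimal sketch of filling data gaps: blend the model's "forecast of
# the present" (the background) with the few observations that exist.
# Real assimilation systems are far more sophisticated; every name and
# number here is invented.
import numpy as np

def analyze(background, obs, obs_error=1.0, bg_error=2.0):
    """Blend background grid values with sparse observations.

    background: model-predicted temperatures at every grid point (deg C)
    obs:        {grid index: observed value} at the points that have data
    """
    # Trust observations more where the background error is larger.
    weight = bg_error**2 / (bg_error**2 + obs_error**2)
    analysis = background.copy()
    for i, value in obs.items():
        analysis[i] += weight * (value - background[i])
    return analysis

background = np.array([15.0, 15.5, 16.0, 16.5, 17.0])  # model's guess everywhere
observations = {0: 14.2, 3: 17.1}                      # only two stations reported
print(analyze(background, observations))               # completed snapshot of "now"
```

Grid points with no nearby observation simply keep the model's guess, which is exactly the role the "forecast" of the present plays.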

The time machine is a computer model of the atmosphere, built not from air and water vapor but from data and equations. The equations describe the key processes that govern weather, such as airflow, evaporation, Earth's rotation, and the release of heat as water condenses or freezes.

When meteorologists plug in data on atmospheric conditions, then run the equations, the model predicts how the atmosphere will evolve. It lets forecasters ask: If this is what the atmosphere is doing now, what will it be doing in one minute? And then again one minute after that? At each step, the model computes weather conditions at all points on that imaginary global grid.

The process lets meteorologists generate a full picture of current conditions, then carry it forward in time to create a forecast.
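
The baby-stepping itself can be sketched with a single toy equation, assuming wind simply carries temperature along a looped one-dimensional grid; a real model solves many coupled equations on a three-dimensional global grid, and all numbers below are invented.

```python
# A drastically simplified "numerical model": advance grid temperatures
# in one-minute steps, letting a constant wind carry warm and cold air
# from one grid point to the next (upwind advection).
import numpy as np

def step(temps, wind=10.0, dx=1000.0, dt=60.0):
    """One time step: each point inherits air from the point upwind of it."""
    upwind = np.roll(temps, 1)  # looped domain: the last point feeds the first
    return temps - wind * dt / dx * (temps - upwind)

temps = np.array([10.0, 12.0, 20.0, 14.0, 11.0])  # "current conditions" on the grid
for _ in range(60):                               # sixty one-minute baby steps
    temps = step(temps)
print(temps)                                      # the model's picture one hour ahead
```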

Some models push as far as 16 days into the future, though by that point the accuracy is so diluted that about all they can say is whether the temperature will be above or below the normal monthly average. Even baby-stepping into the future takes brute-force computing.

But early computers couldn't run a model fast enough for useful predictions. NCEP's computer center in Gaithersburg, Maryland, now crunches numbers in one of the most powerful weather-forecasting engines in the world, a supercomputer called Blue. A backup, housed elsewhere, is called White, and researchers refine their models on a third machine called Red. Resembling a warehouse filled with high-tech filing cabinets, the new machine isn't running at full speed yet.

Its capacity will grow further when it is. Yet even the most sophisticated computer models drastically simplify the real atmosphere. Most track conditions at points tens of miles apart, even though actual weather can vary widely within only a couple of miles—the size of a thunderstorm.

The models also have biases: Some do better with hurricanes, while others are better at predicting winter weather, such as ice storms. Forecasters try to compensate by consulting different models, like patients getting second opinions. All that means extra number crunching. Something called the butterfly effect adds to the burden. In the 1960s Ed Lorenz, a meteorologist at MIT, used the atmosphere to illustrate chaos theory—the idea that tiny fluctuations can, over time, have outsize effects.

He suggested that the gentlest breeze from a butterfly closing its wings on one side of the planet could cause a storm on the other. It's an exaggeration, with a measure of truth: Factors so small that they get lost—in measurement gaps or errors, or in the models' shortcuts—can make a major difference in the weather.

A small wind shift, for example, might send a storm veering miles from its predicted course. In winter "a difference of a fraction of a degree can make all the difference in the world as to whether you get all rain, or all snow, or freezing rain, or sleet," Hoke says. To address the butterfly effect, forecasters rely on a strategy called ensemble forecasting. Starting with one basic set of initial conditions, they run multiple forecasts—as many as 50 at the European Centre for Medium-Range Weather Forecasts, the world leader in ensemble forecasting.

Each begins with a slightly different "perturbation"—a change in wind speed, a degree in temperature, a percentage point in humidity. The forecast becomes statistical: In, say, 43 of the 50 computer runs snow develops, while in seven it rains. That's why so many forecasts use words such as "possible" or "likely," and speak of the percentage probability of precipitation.
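
A toy version of that counting procedure might look like the following, with an invented stand-in for the model and an arbitrary snow rule (a run that ends below freezing counts as snow):

```python
# Toy ensemble: rerun a stand-in "model" from slightly perturbed starting
# conditions and report the outcome statistically.
import random

def toy_model(start_temp_c):
    """Hypothetical stand-in for a full model run; returns an end temperature."""
    return start_temp_c - 3.0 + random.gauss(0.0, 0.5)

random.seed(42)
runs, snow = 50, 0
for _ in range(runs):
    perturbed = 2.0 + random.gauss(0.0, 1.0)  # a slightly different starting point
    if toy_model(perturbed) < 0.0:            # ends below freezing: call it snow
        snow += 1

print(f"Snow developed in {snow} of {runs} runs: "
      f"roughly a {100 * snow // runs}% chance of snow")
```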

Computers don't have the last word. After the models have their say, their output is converted to user-friendly graphics and, in the U.S., passed along to forecast offices, where flesh-and-blood meteorologists second-guess the machines. One fall day Bruce Terry was the lead forecaster on duty, sitting at a workstation flanked by computer screens.

The windows were shaded to keep out glare, hiding the weather outside. All the action was on the screens. On one a radar readout showed a blue-green smear curving up from the southern plains states toward the Ohio Valley; another displayed a satellite view of the same region, veiled in gray cloud. On that particular day Terry's job was to decide where the rain would fall, and how much. Forecasting precipitation, he said, is one of his greatest challenges.

Forecasting surprises turn up in other fields, too. At the first DARPA Grand Challenge, a 2004 contest for driverless vehicles, not a single robot finished the course. But just 19 months later, at the second Grand Challenge, five robots completed it.

Around the same time I noticed a sudden new robot mini-craze, the Roomba robotic vacuum, popping up, which many people dismissed as just another passing fad. What was odd was that my friends with Roombas were as wildly enthusiastic about these machines as they had been about their original 128K Macs—and being engineers, they had never before shown any interest in owning, much less been excited by, a vacuum cleaner. Alone, this is just a curious story, but considered with the Grand Challenge success, it is another compelling indicator that a robotics inflection point lies in the not-too-distant future.

One indicator: Roomba owners today can even buy costumes for their robots!

One of the biggest mistakes a forecaster—or a decision maker—can make is to rely too heavily on one piece of seemingly strong information because it happens to reinforce the conclusion he or she has already reached. This lesson was tragically underscored in 1923, when nine U.S. Navy destroyers ran aground off the central California coast. The navigator of the lead ship, the Delphy, had obtained a radio direction-finding bearing shortly before the accident. The bearing placed his ship north of its dead reckoning position.

Convinced that his dead reckoning was accurate, the commander reinterpreted the bearing data in a way that confirmed his erroneous position and ordered a sharp course change toward the rapidly approaching coast. Nine ships followed the disastrous course. Meanwhile, the deck officers on the Kennedy, the 11th boat in the formation, had concluded from their dead reckoning that they in fact were farther north and closer to shore than the position given by the Delphy.

The skipper was skeptical, but the doubt the deck officers raised was sufficient for him to hedge his bets; an hour before the fateful turn he ordered a course change that placed his ship several hundred yards to the west of the ships in front of them, allowing the Kennedy and the three trailing destroyers to avert disaster.

The skipper hedged his bets and thereby saved his ship. In forecasting, as in navigation, lots of interlocking weak information is vastly more trustworthy than a point or two of strong information. The problem is that traditional research habits are based on collecting strong information.

And once researchers have gone through the long process of developing a beautiful hypothesis, they have a tendency to ignore any evidence that contradicts their conclusion.

This inevitable resistance to contradictory information is responsible in no small part for the nonlinear process of paradigm shifts identified by Thomas Kuhn in his classic The Structure of Scientific Revolutions.

Once a theory gains wide acceptance, there follows a long stable period in which the theory remains accepted wisdom. All the while, however, contradictory evidence is quietly building that eventually results in a sudden shift. Good forecasting is the reverse: It is a process of strong opinions, weakly held. If you must forecast, then forecast often—and be the first one to prove yourself wrong. The way to do this is to form a forecast as quickly as possible and then set out to discredit it with new data.

Your next step is to try to find out why this might not happen. By formulating a sequence of failed forecasts as rapidly as possible, you can steadily refine the cone of uncertainty to a point where you can comfortably base a strategic response on the forecast contained within its boundaries.

Having strong opinions gives you the capacity to reach conclusions quickly, but holding them weakly allows you to discard them the moment you encounter conflicting evidence.

Marshall McLuhan once observed that too often people steer their way into the future while staring into the rearview mirror because the past is so much more comforting than the present. McLuhan was right, but used properly, our historical rearview mirror is an extraordinarily powerful forecasting tool. Consider the uncertainty generated by the post-bubble swirl of the Web, as incumbents like Google and Yahoo, emergent players, and declining traditional TV and print media players jockey for position.

It all seems to defy categorization, much less prediction, until one looks back five decades to the emergence of TV in the early 1950s and the mass-media order it helped catalyze. The cutting-edge players of the information revolution, from Microsoft to Google, are pedaling every bit as hard as those TV-era pioneers once did. The problem with history is that our love of certainty and continuity often causes us to draw the wrong conclusions.

The recent past is rarely a reliable indicator of the future—if it were, one could successfully predict the next 12 months of the Dow or Nasdaq by laying a ruler along the past 12 months and extending the line forward.

You must look for the turns, not the straightaways, and thus you must peer far enough into the past to identify patterns. So when you look back for parallels, always look back at least twice as far as you are looking forward.

Search for similar patterns, keeping in mind that history—especially recent history—rarely repeats itself directly. The temptation is to use history, as the old analogy goes, the way a drunk uses a lamppost: for support rather than illumination.

Jerry Levin, for instance, sold Time Warner to AOL in the mistaken belief that he could use mergers and acquisitions to shoulder his company into digital media the way he had so successfully done with cable and movies. Another case in point: A dark joke at the Pentagon is that the U.S. military is always perfectly prepared to fight the last war.

It is a peculiar human quality that we are at once fearful of—and fascinated by—change. Even in periods of dramatic, rapid transformation, there are vastly more elements that do not change than new things that emerge. Consider again that whirling vortex of the 1990s, the dot-com bubble. Plenty new was happening, but underlying the revolution were deep, unchanging consumer desires and, ultimately, to the sorrow of many a start-up, unchanging laws of economics. By focusing on the novelties, many missed the fact that consumers were using their new broadband links to buy very traditional items like books and engage in old human activities like gossip, entertainment, and pornography.

And though the future-lookers pronounced it to be a time when the old rules no longer applied, the old economic imperatives applied with a vengeance and the dot-com bubble burst just like every other bubble before it. Anyone who had taken the time to examine the history of economic bubbles would have seen it coming.

Against this backdrop, it is important to note that there are moments when forecasting is comparatively easy—and other moments when it is impossible.

A weather forecast rests on many estimated inputs. So instead of the forecast being "either this happens, or not," we get many different results if any one or all of those inputs are estimated incorrectly.

This produces a "distribution" of results. Verification studies fit curves to actual forecast outcomes, and those curves range from narrow uncertainty to broad uncertainty. The distributions that fit the kind of forecasts we are making in the forecast contest fall between the two, but closer to the narrow end.

The further one goes out from the present, the more the distribution flattens toward broad uncertainty, until the forecast verification distribution shows no skill at all. At that point, climatology does better than the forecaster at estimating conditions. These curves can be used to assess the probability of a given result.
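
As a rough sketch of that concept, one can treat the verification distribution as a normal curve whose spread grows with lead time until it matches an assumed climatological spread; every number below is invented.

```python
# Event probability from a lead-time-dependent distribution. Once the
# forecast spread reaches the climatological spread, the forecast adds
# no skill beyond climatology.
from statistics import NormalDist

CLIMO_SIGMA = 8.0  # assumed climatological spread of temperature (deg C)

def chance_below_freezing(forecast_mean, lead_days):
    sigma = min(2.0 + lead_days, CLIMO_SIGMA)  # broader curve at longer leads
    return NormalDist(forecast_mean, sigma).cdf(0.0)

for lead in (1, 3, 7, 14):
    p = chance_below_freezing(forecast_mean=-2.0, lead_days=lead)
    print(f"day {lead:2d}: chance of below freezing = {p:.2f}")
```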

This brings us to yet another type of probability, called subjective probability. It can be defined in a variety of ways, but the sort of definition that makes most sense in the context of weather forecasting is that the subjective probability of a particular weather event is associated with the forecaster's uncertainty that the event will occur.

This subjective probability is just as legitimate as a probability derived from some other process, like the geometric- or frequency-derived probabilities just described. Obviously, two different forecasters might arrive at quite different subjective probabilities, and some might worry about whether their subjectively derived probabilities are right or wrong.

If the meteorologists disagree with the model, then they must determine a different outlook for their forecast.

Monitoring the data from all of these tools allows meteorologists to track changes in the weather through time. Based on what you observed in the past, what do you think you will be doing in the future, specifically on October 31st?

In other words, Halloween may occur on October 31st every year, but you may not necessarily wear the same costume or choose the same route to trick-or-treat. A snow storm may set up a similar pattern to one in the past, but produce a different amount of snow in a different part of the state.

A meteorologist must monitor the current conditions during a weather event, and use their knowledge of weather similarities and differences to discern what is going to happen. That was an excellent question, and I hope my answer inspired you to study the weather, too!

Predicting the weather is certainly a tricky task, and all meteorologists strive to do the best job they can. In the meantime, happy storm spotting! Meteorologist Steve Nelson explains the different parameters that meteorologists look for when predicting winter weather.


