
What spaghetti models do and don’t tell us about a hurricane

As the 2020 Atlantic hurricane season heads into its peak months, Brian McNoldy, senior research associate at the Rosenstiel School of Marine and Atmospheric Science, answers questions about hurricane spaghetti models.
Spaghetti models is the nickname coined for computer models showing where a tropical cyclone may go. Photo: City of St. Petersburg

Already a record-setter, the 2020 Atlantic hurricane season is poised to make even more history in the months ahead, as warmer-than-average sea surface temperatures in the tropical Atlantic Ocean and Caribbean, reduced vertical wind shear, and other conditions are likely to make the season an “extremely active” one, according to the National Oceanic and Atmospheric Administration. 

The agency’s updated outlook, released Thursday, calls for 19 to 25 named storms—of which seven to 11 will become hurricanes, including three to six major hurricanes (winds of 111 miles per hour or greater). The update covers the entire six-month hurricane season, which ends Nov. 30, and includes the nine named storms to date.  

“The Accumulated Cyclone Energy in the Atlantic is now 266 percent above average through Aug. 5—this is what it normally would be on Aug. 28,” stated Brian McNoldy, senior research associate and tropical cyclone expert at the University of Miami Rosenstiel School of Marine and Atmospheric Science, on his Twitter page, referring to the index that measures the combined intensity and duration of all named storms during the season.


Weaker tropical Atlantic trade winds, an enhanced West African monsoon, and the possibility of La Niña—the colder counterpart of the El Niño-Southern Oscillation climate pattern—are other factors driving the likelihood of an amped up hurricane season. 

As we head into the peak months for hurricane development, spaghetti models, the nickname coined for computer models showing where a tropical cyclone may go, are sure to get a workout. When clustered and displayed on a map, the model tracks resemble colorful strands of spaghetti. 

As storms make their way across the Atlantic or Gulf of Mexico, meteorologists often display these models on local television weather forecasts and on news and weather websites. But just how effective are these models, and what goes into creating them? McNoldy answers these questions and more. 

Who makes the models, and what type of data are used to create them?

There is a lot of preliminary information to explain regarding this topic. In order to understand the basics of a spaghetti model, one needs to know the difference between deterministic and ensemble models, global and regional models, dynamical and statistical models (and hybrids), etc. Then each model’s cryptic identifier has to be matched to it when you see the lines on a map. Here’s a really brief overview:

There are agencies around the world that run the models we look at, and in the end, if there is an active tropical cyclone somewhere, each run from each model is squashed down to a little text file that includes that storm’s forecast intensity, central pressure, and location out to at least five days. It is this uniform file that allows everyone to produce the same maps. Though different outlets show or omit different models, they’re all in there for the taking. 
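To give a flavor of how those uniform text files turn into spaghetti maps, here is a minimal Python sketch that reads a simplified, made-up track file and draws one line per model. The column layout, model identifiers, and coordinates below are illustrative assumptions, not the real operational format.

```python
# Minimal sketch: parse a simplified, illustrative track file and draw a
# "spaghetti" plot. The field layout here is hypothetical -- real operational
# files carry many more fields and stricter formatting.
import io
from collections import defaultdict
import matplotlib.pyplot as plt

# Each line: model_id, forecast_hour, latitude, longitude (illustrative values)
SAMPLE = """\
MODL1,0,16.5,-55.0
MODL1,24,17.8,-60.2
MODL1,48,19.4,-65.9
MODL2,0,16.5,-55.0
MODL2,24,18.2,-61.0
MODL2,48,20.5,-67.5
"""

def read_tracks(text):
    """Group forecast points by model identifier."""
    tracks = defaultdict(list)
    for line in io.StringIO(text):
        model, hour, lat, lon = line.strip().split(",")
        tracks[model].append((int(hour), float(lat), float(lon)))
    return tracks

tracks = read_tracks(SAMPLE)
for model, points in tracks.items():
    points.sort()                      # order by forecast hour
    lons = [p[2] for p in points]
    lats = [p[1] for p in points]
    plt.plot(lons, lats, marker="o", label=model)

plt.xlabel("Longitude")
plt.ylabel("Latitude")
plt.legend()
plt.title("Illustrative spaghetti plot of model tracks")
plt.show()
```

Because every model run is reduced to the same kind of simple track file, a plotting script like this can overlay any combination of models on one map, which is exactly why different outlets can produce the same spaghetti graphics.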

The dynamical models utilize millions and millions of observations of the atmosphere and ocean every single run through a process called data assimilation. The data sources include weather satellites, aircraft, radars, weather balloons, buoys, and the list goes on and on. The majority of data come from weather satellites. The very important job of data assimilation is to combine, quality-control, and optimize all of those observations taken from around the world over the course of a few hours to arrive at the most representative “initial state” of the atmosphere to give to the forecast model. This is not a trivial step, and no two data assimilation schemes will arrive at the exact same initial state, even if they use the same set of observations. 
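As a rough illustration of the idea behind data assimilation, the toy Python example below blends a single model first guess with a single observation, weighting each by its assumed error. Real schemes apply the same principle to millions of variables and observations at once; the numbers and variances here are invented.

```python
# Toy scalar "data assimilation" step: blend a model first guess (background)
# with an observation, weighting each by how uncertain it is. This is only a
# one-number illustration of the principle, not an operational scheme.

def assimilate(background, obs, background_var, obs_var):
    """Combine background and observation assuming Gaussian errors
    (the scalar optimal-interpolation / Kalman update)."""
    gain = background_var / (background_var + obs_var)   # weight on the observation
    analysis = background + gain * (obs - background)
    analysis_var = (1.0 - gain) * background_var
    return analysis, analysis_var

# Example: the model first guess says 28.0 C sea surface temperature (less certain),
# a buoy reports 29.0 C (more certain); the analysis lands closer to the buoy.
analysis, analysis_var = assimilate(background=28.0, obs=29.0,
                                    background_var=1.0, obs_var=0.25)
print(f"analysis = {analysis:.2f} C, variance = {analysis_var:.2f}")
```

The point of the sketch is only that the "initial state" handed to a forecast model is a weighted compromise between what the model expected and what was actually observed, which is also why two assimilation schemes fed the same observations can still start from slightly different states.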

Are supercomputers used? 

Very much so. Weather forecast models are extremely complex and computationally intensive. Even on some of the fastest supercomputers, each model run takes several hours. It must be configured to run fast enough that the output is not irrelevant by the time it’s finished (what good does a 12-hour forecast do if it takes 12 hours to see what it is?). It is this time crunch that limits things such as grid resolution and the number of ensemble members. However, some of the “models” that are run and shown on spaghetti maps are extremely simple and run in seconds to minutes. 

How far out (number of days) do the models typically go? 

That depends on the type of model. The global models are run out in time for two to three weeks. Regional hurricane models, as of now, are typically run out for five days. One could run a model out in time for months. But since the atmosphere is a chaotic system by nature, non-linear and sub-grid-scale effects will eventually dominate. It is fundamentally impossible to accurately predict the future of a chaotic system beyond some time. That time isn’t fixed or exact, but it is referred to as the “limit of predictability.” The atmosphere can be less predictable than average at times, while at other times the length of skillful forecasts is longer. And in general, the larger the feature of interest (such as continental-scale troughs and ridges), the longer out in time it can be skillfully predicted. This discussion does not consider another class of models, which are capable of “sub-seasonal” forecasts that have skill for weeks to months. Beyond that, another class of models can predict climate for years and decades into the future. 
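A classic way to see that limit of predictability is to run a toy chaotic system, such as the Lorenz-63 equations (a standard teaching model, not an actual forecast model), from two nearly identical starting points and watch the solutions drift apart. The short Python sketch below does exactly that; the step size and initial values are illustrative choices.

```python
# Illustration of the "limit of predictability": integrate the Lorenz-63
# system from two almost identical initial states and watch them diverge.
import numpy as np

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz-63 equations."""
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return state + dt * np.array([dx, dy, dz])

a = np.array([1.0, 1.0, 1.0])
b = a + np.array([1e-6, 0.0, 0.0])   # tiny perturbation to the initial state

for step in range(1, 3001):
    a = lorenz_step(a)
    b = lorenz_step(b)
    if step % 500 == 0:
        print(f"t = {step * 0.01:5.1f}  separation = {np.linalg.norm(a - b):.3e}")

# The separation grows roughly exponentially until the two runs are no more
# alike than two randomly chosen states -- the analogue of losing forecast skill.
```

The tiny initial difference is meant to stand in for unavoidable observation and analysis errors; once it has grown to the size of the system itself, the forecast carries no more information than climatology.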

There seem to be a lot of these models, and while the lines have different tracks, they often seem to be clustered. Is it helpful for forecasters that so many models exist? 

There are quite a few of them. But they should not all be viewed as equals. They definitely are not. In order to correctly interpret these maps, you absolutely must know what model each of the lines represents, the type of model it is, and if it has known biases. Most people do not and should not need to know all of that, so these maps should be ignored. It is unfortunate that they are shown and shared so widely because it leads to unnecessary confusion—the tracks are all over the place. The forecasters at the National Hurricane Center have the task of distilling the model guidance down to what they consider to be the best forecast at the time, and it is the products available from NHC that the general public should consume.

As far as the last part of the question goes, yes, having so many is helpful to forecasters. In fact, this is precisely why ensembles are run and why modeling centers try to run their ensembles with as many members as computationally possible. One can create a multimodel ensemble (which is essentially what you think of when you think of a spaghetti map), but global models also produce their own ensemble runs. The European model runs theirs with 51 members, for example. The reason for this is to account for model and observation errors and uncertainties. Neither is perfect, even at the beginning. So, each ensemble member starts from an ever-so-slightly different initial state and is then free to evolve in the model. With a large number of realistic scenarios, one can derive probabilistic information (60 percent chance of yes, 40 percent chance of no), which is far more valuable than deterministic information (yes or no). 
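To illustrate how perturbed ensemble members become a percent chance, here is a deliberately simplified Python sketch in which a made-up drift-plus-noise "storm track" is run 51 times from slightly different starting positions, and the members ending north of a hypothetical latitude are counted. Nothing in it resembles a real forecast system; the drift rates, perturbation sizes, and threshold are all invented for illustration.

```python
# Sketch of "many perturbed runs -> percent chance": run a toy track model
# from slightly perturbed initial positions and count the outcomes.
import random

random.seed(1)

def toy_track(start_lat, start_lon, hours=120, step=6):
    """Advance a made-up storm position with a steady drift plus random steering."""
    lat, lon = start_lat, start_lon
    for _ in range(0, hours, step):
        lat += 0.35 + random.gauss(0.0, 0.15)   # mean northward drift + noise
        lon += -0.50 + random.gauss(0.0, 0.15)  # mean westward drift + noise
    return lat, lon

members = 51                 # e.g., the size of the European ensemble
threshold_lat = 25.0         # hypothetical latitude of interest
hits = 0
for _ in range(members):
    # Perturb the initial position slightly, as ensembles perturb the initial state.
    lat0 = 18.0 + random.gauss(0.0, 0.3)
    lon0 = -60.0 + random.gauss(0.0, 0.3)
    final_lat, _ = toy_track(lat0, lon0)
    if final_lat >= threshold_lat:
        hits += 1

print(f"{100 * hits / members:.0f}% of members end north of {threshold_lat} N")
```

The fraction of members that cross the threshold plays the role of the probabilistic information McNoldy describes: instead of a single yes-or-no track, the ensemble yields something like a 60 percent chance of one outcome and a 40 percent chance of the other.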

We hear a lot about the American and European models. Which is more reliable? 

These are two of the global models that are run. The American model refers to the GFS (Global Forecast System), run by the U.S. National Weather Service; the European model refers to the IFS (Integrated Forecasting System), run by the European Centre for Medium-Range Weather Forecasts (ECMWF). Overall, the IFS is more skillful, but not always. Each situation needs to be handled uniquely, because one model is not always more accurate than the other. 

What do spaghetti models not tell us?

They do not tell us what will happen in the future; they offer scenarios or possibilities. Some of them are much less likely than others, but sometimes even a few skillful models do not agree with each other—which is a sign that there are subtle features in the atmosphere that can lead to very different outcomes. When this happens, forecasters have much less confidence in a forecast than normal. At the time, we of course do not know which one is right, if any. It is really important to never focus on a single model run or even a single model, but to watch them carefully over time for biases and trends.

