Climate

The Only Weather Models That Nailed the Texas Floods Are on Trump’s Chopping Block

Predicting the location and severity of thunderstorms is at the cutting edge of weather science. Now funding for that science is at risk.

Texas flooding. Heatmap Illustration/Getty Images

Tropical Storm Barry was, by all measures, a boring storm. “Blink and you missed it,” as a piece in Yale Climate Connections put it after Barry formed, then dissipated over 24 hours in late June, having never sustained wind speeds higher than 45 miles per hour. The tropical storm’s main impact, it seemed at the time, was “heavy rains of three to six inches, which likely caused minor flooding” in Tampico, Mexico, where it made landfall.

But a few days later, U.S. meteorologists started to get concerned. The remnants of Barry had swirled northward, pooling wet Gulf air over southern and central Texas and pushing atmospheric moisture to or beyond record levels for July. “Like a waterlogged sponge perched precariously overhead, all the atmosphere needed was a catalyst to wring out the extreme levels of water vapor,” meteorologist Mike Lowry wrote.

More than 100 people — many of them children — ultimately died as extreme rainfall caused the Guadalupe River to rise 34 feet in 90 minutes. But the tragedy was “not really a failure of meteorology,” UCLA and UC Agriculture and Natural Resources climate scientist Daniel Swain said during a public “Office Hours” review of the disaster on Monday. The National Weather Service in San Antonio and Austin first warned the public of the potential for heavy rain on Sunday, June 29 — five days before the floods crested. The agency followed that with a flood watch warning for the Kerrville area on Thursday, July 3, then issued an additional 21 warnings, culminating just after 1 a.m. on Friday, July 4, with a wireless emergency alert sent to the phones of residents, campers, and RVers along the Guadalupe River.

The NWS alerts were both timely and accurate, even correctly anticipating rainfall rates of 2 to 3 inches per hour. Judged on the science alone, the official response might have been deemed a success.

Of all storm types, convective storms, the kind that produce thunderstorms, hail, tornadoes, and extreme rainfall, are among the most difficult to forecast. “We don’t have very good observations of some of these fine-scale weather extremes,” Swain told me after office hours were over, in reference to severe meteorological events that are often relatively short-lived and occur in small geographic areas. “We only know a tornado occurred, for example, if people report it and the Weather Service meteorologists go out afterward and look to see if there’s a circular, radial damage pattern.” A hurricane, by contrast, spans hundreds of miles and is visible from space.

Global weather models, which predict conditions at a planetary scale, are relatively coarse in their spatial resolution and “did not do the best job with this event,” Swain said during his office hours. “They predicted some rain, locally heavy, but nothing anywhere near what transpired.” (And before you ask — artificial intelligence-powered weather models were among the worst at predicting the Texas floods.)

Over the past decade or so, however, the National Oceanic and Atmospheric Administration and other meteorological agencies, responding to the United States’ unique convective storm risks, have developed specialized high-resolution, convection-resolving models to better represent and forecast extreme thunderstorms and rainstorms.
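To make the resolution gap concrete: a numerical model can only represent features that span several grid cells, so a storm core a few miles across is effectively invisible to a global model whose grid points sit a dozen or more kilometers apart. Here is a minimal back-of-the-envelope sketch of that rule of thumb; the grid spacings, feature sizes, and the five-cell multiplier are all illustrative assumptions, not the specifications of any particular NOAA model.

```python
# Rule-of-thumb check: a model only "resolves" a feature that spans
# several grid cells (a commonly cited multiplier is ~4-7x the grid
# spacing). Every number below is an illustrative assumption.

CELLS_NEEDED = 5  # assumed cells required across a feature to resolve it

grid_spacings_km = {
    "coarse global model (~25 km)": 25.0,
    "finer global model (~13 km)": 13.0,
    "convection-resolving model (~3 km)": 3.0,
}

feature_sizes_km = {
    "individual thunderstorm cell": 20.0,
    "organized rainstorm complex": 100.0,
    "hurricane": 500.0,
}

for model, dx in grid_spacings_km.items():
    print(model)
    for feature, size in feature_sizes_km.items():
        ok = size >= CELLS_NEEDED * dx
        print(f"  {feature} ({size:.0f} km): "
              f"{'resolvable' if ok else 'below effective resolution'}")
```

With these assumed numbers, a hurricane is comfortably resolved at every spacing, but an individual thunderstorm cell clears the threshold only on the kilometer-scale grid, which is the basic case for convection-resolving models.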

NOAA’s cutting-edge specialized models “got this right,” Swain told me of the Texas storms. “Those were the models that alerted the local weather service and the NOAA Weather Prediction Center of the potential for an extreme rain event. That is why the flash flood watches were issued so early, and why there was so much advanced knowledge.”

Writing for The Eyewall, meteorologist Matt Lanza concurred with Swain’s assessment: “By Thursday morning, the [high resolution] model showed as much as 10 to 13 inches in parts of Texas,” he wrote. “By Thursday evening, that was as much as 20 inches. So the [high resolution] model upped the ante all day.”

To be any more accurate than they ultimately were on the Texas floods, meteorologists would have needed the ability to predict the precise location and volume of rainfall of an individual thunderstorm cell. Although models can provide a fairly accurate picture of the general area where a storm will form, the best current science still can’t achieve that level of precision more than a few hours in advance of a given event.

Climate change itself is another factor making storm behavior even less predictable. “If it weren’t so hot outside, if it wasn’t so humid, if the atmosphere wasn’t holding all that water, then [the system] would have rained and marched along as the storm drifted,” Claudia Benitez-Nelson, an expert on flooding at the University of South Carolina, told me. Instead, slow and low prevailing winds caused the system to stall, pinning it over the same worst-case-scenario location at the confluence of the Hill Country rivers for hours and challenging the limits of science and forecasting.

Though it’s tempting to blame the Trump administration’s cuts to the staff and budget of the NWS for the tragedy, the agency actually had more forecasters on hand than usual in its local field office ahead of the storm, in anticipation of potential disaster. And any budget cuts to the NWS, while potentially disastrous, would not take effect until fiscal year 2026.

The proposed 2026 budget for NOAA, however, would zero out the upkeep of the models, as well as shutter the National Severe Storms Laboratory in Norman, Oklahoma, which studies thunderstorms and rainstorms, such as the one in Texas. And due to the proprietary, U.S.-specific nature of the high-resolution models, there is no one coming to our rescue if they’re eliminated or degraded by the cuts.

The impending cuts also alarm the scientists charged with maintaining and adjusting the models to ensure maximum accuracy. Computationally, it’s no small task to keep them running 24 hours a day, every day of the year. A weather model doesn’t simply run on its own indefinitely; it requires large data transfers and a steady intake of new conditions from its network of observation stations to remain reliable. Although the NOAA high-resolution models have been in use for about a decade, yearly updates keep the programs on the cutting edge of weather science; without constant tweaks, the models’ accuracy slowly degrades as the atmosphere changes and information and technologies become outdated.
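As a toy illustration of that dependency, consider the sketch below. The drift term, observation noise, and blending weight are arbitrary stand-ins, and real NOAA systems use far more sophisticated data assimilation than a simple weighted average; the point is only that a forecast cycle that stops ingesting observations accumulates error that a corrected cycle does not.

```python
import random

# Toy forecast cycle illustrating why a model needs a steady intake of
# observations. All numbers are arbitrary assumptions; real data
# assimilation is vastly more sophisticated than this weighted blend.

random.seed(1)

TRUTH = 50.0        # the "real" atmospheric value (arbitrary units)
OBS_WEIGHT = 0.4    # assumed trust placed in each new observation

with_obs = 48.0     # run that blends in observations every cycle
without_obs = 48.0  # run that never ingests new data

for cycle in range(1, 9):
    drift = random.uniform(-2.0, 2.0)     # shared model error per cycle
    with_obs += drift
    without_obs += drift
    obs = TRUTH + random.gauss(0.0, 0.5)  # noisy station report
    # Analysis step: pull the assimilating run back toward the data.
    with_obs = (1 - OBS_WEIGHT) * with_obs + OBS_WEIGHT * obs
    print(f"cycle {cycle}: error with obs {abs(with_obs - TRUTH):5.2f} | "
          f"without obs {abs(without_obs - TRUTH):5.2f}")
```

Run with different seeds, the uncorrected run wanders while the assimilating run stays pinned near the observations, which is why upkeep, data feeds, and yearly retuning are not optional extras.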

The Texas floods were catastrophic, but without the NOAA models and the NWS warnings and alerts they would almost certainly have been deadlier. Still, local Texas authorities have attempted to pass the blame, claiming they weren’t adequately informed of the dangers by forecasters. The picture will become clearer as reporting continues to probe why the flood-prone region did not have warning sirens, why camp counselors did not have their phones to receive overnight NWS alerts, why there were not more flood gauges on the rivers, and what, if anything, local officials could have done to save more people. Even so, given what is scientifically possible at this stage of modeling, “This was not a forecast failure relative to scientific or weather prediction best practices. That much is clear,” Swain said.

As the climate warms and extreme rainfall events increase as a result, however, it will become ever more crucial to have access to cutting-edge weather models. “What I want to bring attention to is that this is not a one-off,” Benitez-Nelson, the flood expert at the University of South Carolina, told me. “There’s this temptation to say, ‘Oh, it’s a 100-year storm, it’s a 1,000-year storm.’”

“No,” she went on. “This is a growing pattern.”

Adaptation

The ‘Buffer’ That Can Protect a Town from Wildfires

Paradise, California, is snatching up high-risk properties to create a defensive perimeter and prevent the town from burning again.

Homes as a wildfire buffer. Heatmap Illustration/Getty Images

The 2018 Camp Fire was the deadliest wildfire in California’s history, wiping out 90% of the structures in the mountain town of Paradise and killing at least 85 people in a matter of hours. Investigations afterward found that Paradise’s town planners had ignored warnings of the fire risk to its residents and forgone common-sense preparations that would have saved lives. In the years since, the Camp Fire has consequently become a cautionary tale for similar communities in high-risk wildfire areas — places like Chinese Camp, a small historic landmark in the Sierra Nevada foothills that dramatically burned to the ground last week as part of the nearly 14,000-acre TCU September Lightning Complex.

More recently, Paradise has also become a model for how a town can rebuild wisely after a wildfire. At least some of that is due to the work of Dan Efseaff, the director of the Paradise Recreation and Park District, who has launched a program to identify and acquire some of the highest-risk, hardest-to-access properties in the Camp Fire burn scar. Though he has a limited total operating budget of around $5.5 million and relies heavily on the charity of local property owners (he’s currently applying for a $15 million grant with a $5 million match for the program), Efseaff has nevertheless managed to build the beginnings of a defensible buffer of managed parkland around Paradise that could buy the town time in a future wildfire.

Spotlight

How the Tax Bill Is Empowering Anti-Renewables Activists

A war of attrition is now turning in opponents’ favor.

Massachusetts and solar panels. Heatmap Illustration/Library of Congress, Getty Images

A solar developer’s defeat in Massachusetts last week reveals just how much stronger project opponents are on the battlefield after the de facto repeal of the Inflation Reduction Act.

Last week, solar developer PureSky pulled five projects under development around the western Massachusetts town of Shutesbury. PureSky’s facilities had been in the works for years and together would have represented what the developer claimed would be one of the state’s largest solar projects to date. In a statement, the company laid the blame on “broader policy and regulatory headwinds,” including the state’s existing renewables incentives not keeping pace with rising costs and “federal policy updates,” which PureSky said were “making it harder to finance projects like those proposed near Shutesbury.”

Hotspots

The Midwest Is Becoming Even Tougher for Solar Projects

And more on the week’s most important conflicts around renewables.

The United States. Heatmap Illustration/Getty Images

1. Wells County, Indiana – One of the nation’s most at-risk solar projects may now be prompting a full-on moratorium.

  • Late last week, this county was teed up to potentially advance a new restrictive solar ordinance that would’ve cut off zoning access for large-scale facilities. That’s obviously bad for developers. But it would’ve still allowed solar facilities up to 50 acres and grandfathered in projects that had previously signed agreements with local officials.
  • However, solar opponents swamped the county Area Planning Commission meeting where the ordinance was to be decided, turning it into a more than four-hour display in which many commenters demanded an outright ban on solar projects, with no grandfathering clause.
  • It’s clear part of the opposition is inflamed over the EDF Paddlefish Solar project, which we ranked last year as one of the nation’s top imperiled renewables facilities in progress. The project has already resulted in a moratorium in another county, Huntington.
  • Although the Paddlefish project is not unique in its risks, we view it as a bellwether for the future of solar development in farming communities, as the Fort Wayne-adjacent county resembles many agricultural areas across the United States. Pro-renewables advocates have sought to tamp down opposition with tactics such as a direct text messaging campaign, which I scooped last week.
  • Yet despite the counter-communications, momentum is heading in the other direction. At the meeting, officials ultimately decided to punt a decision to next month so they could edit their draft ordinance to assuage aggrieved residents.
  • Also worth noting: anyone could see from Heatmap Pro data that this county would be an incredibly difficult fight for a solar developer. Despite a slim majority of local support for renewable energy, the county has a nearly 100% opposition risk rating, due in no small part to its large agricultural workforce and MAGA leanings.

2. Clark County, Ohio – Another Ohio county has significantly restricted renewable energy development, this time with big political implications.
