A conversation with the most interesting man on the Federal Energy Regulatory Commission.
It’s not every day that a top regulator calls into question the last few decades of policy in the area they help oversee. But that’s exactly what Mark Christie, a commissioner on the Federal Energy Regulatory Commission, the interstate power regulator, did earlier this year.
In a paper enticingly titled “It’s Time To Reconsider Single-Clearing Price Mechanisms in U.S. Energy Markets,” Christie gave a history of deregulation in the electricity markets and suggested it may have been a mistake.
While criticisms of deregulation are by no means new, that they were coming from a FERC commissioner was noteworthy — a Republican no less. While there is not yet a full-scale effort to reverse deregulation in the electricity markets, which has been going on since the 1990s, there is a rising tide of skepticism of how electricity markets do — and don’t — reward reliability, let alone the effect they have on consumer prices.
Christie’s criticisms have a conservative bent, as you’d expect from someone who was nominated by former President Donald Trump to the bipartisan commission. He is deeply concerned about existing generation going offline and has called activist drives against natural gas pipelines and other fossil fuel transportation infrastructure a “national campaign of legal warfare…[that] has prevented the construction of vitally needed natural gas transportation infrastructure.”
Since renewables have become, at times, among the world’s cheapest sources of energy and thus quite competitive in deregulated markets with fossil fuels (especially when subsidized), this kind of skepticism is a growing issue in the Republican Party, which has deep ties to oil and gas companies. The Texas state legislature, for instance, responded to Winter Storm Uri, which almost destroyed Texas’ electricity grid in 2021, with its own version of central planning: billions in low cost loans for the construction of new gas-fired power plants. Former Texas Governor Rick Perry, as secretary of energy in the Trump administration, even proposed to FERC a plan to explicitly subsidize coal and nuclear plants, citing reliability concerns. (FERC rejected it.) Some regions that didn’t embrace deregulation, like the Southeast and Southwest, also have some of the most carbon-intensive grids.
But Christie is not so much a critic of renewable resources like wind and solar, per se, as he is focused on the benefits to the grid of ample “dispatchable” resources, i.e. power sources that can ramp up and down on demand.
This doesn’t have to mean uncritical acceptance of existing fossil fuel infrastructure. The idea that markets don’t reward reliability enough can help explain the poor winterization for fossil fuel generation that was so disastrous during Winter Storm Uri. And in California, the recognition that renewables alone can’t power the grid 24 hours a day has led to a massive investment in energy storage, which can help approximate the on-demand nature of natural gas or coal without the carbon pollution.
But Christie is primarily interested in the question of just how the planning is done for a system that links together electric generation and consumers. He criticized the deregulated system in much of the country where power is generated by companies separate from the utilities that ultimately sell and distribute that power to customers and where states have less of a role in overall planning, despite ultimately approving electricity rates.
Instead, these markets for power are mediated through a system in which utilities pay independent generators a single price for their power at a given time, arrived at through bidding, often within sprawling multi-state regional transmission organizations like PJM Interconnection, which covers a large swath of the Midwest and Mid-Atlantic, or the New England Independent System Operator. He says this set-up doesn’t do enough to incentivize dispatchable power, which may only come online when demand spikes, making the system less reliable overall, while showing little evidence that costs have gone down for consumers.
Every year, grid operators and their regulators — including Christie — warn of reliability issues. What Christie argues is that these reliability issues may be endemic to the deregulated system.
Here is where there could be common ground between advocates for an energy transition and conservative deregulation skeptics like Christie. While the combination of deregulation and subsidies has been great for getting solar and wind from zero to around 13 percent of the nation’s utility-scale electricity generation, any truly decarbonized grid will likely require intensive government supervision and planning. Ultimately, political authorities who are guiding the grid to be less carbon-intensive will be responsible for keeping the lights on no matter how cold, warm, sunny, or windy it happens to be. And that may not be something today’s electricity “markets” are up for.
I spoke with Christie in late June about how FERC gave us the electricity market we have today, why states might be better managers than markets, and what he’s worried about this summer. Our conversation has been edited for length and clarity.
What happened to our energy markets in the 1990s and 2000s where you think things started to go wrong?
In the late ‘90s, we had this big push called deregulation. And as I pointed out in the article, it really wasn’t “deregulation” in the sense that in the ‘70s, you know, the trucking and airlines and railroads were deregulated where you remove government price regulation and you let the market set the prices. That’s not what happened. It really was just a change of the price-setting construct and the regulatory construct.
It took what had been the most common form of regulation of utilities, where utilities are considered to be natural monopolies, and said we’re going to restructure these utilities and we’re going to let the generation part compete in these regional markets.
And, you know, from an economic standpoint, okay, so far so good. But there’s been a lot of questioning as to whether there’s really true competition. Many parts of the country also just didn’t do it.
I think there’s a serious question whether that’s benefiting consumers more than the cost-of-service model, where state regulators set the prices.
So if I’m an electricity consumer in one of the markets that’s more or less deregulated, how might reliability become an issue in my own home?
First of all, when you’re in one of these areas that are deregulated, essentially you’re paying the gas price. If it goes up, that’s what you’re going to pay. If it goes down, it looks really good.
But from the reliability standpoint, the question is whether these markets are procuring enough resources to make sure you have the power to keep your lights on 24/7. That is the big question to a consumer in a so-called deregulated state: Are these markets, which are now the main vehicle for buying generation resources, are they getting enough generation resources to make sure that your lights stay on, your heat stays on, and your air conditioning stays on?
Do you think there’s evidence that these deregulated markets are doing a worse job at that kind of procurement?
Well, let’s take, for example, PJM, which came out with an announcement in February that said they were going to lose over 40 gigawatts in the next five years. A gigawatt is 1,000 megawatts, so that’s a lot of power, that’s a lot of generating resources. And the independent market monitor has actually told me it is closer to 50 gigawatts. So all these units are going to retire, and they’re going to retire largely for economic reasons: They’re not getting sufficient compensation to stay open.
The essence of restructuring was that generating units are going to have to make their money in the market. They’re not going to get funding through what's called the “rate base,” which is the regulated, traditional cost-of-service model. They have to get it in the markets and theoretically, that sounds good.
But in reality, if they can’t get enough money to cover their costs, they’re going to retire, and then you don’t have those resources. Particularly in the RTOs [regional transmission organizations, i.e. the multi-state electricity markets], you’re seeing these markets result in premature retirements of generating resources. And so, now, why is that? It’s more of a problem in the RTOs than the non-RTOs because in the non-RTOs, they procure resources under the supervision of a state regulator through what’s called an integrated resource plan, or IRP.
The reason I think the advantage in reliability is with the non-RTOs is that those utilities have to prove to a state regulator that their resource plan makes sense, that they’re planning to buy generating resources. Whether they’re buying wind or solar or gas, whatever, they have to go to a state regulator and say, “Here’s our plan,” and then seek approval from that regulator. And if they’re shutting down units, the state regulator can say, “Wait a minute, you’re shutting down units that a few years ago you told us were needed for reliability, and now you’re telling us you want to shut them down.” So the state regulator can actually say, “No, you’re not going to shut that unit down. You’re going to keep running it.”
That’s why I think you have more accountability in the non-RTOs, because the state regulators can tell the utility, “You need more resources, go build it or buy it,” or “You already have resources, you’re not going to shut them down, we’re not going to let you.”
You don’t have that in an RTO. In an RTO, it’s all done through the market. The market decides, to the extent it has a mind. You know, it’s all the result of market operations. It’s not anybody saying whether it’s a good idea or not for a certain unit to shut down.
I find it interesting that a lot of the criticism of the deregulated system — and a lot of places that are not deregulated — come from more conservative states that would generally not think of themselves as having this kind of strong state role in economic policy. What’s different about electricity? Why do you think the politics of this line up differently than it would on other issues?
I don’t know. That’s an interesting question. I haven’t even thought about it in those terms.
I think it goes back to when deregulation took place in the mid-to-late ‘90s. Other than Texas, which went all the way, the states that probably went farthest on it were in the Northeast. Part of the reason why is because they already had very high consumer prices. I think deregulation was definitely sold as a way to reduce prices to consumers. It hasn’t worked out that way.
Whereas you look at the Southeast, which never went in for deregulation. The Southeastern states, which are still non-RTO states, had relatively low rates, so they didn’t see a problem to be fixed.
The other big trend since the 1990s and 2000s is the explosive growth of renewables, especially wind and solar. Is there something about deregulated electricity markets, the RTO system, that makes those types of resources economically more favorable than they would be under a different system?
Well, if you’re getting a very high subsidy, like wind and solar are getting, it means you can bid into the energy markets effectively at zero. And if you bid in at zero, you’re virtually guaranteed to be a winner. In a non-RTO state, a state that’s doing it through an integrated resource plan, the state regulator reviews the plan. That’s why I think an IRP approach is actually better for implementing wind and solar: You can deploy wind and solar as part of an integrated plan that includes enough balancing resources to make sure you keep the lights on.
To me an Integrated Resource Plan is a holistic process, where you can look at all the resources at your disposal: wind, solar, gas, as well as the demand side. And you can balance them all in a way that you think, “Okay, this balance is appropriate for us for the next three years, or four years, or five years.” Because you’re typically doing an IRP every three to five years anyway. And so I think it’s a good way to make sure you balance these resources.
In a market there’s no balancing. In a market it’s just winners and losers. And so wind and solar are almost always going to win because they have such massive subsidies that they’re going to get to offer in at a bid price of zero. The problem with that is they’re not going to get paid zero. They’re going to get paid the highest price [that all electricity suppliers get]. So they offer in at zero, but they get paid the highest price, which is going to be a gas price. It’s probably going to be the last gas unit to clear, that’s usually the one that’s the highest price unit. And yet because of the single clearing price mechanism, everybody gets that price. So you can offer it at zero to guarantee you clear, but then you’re going to get the highest price, usually a gas combustion turbine peaker.
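The single-clearing-price mechanism Christie describes can be illustrated with a short sketch. The bids, capacities, and demand below are invented for illustration, not actual market data:

```python
# Illustrative single-clearing-price (uniform-price) auction.
# All numbers here are hypothetical, not real market data.

def clear_market(bids, demand_mw):
    """bids: list of (name, offer_price_per_mwh, capacity_mw).
    Returns the clearing price and the dispatched units."""
    dispatched = []
    remaining = demand_mw
    clearing_price = None
    # Sort offers cheapest-first: the "merit order."
    for name, price, capacity in sorted(bids, key=lambda b: b[1]):
        if remaining <= 0:
            break
        take = min(capacity, remaining)
        dispatched.append((name, take))
        remaining -= take
        clearing_price = price  # the last (marginal) unit sets the price
    return clearing_price, dispatched

bids = [
    ("solar", 0, 400),       # subsidized renewables can offer at zero
    ("wind", 0, 300),
    ("nuclear", 20, 500),
    ("gas_ccgt", 45, 600),
    ("gas_peaker", 120, 300),
]

price, units = clear_market(bids, demand_mw=1900)
# Every dispatched unit, including the zero-bid solar and wind,
# is paid the marginal unit's price, here the gas peaker's $120.
print(price)  # 120
```

The zero bid guarantees dispatch, but the payout is set by the most expensive unit needed to meet demand, which is the dynamic Christie objects to.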
Do you think we would see as much wind and solar on the grid if it weren’t for the fact that a lot of the resources are benefiting from the pricing mechanism you describe?
I don’t think you can draw that conclusion because there are non-RTO states that have what’s called a mandatory RPS, a mandatory renewable portfolio standard. And so you can get there through a mandatory RPS and a cost-of-service model just as you can in a market. And actually, again, I think you can get there in a more balanced way, to make sure that reliability is not being threatened in the meantime.
To get back to what we’re talking about in the beginning, my understanding is that FERC, where you are now, played a large role in encouraging deregulation in the formation of RTOs. Is this something that your staff or other commissioners disagree with you about? How do you see the role you’re playing, where you’re doing public advocacy and reshaping this conversation around deregulation?
First of all, we always have to give the standard disclaimer, you never talk about a pending case. But FERC was really the driving force behind a lot of this deregulation. So obviously, they decided that that’s what they wanted to push, and they did. And so I think it’s appropriate as a FERC regulator to raise questions. I think raising questions about the status quo is an important thing that we do and should do. Ultimately, you advocate for what you think it ought to be and if the votes come eventually, it might take several years, but it’s important.
One of the things I try to do is put the consumer at the center of everything I do. It is absolutely my priority. And I think that it should be every regulator’s priority, particularly in the electric area, because most consumers in America — in fact, almost all consumers in America — are captive customers. By captive, I mean they don’t get to choose their electric supplier.
Like, where do you live, Matthew?
I live in New York City.
You don’t get to choose, right? You’re getting electricity from ConEd. And you don’t have any choice. So you’re a captive customer. And most consumers in America are captive customers. We tried this retail choice in a few states, and it didn’t work. You know, they’re still doing it. I’m not going to say whether it’s working or not, but I know we tried it in Virginia, and it didn’t work at all for a lot of reasons.
I always put customers first and say, “Look, these customers are captive. We have to protect them. We have to protect the captive customers by making sure they’re not getting overcharged.” So that’s why I care about these issues. And that’s why I wrote this article. I think that customers in a lot of ways in America are not getting treated fairly. They’re getting overcharged and I think they’re not getting what they should be getting. And so I think a big part of it is some of this stuff that FERC's been pushing for the last 25 years.
Our time is running out. So I will leave with a question that is topical: It’s already been quite hot in Texas, but outside of Texas and in FERC-land, where are you concerned about reliability issues this summer?
Well, I’m concerned about everywhere. It’s not a flippant remark. I read very closely the reliability reports that we get from NERC and we have reliability challenges in many, many places. It’s not just in the RTOs. I think we have reliability challenges in the South. Fortunately, the West this year, which has been a problem the last couple of years, is actually looking pretty good because all the rain last winter — even flooding — really was great for hydropower.
I’m from California, and I think it’s the first time in my adult life that I remember stories about dams being 100 percent, if not more than 100 percent, full.
The rains and snowfall were so needed. They filled up reservoirs that had been really dry for years. And from an electrical standpoint, it’s been really good for hydro. They’re looking at really good hydro availability this summer in ways they haven’t been for the last several years. So the West, because of all the rain and the greater availability of hydro, I think is in fairly good shape.
There’s a problem in California with the duck curve; the problem is still there. If you have such a high solar content, when the sun goes down, obviously the solar stops generating, and so what do you do, you know, for the next four to five hours? Because the air conditioners are still running, it’s still hot, but that solar production has just dropped off the table. So they’ve been patching it with some battery storage and some gas backup.
But I’m worried about everywhere. I watch very closely the reports that come out of the RTOs and you can’t be shutting down dispatchable resources at the rate we’re doing when you’re not replacing them one to one with wind or solar. The arithmetic doesn’t work and it’s going to catch up to us at some point.
The Senate told renewables developers they’d have a year to start construction and still claim a tax break. Then came an executive order.
Renewable energy advocates breathed a sigh of relief after a last-minute change to the One Big Beautiful Bill Act stipulated that wind and solar projects would be eligible for tax credits as long as they began construction within the next 12 months.
But the new law left an opening for the Trump administration to cut that window short, and now Trump is moving to do just that. The president signed an executive order on Monday directing the Treasury Department to issue new guidance for the clean electricity tax credits “restricting the use of broad safe harbors unless a substantial portion of a subject facility has been built.”
The broad safe harbors in question have to do with the way the government defines the “beginning of construction,” which, in the realm of federal tax credits, is a term of art. Under the current Treasury guidance, developers must either complete “physical work of a significant nature” on a given project or spend at least 5% of its total cost to prove they have started construction during a given year, and are therefore protected from any subsequent tax law changes.
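The two-pronged test can be sketched roughly as follows. This is a simplified illustration with invented project figures; the actual Treasury guidance attaches many more conditions to both prongs:

```python
# Simplified sketch of the two "beginning of construction" safe harbors
# under the current Treasury guidance. All figures are hypothetical.

def began_construction(total_cost, spent_so_far, significant_physical_work):
    """A project qualifies if it has completed physical work of a
    significant nature OR has incurred at least 5% of total cost."""
    five_percent_test = spent_so_far >= 0.05 * total_cost
    return significant_physical_work or five_percent_test

# A hypothetical $200M solar project that has ordered $12M of equipment
# clears the 5% threshold ($10M) even with no physical work yet:
print(began_construction(200_000_000, 12_000_000, False))  # True
# The same project with only $8M spent and no physical work does not:
print(began_construction(200_000_000, 8_000_000, False))   # False
```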
As my colleague Matthew Zeitlin previously reported, oftentimes something as simple as placing an order for certain pieces of equipment, like transformers or solar trackers, will check the box. Still, companies can’t just buy a bunch of equipment to qualify for the tax credits and then sit on it indefinitely. Their projects must be up and operating within four years, or else they must demonstrate “continuous progress” each year to continue to qualify.
As such, under existing rules and Trump’s new law, wind and solar developers would have 12 months to claim eligibility for the investment or production tax credit, and then at least four years to build the project and connect it to the grid. While a year is a much shorter runway than the open-ended extension to the tax credits granted by the Inflation Reduction Act, it’s a much better deal than the House’s original version of the OBBBA, which would have required projects to start construction within two months and be operating by the end of 2028 to qualify.
Or so it seemed.
The tax credits became a key bargaining chip during the final negotiations on the bill. Senator Lisa Murkowski of Alaska fought to retain the 12-month runway for wind and solar, while members of the House Freedom Caucus sought to kill it. Ultimately, the latter group agreed to vote yes after winning assurances from the president that he would “deal” with the subsidies later.
Last week, as all of this was unfolding, I started to hear rumors that the Treasury guidance regarding “beginning of construction” could be a key tool at the president’s disposal to make good on his promise. Industry groups had urged Congress to codify the existing guidance in the bill, but it was ultimately left out.
When I reached out to David Burton, a partner at Norton Rose Fulbright who specializes in energy tax credits, on Thursday, he was already contemplating Trump’s options to exploit that omission.
Burton told me that Trump’s Treasury department could redefine “beginning of construction” in a number of ways, such as by removing the 5% spending safe harbor or requiring companies to get certain permits in order to demonstrate “significant” physical work. It could also shorten the four-year grace period to bring a project to completion.
But Burton was skeptical that the Treasury Department had the staff or expertise to do the work of rewriting the guidance, let alone that Trump would make this a priority. “Does Treasury really want to spend the next couple of months dealing with this?” he said. “Or would it rather deal with implementing bonus depreciation and other taxpayer-favorable rules in the One Big Beautiful Bill instead of being stuck on this tangent, which will be quite a heavy lift and take some time?”
Just days after signing the bill into law, Trump chose the tangent, directing the Treasury to produce new guidance within 45 days. “It’s going to need every one of those days to come out with thoughtful guidance that can actually be applied by taxpayers,” Burton told me when I called him back on Monday night.
The executive order cites “energy dominance, national security, economic growth, and the fiscal health of the Nation” as reasons to end subsidies for wind and solar. The climate advocacy group Evergreen Action said it would help none of these objectives. “Trump is once again abusing his power in a blatant end-run around Congress — and even his own party,” Lena Moffit, the group’s executive director said in a statement. “He’s directing the government to sabotage the very industries that are lowering utility bills, creating jobs, and securing our energy independence.”
Industry groups were still assessing the implications of the executive order, and the ones I reached out to declined to comment for this story. “Now we’re circling the wagons back up to dig into the details,” one industry representative told me, adding that it was “shocking” that Trump would “seemingly double cross Senate leadership and Thune in particular.”
As everyone waits to see what Treasury officials come up with, developers will be racing to “start construction” as defined by the current rules, Burton said. It would be “quite unusual” if the new guidance were retroactive, he added. Although given Trump’s history, he said, “I guess anything is possible.”
“I believe the tariff on copper — we’re going to make it 50%.”
President Trump announced Tuesday during a cabinet meeting that he plans to impose a hefty tax on U.S. copper imports.
“I believe the tariff on copper — we’re going to make it 50%,” he told reporters.
Copper traders and producers have anticipated tariffs on copper since Trump announced in February that his administration would investigate the national security implications of copper imports, calling the metal an “essential material for national security, economic strength, and industrial resilience.”
Trump has already imposed tariffs on other strategically and economically important metals, such as steel and aluminum. The process for imposing these tariffs under Section 232 of the Trade Expansion Act of 1962 involves a finding by the secretary of commerce that the product being tariffed is essential to national security, and thus that the United States should be able to supply it on its own.
Copper has been referred to as the “metal of electrification” because of its centrality to a broad array of electrical technologies, including transmission lines, batteries, and electric motors. Electric vehicles contain around 180 pounds of copper on average. “Copper, scrap copper, and copper’s derivative products play a vital role in defense applications, infrastructure, and emerging technologies, including clean energy, electric vehicles, and advanced electronics,” the White House said in February.
Copper prices had risen around 25% this year through Monday. Prices for copper futures jumped by as much as 17% after the tariff announcement and are currently trading at around $5.50 a pound.
The tariffs, when implemented, could provide renewed impetus to expand copper mining in the United States. But tariffs can happen in a matter of months. A copper mine takes years to open — and that’s if investors decide to put the money toward the project in the first place. Congress took a swipe at the electric vehicle market in the U.S. last week, extinguishing subsidies for both consumers and manufacturers as part of the One Big Beautiful Bill Act. That will undoubtedly shrink domestic demand for EV inputs like copper, which could make investors nervous about sinking years and dollars into new or expanded copper mines.
Even if the Trump administration succeeds in its efforts to accelerate permitting for and construction of new copper mines, the copper will need to be smelted and refined before it can be used, and China dominates the copper smelting and refining industry.
The U.S. produced just over 1.1 million tons of copper in 2023, with 850,000 tons being mined from ore and the balance recycled from scrap, according to United States Geological Survey data. It imported almost 900,000 tons.
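Using the USGS figures cited above, the supply picture works out roughly as follows. This is back-of-the-envelope arithmetic that ignores exports and stock changes, not a precise USGS accounting:

```python
# Rough arithmetic on the USGS 2023 copper figures cited in the text
# (all quantities in tons; exports and stock changes ignored).
total_production = 1_100_000
mined_from_ore = 850_000
recycled_from_scrap = total_production - mined_from_ore
imports = 900_000

# Crude apparent supply: domestic production plus imports.
apparent_supply = total_production + imports

print(recycled_from_scrap)                      # 250000 tons from scrap
print(round(imports / apparent_supply, 2))      # imports are ~45% of supply
```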
With the prospect of tariffs driving up prices for domestically mined ore, the immediate beneficiaries are those who already have mines. Shares in Freeport-McMoRan, which operates seven copper mines in Arizona and New Mexico, were up over 4.5% in afternoon trading Tuesday.
Predicting the location and severity of thunderstorms is at the cutting edge of weather science. Now funding for that science is at risk.
Tropical Storm Barry was, by all measures, a boring storm. “Blink and you missed it,” as a piece in Yale Climate Connections put it after Barry formed, then dissipated over 24 hours in late June, having never sustained wind speeds higher than 45 miles per hour. The tropical storm’s main impact, it seemed at the time, was “heavy rains of three to six inches, which likely caused minor flooding” in Tampico, Mexico, where it made landfall.
But a few days later, U.S. meteorologists started to get concerned. The remnants of Barry had swirled northward, pooling wet Gulf air over southern and central Texas and elevating the atmospheric moisture to reach or exceed record levels for July. “Like a waterlogged sponge perched precariously overhead, all the atmosphere needed was a catalyst to wring out the extreme levels of water vapor,” meteorologist Mike Lowry wrote.
More than 100 people — many of them children — ultimately died as extreme rainfall caused the Guadalupe River to rise 34 feet in 90 minutes. But the tragedy was “not really a failure of meteorology,” UCLA and UC Agriculture and Natural Resources climate scientist Daniel Swain said during a public “Office Hours” review of the disaster on Monday. The National Weather Service in San Antonio and Austin first warned the public of the potential for heavy rain on Sunday, June 29 — five days before the floods crested. The agency followed that with a flood watch warning for the Kerrville area on Thursday, July 3, then issued an additional 21 warnings, culminating just after 1 a.m. on Friday, July 4, with a wireless emergency alert sent to the phones of residents, campers, and RVers along the Guadalupe River.
The NWS alerts were both timely and accurate, and even correctly predicted an expected rainfall rate of 2 to 3 inches per hour. If it were possible to consider the science alone, the official response might have been deemed a success.
Of all the storm systems, convective storms — like thunderstorms, hail, tornadoes, and extreme rainstorms — are some of the most difficult to forecast. “We don’t have very good observations of some of these fine-scale weather extremes,” Swain told me after office hours were over, in reference to severe meteorological events that are often relatively short-lived and occur in small geographic areas. “We only know a tornado occurred, for example, if people report it and the Weather Service meteorologists go out afterward and look to see if there’s a circular, radial damage pattern.” A hurricane, by contrast, spans hundreds of miles and is visible from space.
Global weather models, which predict conditions at a planetary scale, are relatively coarse in their spatial resolution and “did not do the best job with this event,” Swain said during his office hours. “They predicted some rain, locally heavy, but nothing anywhere near what transpired.” (And before you ask — artificial intelligence-powered weather models were among the worst at predicting the Texas floods.)
Over the past decade or so, however, due to the unique convective storm risks in the United States, the National Oceanic and Atmospheric Administration and other meteorological agencies have developed specialized high-resolution convection-resolving models to better represent and forecast extreme thunderstorms and rainstorms.
NOAA’s cutting-edge specialized models “got this right,” Swain told me of the Texas storms. “Those were the models that alerted the local weather service and the NOAA Weather Prediction Center of the potential for an extreme rain event. That is why the flash flood watches were issued so early, and why there was so much advance knowledge.”
Writing for The Eyewall, meteorologist Matt Lanza concurred with Swain’s assessment: “By Thursday morning, the [high resolution] model showed as much as 10 to 13 inches in parts of Texas,” he wrote. “By Thursday evening, that was as much as 20 inches. So the [high resolution] model upped the ante all day.”
“Most models initialized at 00Z last night indicated the potential for localized excessive rainfall over portions of south-central Texas that led to the tragic and deadly flash flood early this morning.”
— Jeff Frame (@VORTEXJeff) July 4, 2025
To be any more accurate than they ultimately were on the Texas floods, meteorologists would have needed the ability to predict the precise location and volume of rainfall of an individual thunderstorm cell. Although models can provide a fairly accurate picture of the general area where a storm will form, the best current science still can’t achieve that level of precision more than a few hours in advance of a given event.
Climate change itself is another factor making storm behavior even less predictable. “If it weren’t so hot outside, if it wasn’t so humid, if the atmosphere wasn’t holding all that water, then [the system] would have rained and marched along as the storm drifted,” Claudia Benitez-Nelson, an expert on flooding at the University of South Carolina, told me. Instead, slow and low prevailing winds caused the system to stall, pinning it over the same worst-case-scenario location at the confluence of the Hill Country rivers for hours and challenging the limits of science and forecasting.
Though it’s tempting to blame the Trump administration’s cuts to the staff and budget of the NWS for the tragedy, the agency actually had more forecasters on hand than usual in its local field office ahead of the storm, in anticipation of potential disaster. Any budget cuts to the NWS, while potentially disastrous, would not go into effect until fiscal year 2026.
The proposed 2026 budget for NOAA, however, would zero out the upkeep of the models, as well as shutter the National Severe Storms Laboratory in Norman, Oklahoma, which studies thunderstorms and rainstorms like the one that hit Texas. And because the high-resolution models are proprietary and U.S.-specific, no one is coming to our rescue if they’re eliminated or degraded by the cuts.
The impending cuts are alarming to the scientists charged with maintaining and adjusting the models to ensure maximum accuracy, too. Computationally, it’s no small task to keep them running 24 hours a day, every day of the year. A weather model doesn’t simply run on its own indefinitely, but rather requires large data transfers as well as intakes of new conditions from its network of observation stations to remain reliable. Although the NOAA high-resolution models have been in use for about a decade, yearly updates keep the programs on the cutting edge of weather science; without constant tweaks, the models’ accuracy slowly degrades as the atmosphere changes and information and technologies become outdated.
It’s difficult to imagine that the Texas floods could have been more catastrophic, and yet the NOAA models and NWS warnings and alerts undoubtedly saved lives. Still, local Texas authorities have attempted to pass the blame, claiming they weren’t adequately informed of the dangers by forecasters. The picture will become clearer as reporting continues to probe why the flood-prone region did not have warning sirens, why camp counselors did not have their phones to receive overnight NWS alerts, why there were not more flood gauges on the rivers, and what, if anything, local officials could have done to save more people. Yet given what is scientifically possible at this stage of modeling, “This was not a forecast failure relative to scientific or weather prediction best practices. That much is clear,” Swain said.
As the climate warms and extreme rainfall events increase as a result, however, it will become ever more crucial to have access to cutting-edge weather models. “What I want to bring attention to is that this is not a one-off,” Benitez-Nelson, the flood expert at the University of South Carolina, told me. “There’s this temptation to say, ‘Oh, it’s a 100-year storm, it’s a 1,000-year storm.’”
“No,” she went on. “This is a growing pattern.”