Inside Climeworks’ big experiment to wrest carbon from the air

In the spring of 2021, the world’s leading authority on energy published a “roadmap” for preventing the most catastrophic climate change scenarios. One of its conclusions was particularly daunting. Getting energy-related emissions down to net zero by 2050, the International Energy Agency said, would require “huge leaps in innovation.”
Existing technologies would be mostly sufficient to carry us down the carbon curve over the next decade. But after that, nearly half of the remaining work would have to come from solutions that, for all intents and purposes, did not exist yet. Some would only require retooling existing industries, like developing electric long-haul trucks and carbon-free steel. But others would have to be built from almost nothing and brought to market in record time.
What will it take to rapidly develop new solutions, especially those that involve costly physical infrastructure and have essentially no commercial value today?
That’s the challenge facing Climeworks, the Swiss company developing machines to wrest carbon dioxide molecules directly from the air. In September 2021, a few months after the IEA’s landmark report came out, Climeworks switched on its first commercial-scale “direct air capture” facility, a feat of engineering it dubbed “Orca,” in Iceland.
The technology behind Orca is one of the top candidates to clean up the carbon already blanketing the Earth. It could also be used to balance out any stubborn, residual sources of greenhouse gases in the future, such as from agriculture or air travel, providing the “net” in net-zero. If we manage to scale up technologies like Orca to the point where we remove more carbon than we release, we could even begin cooling the planet.
As the largest carbon removal plant operating in the world, Orca is either trivial or one of the most important climate projects built in the last decade, depending on how you look at it. It was designed to capture approximately 4,000 metric tons of carbon from the air per year, which, as one climate scientist, David Ho, put it, is the equivalent of rolling back the clock on just 3 seconds of global emissions. But the learnings gleaned from Orca could surpass any quantitative assessment of its impact. How well do these “direct air capture” machines work in the real world? How much does it really cost to run them? And can they get better?
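For the record, that three-second figure checks out with simple arithmetic. The sketch below assumes global CO2 emissions of roughly 40 billion metric tons per year, a ballpark figure rather than one cited in this article.

```python
# Back-of-the-envelope check on the "3 seconds of global emissions" comparison.
# Assumption (not from the article): global CO2 emissions of roughly
# 40 billion metric tons per year.
GLOBAL_EMISSIONS_TONS_PER_YEAR = 40e9
SECONDS_PER_YEAR = 365.25 * 24 * 3600

tons_emitted_per_second = GLOBAL_EMISSIONS_TONS_PER_YEAR / SECONDS_PER_YEAR  # ~1,270 t/s
orca_design_capacity_tons = 4_000

print(f"Orca's annual capture is about {orca_design_capacity_tons / tons_emitted_per_second:.1f} "
      "seconds of global emissions")  # roughly 3 seconds
```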
The company — and its funders — are betting they can. Climeworks has made major deals with banks, insurers, and other companies trying to go green to eventually remove carbon from the atmosphere on their behalf. Last year, the company raised $650 million in equity that will “unlock the next phase of its growth,” scaling the technology “up to multi-million-ton capacity … as carbon removal becomes a trillion-dollar market.” And just last month, the U.S. Department of Energy selected Climeworks, along with another carbon removal company, Heirloom, to receive up to $600 million to build a direct air capture “hub” in Louisiana, with the goal of removing one million tons of carbon annually.
Two years after powering up Orca, Climeworks has yet to reveal how effective the technology has proven to be. But in extensive interviews, top executives painted a picture of innovation in progress.
Chief marketing officer Julie Gosalvez told me that Orca is small and climatically insignificant on purpose. The goal is not to make a dent in climate change — yet — but to maximize learning at minimal cost. “You want to learn when you're small, right?” Gosalvez said. “It’s really de-risking the technology. It’s not like Tesla doing EVs when we have been building cars for 70 years and the margin of learning and risk is much smaller. It’s completely new.”
From the ground, Orca looks sort of like a warehouse or a server farm with a massive air conditioning system out back. The plant consists of eight shipping container-sized boxes arranged in a U-shape around a central building, each one equipped with an array of fans. When the plant is running, which is more or less all the time, the fans suck air into the containers, where it makes contact with a porous filter known as a “sorbent” that attracts CO2 molecules.

When the filters become totally saturated with CO2, the vents on the containers snap shut, and the containers are heated to more than 212 degrees Fahrenheit. This releases the CO2, which is then delivered through a pipe to a secondary process called “liquefaction,” where it is compressed into a liquid. Finally, the liquid CO2 is piped into basalt rock formations underground, where it slowly mineralizes into stone. The process requires a little bit of electricity and a lot of heat, all of which comes from a carbon-free source — a geothermal power plant nearby.
A day at Orca begins with the morning huddle. The total number on the team is often in flux, but it typically has a staff of about 15 people, Climeworks’ head of operations Benjamin Keusch told me. Ten work in a virtual control room 1,600 miles away in Zurich, taking turns monitoring the plant on a laptop and managing its operations remotely. The remainder work on site, taking orders from the control room, repairing equipment, and helping to run tests.
During the huddle, the team discusses any maintenance that needs to be done. If there’s an issue, the control room will shut down part of the plant while the on-site workers investigate. So far, they’ve dealt with snow piling up around the plant that had to be shoveled, broken and corroded equipment that had to be replaced, and sediment build-up that had to be removed.

The air is more humid and sulfurous at the site in Iceland than in Switzerland, where Climeworks had built an earlier, smaller-scale model, so the team is also learning how to optimize the technology for different weather. Amid all this troubleshooting, there are additional trade-offs to explore and lessons to learn. If a part keeps breaking, does it make more sense to plan to replace it periodically, or to redesign it? How do supply chain constraints play into that calculus?
The company is also performing tests regularly, said Keusch. For example, the team has tested new component designs at Orca that it now plans to incorporate into Climeworks’ next project from the start. (Last year, the company began construction on “Mammoth,” a new plant that will be nine times larger than Orca, on a neighboring site.) At a summit that Climeworks hosted in June, co-founder Jan Wurzbacher said the company believes that over the next decade, it will be able to shrink its direct air capture system to half its current size and cut its energy consumption in half.
“In innovation lingo, the jargon is we haven’t converged on a dominant design,” Gregory Nemet, a professor at the University of Wisconsin who studies technological development, told me. For example, in the wind industry, turbines with three blades, upwind design, and a horizontal axis, are now standard. “There were lots of other experiments before that convergence happened in the late 1980s,” he said. “So that’s kind of where we are with direct air capture. There’s lots of different ways that are being tried right now, even within a company like Climeworks."
Although Climeworks was willing to tell me about the goings-on at Orca over the last two years, the company declined to share how much carbon it has captured or how much energy, on average, the process has used.
Gosalvez told me that the plant’s performance has improved month after month, and that more detailed information was shared with investors. But she was hesitant to make the data public, concerned that it could be misinterpreted, because tests and maintenance at Orca require the plant to shut down regularly.
“Expectations are not in line with the stage of the technology development we are at. People expect this to be turnkey,” she said. “What does success look like? Is it the absolute numbers, or the learnings and ability to scale?”
Danny Cullenward, a climate economist and consultant who has studied the integrity of various carbon removal methods, did not find the company’s reluctance to share data especially concerning. “For these earliest demonstration facilities, you might expect people to hit roadblocks or to have to shut the plant down for a couple of weeks, or do all sorts of things that are going to make it hard to transparently report the efficiency of your process, the number of tons you’re getting at different times,” he told me.
But he acknowledged that there was an inherent tension to the stance, because ultimately, Climeworks’ business model — and the technology’s effectiveness as a climate solution — depend entirely on the ability to make precise, transparent, carbon accounting claims.
Nemet was also of two minds about it. Carbon removal needs to go from almost nothing today to something like a billion tons of carbon removed per year in just three decades, he said. That’s a pace on the upper end of what’s been observed historically with other technologies, like solar panels. So it’s important to understand whether Climeworks’ tech has any chance of meeting the moment. Especially since the company faces competition from a number of others developing direct air capture technologies, like Heirloom and Occidental Petroleum, that may be able to do it cheaper, or faster.
However, Nemet was also sympathetic to the position the company was in. “It’s relatively incremental how these technologies develop,” he said. “I have heard this criticism that this is not a real technology because we haven’t built it at scale, so we shouldn’t depend on it. Or that one of these plants not doing the removal that it said it would do shows that it doesn’t work and that we therefore shouldn’t plan on having it available. To me, that’s a pretty high bar to cross with a climate mitigation technology that could be really useful.”
More data on Orca is coming. Climeworks recently announced that it will work with the company Puro.Earth to certify every ton of CO2 that it removes from the atmosphere and stores underground, in order to sell carbon credits based on this service. The credits will be listed on a public registry.
But even if Orca eventually runs at full capacity, Climeworks will never be able to sell 4,000 carbon credits per year from the plant. Gosalvez clarified that 4,000 tons is the amount of carbon the plant is designed to suck up annually, but the more important number is the amount of “net” carbon removal it can produce. “That might be the first bit of education you need to get out there,” she said, “because it really invites everyone to look at what are the key drivers to be paid attention to.”
She walked me through a chart that illustrated the various ways in which some of Orca’s potential to remove carbon can be lost. First, there’s the question of availability — how often does the plant have to shut down due to maintenance or power shortages? Climeworks aims to limit those losses to 10%. Next, there’s the recovery stage, where the CO2 is separated from the sorbent, purified, and liquified. Gosalvez said it’s basically impossible to do this without losing some CO2. At best, the company hopes to limit that to 5%.
Finally, the company also takes into account “gray emissions,” or the carbon footprint associated with the business, like the materials, the construction, and the eventual decommissioning of the plant and restoration of the site to its former state. If one of Climeworks’ plants ever uses energy from fossil fuels (which the company has said it does not plan to do), it would incorporate any emissions from that energy. Climeworks aims to limit gray emissions to 15%.
In the end, Orca’s net annual carbon removal capacity — the amount Climeworks can sell to customers — is really closer to 3,000 tons. Gosalvez hopes other carbon removal companies adopt the same approach. “Ultimately what counts is your net impact on the planet and the atmosphere,” she said.
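Put together, those loss budgets explain the gap between 4,000 tons and roughly 3,000. Here is a minimal sketch of the arithmetic, treating the three loss categories as multiplying together, which is an assumption; the article does not spell out exactly how Climeworks combines them.

```python
# Rough net-removal arithmetic using the loss budgets described above.
# Treating the losses as multiplicative is an assumption; Climeworks' exact
# accounting method is not spelled out in the article.
GROSS_DESIGN_CAPACITY_TONS = 4_000  # tons of CO2 Orca is designed to capture per year

availability_loss = 0.10  # downtime for maintenance and power shortages
recovery_loss = 0.05      # CO2 lost during desorption, purification, and liquefaction
gray_emissions = 0.15     # lifecycle footprint: materials, construction, decommissioning

net_removal = (GROSS_DESIGN_CAPACITY_TONS
               * (1 - availability_loss)
               * (1 - recovery_loss)
               * (1 - gray_emissions))
print(f"Net sellable removal is about {net_removal:,.0f} tons per year")  # ~2,900, i.e. "closer to 3,000"
```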
Despite being a first-of-its-kind demonstration plant — and an active research site — Orca is also a commercial project. In fact, Gosalvez told me that Orca’s entire estimated capacity for carbon removal, over the 12 years that the plant is expected to run, sold out shortly after it began operating. The company is now selling carbon removal services from its yet-to-be-built Mammoth plant.
In January, Climeworks announced that Orca had officially fulfilled orders from Microsoft, Stripe, and Shopify. Those companies have collectively asked Climeworks to remove more than 16,000 tons of carbon, according to the deal-tracking site cdr.fyi, but it’s unclear what portion of that was delivered. The achievement was verified by a third party, but the total amount removed was not made public.
Climeworks has also not disclosed how much it has charged companies per ton of carbon, a metric that will eventually be an important indicator of whether the technology can scale to a climate-relevant level. But it has provided rough estimates of how much it expects each ton of carbon removal to cost as the technology scales — expectations which seem to have shifted after two years of operating Orca.
In 2021, Climeworks co-founder Jan Wurzbacher said the company aimed to get the cost down to $200 to $300 per ton removed by the end of the decade, with steeper declines in subsequent years. But at the summit in June, he presented a new cost curve chart showing that the price was currently more than $1,000, and that by the end of the decade, it would fall to somewhere between $400 and $700. The range was so large because the cost of labor, energy, and storing the CO2 varied widely by location, he said. The company aims to get the price down to $100 to $300 per ton by 2050, when the technology has significantly matured.
Critics of carbon removal often point to the vast sums flowing into direct air capture technologies like Orca, which are unlikely to make a meaningful difference in climate change for decades to come. During a time when worsening disasters make action feel increasingly urgent, many are skeptical of the value of investing limited funds and political energy into these future solutions. Carbon removal won’t make much of a difference if the world doesn’t deploy the tools already available to reduce emissions as rapidly as possible — and there’s certainly not enough money or effort going into that yet.
But we’ll never have the option to fully halt climate change, let alone begin reversing it, if we don’t develop solutions like Orca. In September, the International Energy Agency released an update to its seminal net-zero report. The new analysis said that in the last two years, the world had, in fact, made significant progress on innovation. Now, some 65% of emission reductions after 2030 could be accounted for with technologies that had reached market uptake. It even included a line about the launch of Orca, noting that Climeworks’ direct air capture technology had moved from the prototype to the demonstration stage.
But it cautioned that DAC needs “to be scaled up dramatically to play the role envisaged” in the net-zero scenario. Climeworks’ experience with Orca offers a glimpse of how much work is yet to be done.
If the AI boom turns out to be a bubble, billions of dollars of energy assets will be on the line.
The data center investment boom has already transformed the American economy. It is now poised to transform the American energy system.
Hyperscalers — including tech giants such as Microsoft and Meta, as well as leaders in artificial intelligence like OpenAI and CoreWeave — are investing eye-watering amounts of capital into developing new energy resources to feed their power-hungry data infrastructure. Those data centers are already straining the existing energy grid, prompting widespread political anxiety over an energy supply crisis and a ratepayer affordability shock. Nothing in recent memory has thrown policymakers’ decades-long underinvestment in the health of our energy grid into such stark relief. The commercial potential of next-generation energy technologies such as advanced nuclear, batteries, and grid-enhancing applications now hinges on the speed and scale of the AI buildout.
But what happens if the AI boom buffers and data center investment collapses? It is not idle speculation to say that the AI boom rests on unstable financial foundations. Worse, however, is the fact that as of this year, the tech sector’s breakneck investment into data centers is the only tailwind to U.S. economic growth. If there is a market correction, there is no other growth sector that could pick up the slack.
Not only would a sudden reversal in investor sentiment make stranded assets of the data centers themselves, which would lose value as their lease revenue disappears, it would also threaten to strand all the energy projects and efficiency innovations that data center demand might have called forth.
If the AI boom does not deliver, we need a backup plan for energy policy.
An analysis of the capital structure of the AI boom suggests that policymakers should be more concerned about the financial fundamentals of data centers and their tenants — the tech companies that are buoying the economy. My recent report for the Center for Public Enterprise, Bubble or Nothing, maps out how the various market actors in the AI sector interact, connecting the market structure of the AI inference sector to the economics of Nvidia’s graphics processing units, the chips known as GPUs that power AI software, to the data center real estate debt market. Spelling out the core financial relationships illuminates where the vulnerabilities lie.

First and foremost: The business model remains unprofitable. The leading AI companies ― mostly the leading tech companies, as well as some AI-specific firms such as OpenAI and Anthropic ― are all competing with each other to dominate the market for AI inference services such as large language models. None of them is returning a profit on its investments. Back-of-the-envelope math suggests that Meta, Google, Microsoft, and Amazon invested over $560 billion into AI technology and data centers through 2024 and 2025, and have reported revenues of just $35 billion.
To be sure, many new technology companies remain unprofitable for years ― including now-ubiquitous firms like Uber and Amazon. Profits are not the AI sector’s immediate goal; the sector’s high valuations reflect investors’ assumptions about future earnings potential. But while the losses pile up, the market leaders are all vying to maximize the market share of their virtually identical services ― a prisoner’s dilemma of sorts that forces down prices even as the cost of providing inference services continues to rise. Rising costs, suppressed revenues, and fuzzy measurements of real user demand are, when combined, a toxic cocktail and a reflection of the sector’s inherent uncertainty.
Second: AI companies have a capital investment problem. These are not pure software companies; to provide their inference services, AI companies must all invest in or find ways to access GPUs. In mature industries, capital assets have predictable valuations that their owners can borrow against and use as collateral to invest further in their businesses. Not here: The market value of a GPU is incredibly uncertain and, at least currently, remains suppressed due to the sector’s competitive market structure, the physical deterioration of GPUs at high utilization rates, the unclear trajectory of demand, and the value destruction that comes from Nvidia’s now-yearly release of new high-end GPU models.
The tech industry’s rush to invest in new GPUs means existing GPUs lose market value much faster. Some companies, particularly the vulnerable and debt-saddled “neocloud” companies that buy GPUs to rent their compute capacity to retail and hyperscaler consumers, are taking out tens of billions of dollars of loans to buy new GPUs backed by the value of their older GPU stock; the danger of this strategy is obvious. Others, including OpenAI and xAI, having realized that GPUs are not safe to hold on one’s balance sheet, are instead renting them from Oracle and Nvidia, respectively.
To paper over the valuation uncertainty of the GPUs they do own, all the hyperscalers have changed their accounting standards for GPU valuations over the past few years to minimize their annual reported depreciation expenses. Some financial analysts don’t buy it: Last year, Barclays analysts judged GPU depreciation as risky enough to merit marking down the earnings estimates of Google (in this case its parent company, Alphabet), Microsoft, and Meta as much as 10%, arguing that consensus modeling was severely underestimating the earnings write-offs required.
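To see why the useful-life assumption matters so much, consider a toy straight-line depreciation example. The dollar amounts and lifetimes below are hypothetical, not drawn from any company's filings; the point is simply that stretching the assumed life of a GPU fleet shrinks the annual expense that hits reported earnings.

```python
# Toy illustration of how a longer assumed useful life flatters reported earnings.
# All figures are hypothetical and not taken from any hyperscaler's filings.
def annual_straight_line_depreciation(asset_cost: float, useful_life_years: int) -> float:
    """Spread an asset's cost evenly over its assumed useful life."""
    return asset_cost / useful_life_years

gpu_fleet_cost = 30e9  # hypothetical $30 billion of GPUs on the balance sheet

for life_years in (3, 5, 6):
    expense = annual_straight_line_depreciation(gpu_fleet_cost, life_years)
    print(f"{life_years}-year useful life -> ${expense / 1e9:.1f}B annual depreciation expense")

# Moving from a 3-year to a 6-year assumption halves the annual charge and lifts
# reported operating income, even if the hardware loses resale value much faster.
```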
Under these market dynamics, the booming demand for high-end chips looks less like a reflection of healthy growth for the tech sector and more like a scramble for high-value collateral to maintain market position among a set of firms with limited product differentiation. If high demand projections for AI technologies come true, collateral ostensibly depreciates at a manageable pace as older GPUs retain their marketable value over their useful life — but otherwise, this combination of structurally compressed profits and rapidly depreciating collateral is evidence of a snake eating its own tail.
All of these hyperscalers are tenants within data centers. Their lack of cash flow or good collateral should have their landlords worried about “tenant churn,” given the risk that many data center tenants will have to undertake multiple cycles of expensive capital expenditure on GPUs and network infrastructure within a single lease term. Data center developers take out construction (or “mini-perm”) loans of four to six years and refinance them into longer-term permanent loans, which can then be packaged into asset-backed and commercial mortgage-backed securities to sell to a wider pool of institutional investors and banks. The threat of broken leases and tenant vacancies threatens the long-term solvency of the leading data center developers ― companies like Equinix and Digital Realty ― as well as the livelihoods of the construction contractors and electricians they hire to build their facilities and manage their energy resources.
Much ink has already been spilled on how the hyperscalers are “roundabouting” each other, or engaging in circular financing: They are making billions of dollars of long-term purchase commitments, equity investments, and project co-development agreements with one another. OpenAI, Oracle, CoreWeave, and Nvidia are at the center of this web. Nvidia has invested $100 billion in OpenAI, to be repaid over time through OpenAI’s lease of Nvidia GPUs. Oracle is spending $40 billion on Nvidia GPUs to power a data center it has leased for 15 years to support OpenAI, for which OpenAI is paying Oracle $300 billion over the next five years. OpenAI is paying CoreWeave over the next five years to rent its Nvidia GPUs; the contract is valued at $11.9 billion, and OpenAI has committed to spending at least $4 billion through April 2029. OpenAI already has a $350 million equity stake in CoreWeave. Nvidia has committed to buying CoreWeave’s unsold cloud computing capacity by 2032 for $6.3 billion, after it already took a 7% stake in CoreWeave when the latter went public. If you’re feeling dizzy, count yourself lucky: These deals represent only a fraction of the available examples of circular financing.
These companies are all betting on each other’s growth; their growth projections and purchase commitments are all dependent on their peers’ growth projections and purchase commitments. Optimistically, this roundabouting represents a kind of “risk mutualism,” which, at least for now, ends up supporting greater capital expenditures. Pessimistically, roundabouting is a way for these companies to pay each other for goods and services in any way except cash — shares, warrants, purchase commitments, token reservations, backstop commitments, and accounts receivable, but not U.S. dollars. The second any one of these companies decides it wants cash rather than a commitment is when the music stops. Chances are, that company needs cash to pay a commitment of its own, likely involving a lender.
Lenders are the final piece of the puzzle. Contrary to the notion that cash-rich hyperscalers can finance their own data center buildout, there has been a record volume of debt issuance this year from companies such as Oracle and CoreWeave, as well as private credit giants like Blue Owl and Apollo, which are lending into the boom. The debt may not go directly onto hyperscalers’ balance sheets, but their purchase commitments are the collateral against which data center developers, neocloud companies like CoreWeave, and private credit firms raise capital. While debt is not inherently something to shy away from ― it’s how infrastructure gets built ― it’s worth raising eyebrows at the role private credit firms are playing at the center of this revenue-free investment boom. They are exposed to GPU financing and to data center financing, although not to the GPU producers themselves. They have capped upside and unlimited downside. If they stop lending, the rest of the sector’s risks look a lot riskier.

A market correction starts when any one of the AI companies can’t scrounge up the cash to meet its liabilities and can no longer keep borrowing money to delay paying for its leases and its debts. A sudden stop in lending to any of these companies would be a big deal ― it would force AI companies to sell their assets, particularly GPUs, into a potentially adverse market in order to meet refinancing deadlines. A fire sale of GPUs hurts not just the long-term earnings potential of the AI companies themselves, but also producers such as Nvidia and AMD, since even they would be selling their GPUs into a soft market.
For the tech industry, the likely outcome of a market correction is consolidation. Any widespread defaults among AI-related businesses and special purpose vehicles will leave capital assets like GPUs and energy technologies like supercapacitors stranded, losing their market value in the absence of demand ― the perfect targets for a rollup. Indeed, it stands to reason that the tech giants’ dominance over the cloud and web services sectors, not to mention advertising, will allow them to continue leading the market. They can regain monopolistic control over the remaining consumer demand in the AI services sector; their access to more certain cash flows eases their leverage constraints over the longer term as the economy recovers.
A market correction, then, is hardly the end of the tech industry ― but it still leaves a lot of data center investments stranded. What does that mean for the energy buildout that data centers are directly and indirectly financing?
A market correction would likely compel vertically integrated utilities to cancel plans to develop new combined-cycle gas turbines and expensive clean firm resources such as nuclear energy. Developers on wholesale markets have it worse: It’s not clear how new and expensive firm resources compete if demand shrinks. Grid managers would have to call up more expensive units less frequently. Doing so would constrain the revenue-generating potential of those generators relative to the resources that can meet marginal load more cheaply — namely solar, storage, peaker gas, and demand-response systems. Combined-cycle gas turbines co-located with data centers might be stranded; at the very least, they wouldn’t be used very often. (Peaker gas plants, used to manage load fluctuation, might still get built over the medium term.) And the flight to quality and flexibility would consign coal power back to its own ash heaps. Ultimately, a market correction does not change the broader trend toward electrification.
A market correction that stabilizes the data center investment trajectory would make it easier for utilities to conduct integrated resource planning. But it would not necessarily simplify grid planners’ ability to plan their interconnection queues — phantom projects dropping out of the queue requires grid planners to redo all their studies. Regardless of the health of the investment boom, we still need to reform our grid interconnection processes.
The biggest risk is that ratepayers will be on the hook for assets that sit underutilized in the absence of tech companies’ large load requirements, especially those served by utilities that might be building power in advance of committed contracts with large load customers like data center developers. The energy assets they build might remain useful for grid stability and could still participate in capacity markets. But generation assets built close to data center sites to serve those sites cheaply might not be able to provision the broader energy grid cost-efficiently due to higher grid transport costs incurred when serving more distant sources of load.
These energy projects need not be albatrosses.
Many of these data centers being planned are in the process of securing permits and grid interconnection rights. Those interconnection rights are scarce and valuable; if a data center gets stranded, policymakers should consider purchasing those rights and incentivizing new businesses or manufacturing industries to build on that land and take advantage of those rights. Doing so would provide offtake for nearby energy assets and avoid displacing their costs onto other ratepayers. That being said, new users of that land may not be able to pay anywhere near as much as hyperscalers could for interconnection or for power. Policymakers seeking to capture value from stranded interconnection points must ensure that new projects pencil out at a lower price point.
Policymakers should also consider backstopping the development of critical and innovative energy projects and the firms contracted to build them. I mean this in the most expansive way possible: Policymakers should not just backstop the completion of the solar and storage assets built to serve new load, but also provide exigent purchase guarantees to the firms that are prototyping the flow batteries, supercapacitors, cooling systems, and uninterruptible power systems that data center developers are increasingly interested in. Without these interventions, a market correction would otherwise destroy the value of many of those projects and the earnings potential of their developers, to say nothing of arresting progress on incredibly promising and commercializable technologies.
Policymakers can capture long-term value for the taxpayer by making investments in these distressed projects and developers. This is already what the New York Power Authority has done by taking ownership and backstopping the development of over 7 gigawatts of energy projects ― most of which were at risk of being abandoned by a private sponsor.
The market might not immediately welcome risky bets like these. It is unclear, for instance, what industries could use the interconnection or energy provided to a stranded gigawatt-scale data center. Some of the more promising options ― take aluminum or green steel ― do not have a viable domestic market. Policy uncertainty, tariffs, and tax credit changes in the One Big Beautiful Bill Act have all suppressed the growth of clean manufacturing and metals refining industries like these. The rest of the economy is also deteriorating. The fact that the data center boom is threatened by, at its core, a lack of consumer demand and the resulting unstable investment pathways is itself an ironic miniature of the U.S. economy as a whole.
As analysts at Employ America put it, “The losses in a [tech sector] bust will simply be too large and swift to be neatly offset by an imminent and symmetric boom elsewhere. Even as housing and consumer durables ultimately did well following the bust of the 90s tech boom, there was a one- to two-year lag, as it took time for long-term rates to fall and investors to shift their focus.” This is the issue with having only one growth sector in the economy. And without a more holistic industrial policy, we cannot spur any others.
Questions like these ― questions about what comes next ― suggest that the messy details of data center project finance should not be the sole purview of investors. After all, our exposure to the sector only grows more concentrated by the day. More precisely mapping out how capital flows through the sector should help financial policymakers and industrial policy thinkers understand the risks of a market correction. Political leaders should be prepared to tackle the downside distributional challenges raised by the instability of this data center boom ― challenges to consumer wealth, public budgets, and our energy system.
This sparkling sector is no replacement for industrial policy and macroeconomic investment conditions that create broad-based sources of demand growth and prosperity. But in their absence, policymakers can still treat the challenge of a market correction as an opportunity to think ahead about the nation’s industrial future.
With more electric heating in the Northeast come greater strains on the grid.
The electric grid is built for heat. The days when the system is under the most stress are typically humid summer evenings, when air conditioning is still going full blast, appliances are being turned on as commuters return home, and solar generation is fading, stretching the generation and distribution grid to its limits.
But as home heating and transportation go increasingly electric, more of the country — even some of the chilliest areas — may start to struggle with demand that peaks in the winter.
While summer demand peaks are challenging, there’s at least a vision for how to deal with them without generating excessive greenhouse gas emissions — namely battery storage, which essentially holds excess solar power generated in the afternoon in reserve for the evening. In states with lots of renewables on the grid already, like California and Texas, storage has been helping smooth out and avoid reliability issues on peak demand days.
The winter challenge is that you can have long periods of cold weather and little sun, stressing every part of the grid. The natural gas production and distribution systems can struggle in the cold with wellheads freezing up and mechanical failure at processing facilities, just as demand for home heating soars, whether provided by piped gas or electricity generated from gas-fired power plants.
In its recent annual seasonal reliability assessment, the North American Electric Reliability Corporation, a standard-setting body for grid operators, found that “much of North America is again at an elevated risk of having insufficient energy supplies” should it encounter “extreme operating conditions,” i.e. “any prolonged, wide-area cold snaps.”
NERC cited growing electricity demand and the difficulty operating generators in the winter, especially those relying on natural gas. In 2021, Winter Storm Uri effectively shut down Texas’ grid for several days as generation and distribution of natural gas literally froze up while demand for electric heating soared. Millions of Texans were left exposed to extreme low temperatures, and at least 246 died as a result.
Some parts of the country already experience winter peaks in energy demand, especially places like North Carolina and Oregon, which “have winters that are chilly enough to require some heating, but not so cold that electric heating is rare,” in the words of North Carolina State University professor Jeremiah Johnson. "Not too many Mainers or Michiganders heat their homes with electricity,” he said.
But that might not be true for long.
New England may be cold and dark in the winter, but it’s liberal all year round. That means the region’s constituent states have adopted aggressive climate change and decarbonization goals that will stretch their available renewable resources, especially during the coldest days, weeks, and months.
The region’s existing energy system already struggles with winter. New England’s natural gas system is limited by insufficient pipeline capacity, so during particularly cold days, power plants end up burning oil as natural gas is diverted from generating electricity to heating homes.
New England’s Independent System Operator projects that winter demand will peak at just above 21 gigawatts this year — its all-time winter peak is 22.8 gigawatts, summer is 28.1 — which ISO-NE says the region is well-prepared for, with 31 gigawatts of available capacity. That includes energy from the Vineyard Wind offshore wind project, which is still facing activist opposition, as well as imported hydropower from Quebec.
But going forward, with Massachusetts aiming to reduce emissions 50% by 2030 (though state lawmakers are trying to undo that goal) and reach net-zero emissions by 2050 — and nearly the entire region envisioning at least 80% emissions reductions by 2050 — that winter peak is expected to soar. The non-carbon-emitting energy generation necessary to meet that demand, meanwhile, is still largely unbuilt.
By the mid-2030s, ISO-NE expects its winter peak to surpass its summer peak, with peak demand perhaps reaching as high as 57 gigawatts, more than double the system’s all-time peak load. The last few gigawatts of that load will be tricky — and expensive — to serve. ISO-NE estimates that each gigawatt from 51 to 57 would cost $1.5 billion for transmission expansion alone.
ISO-NE also found that “the battery fleet may be depleted quickly and then struggle to recharge during the winter months,” which is precisely when “batteries may be needed most to fill supply gaps during periods of high demand due to cold weather, as well as periods of low production from wind and solar resources.” Some 600 megawatts of battery storage capacity has come online in the last decade in ISO-NE, and there are state mandates for at least 7 more gigawatts between 2030 and 2033.
There will also be a “continued need for fuel-secure dispatchable resources” through 2050, ISO-NE has found — that is, something to fill the role that natural gas, oil, and even coal play on the coldest days and longest cold stretches of the year.
This could mean “vast quantities of seasonal storage,” like 100-hour batteries, or alternative fuels like synthetic natural gas (produced with a combination of direct air capture and electrolysis, all powered by carbon-free power), hydrogen, biodiesel, or renewable diesel. And this is all assuming a steady buildout of renewable power — including over a gigawatt per year of offshore wind capacity added through 2050 — that will be difficult if not impossible to accomplish given the current policy and administrative roadblocks.
While planning for the transmission and generation system of 2050 may be slightly fanciful, especially as the climate policy environment — and the literal environment — are changing rapidly, grid operators in cold regions are worried about the far nearer term.
From 2027 to 2032, ISO-NE analyses “indicate an increasing energy shortfall risk profile,” said ISO-NE planning official Stephen George in a 2024 presentation.
“What keeps me up at night is the winter of 2032,” Richard Dewey, chief executive of the neighboring New York Independent System Operator, said at a 2024 conference. “I don’t know what fills that gap in the year 2032.”
The future of the American electric grid is being determined in the docket of the Federal Energy Regulatory Commission.
The Trump administration last month tasked federal energy regulators with coming up with new rules that would allow large loads — i.e. data centers — to connect to the grid faster without ballooning electricity bills. The order has set off a flurry of reactions, as the major players in the electricity system — the data center developers, the power producers, the utilities — jockey to ensure that any new rules don’t impinge upon their business models. The initial public comment period closed last week, meaning FERC will now have to go through hundreds of comments from industry, government, and advocacy stakeholders hoping to help shape the rule before it’s released at the end of April.
They’ll have a lot to sift through. Opinions ranged from skeptical to cautiously supportive to fully supportive, with imperfect alignment among trade groups and individual companies.
The Utilities
When the DOE first asked FERC to get to work on a rule, several experts identified a possible conflict with utilities, namely the idea that data centers “should be responsible for 100% of the network upgrades that they are assigned through the interconnection studies.” Utilities typically like to put new transmission into their rate base, where they can earn a regulated rate of return on their investments that’s recouped from payments from all their customers. And lo, utilities were largely skeptical of the exercise.
The Edison Electric Institute, which represents investor-owned utilities, wrote in its comments to FERC that the new rule should require large load customers to pay for their share of the transmission system costs, i.e. not the full cost of network upgrades.
EEI claimed that these network costs can add up to the “tens to hundreds of millions of dollars” that should be assigned in a way that allows utilities “to earn a return of and on the entirety of the transmission network.”
In short, the utilities are defending something like the traditional model, where utilities connect all customers and spread out the costs of doing so among the entire customer base. That model has come under increasing stress thanks to the flood of data center interconnection requests, however. The high costs in some markets, like PJM, have also led some scholars and elected officials to seriously reconsider the nature of utility regulation. Still, that model has been largely good for the utilities — and they show no sign of wanting to give it up.
The Hyperscalers
The biggest technology companies, like Google, Microsoft, and Meta, and their trade groups want to make sure their ability to connect to the grid will not be impeded by new rules.
Ari Peskoe, an energy law professor at Harvard Law School, told me that existing processes for interconnection are likely working out well for the biggest data center developers and they may not be eager to rock the boat with a federal overhaul. “Presumably utilities are lining up to do deals with them because they have so much money,” Peskoe said.
In its letter to FERC, the DOE suggested that the commission could expedite interconnection of large loads “that agree to be curtailable.” That would entail users of a lot of electricity ramping down use while the grid was under stress, as well as co-locating projects with new sources of energy generation that could serve the grid as a whole. This approach has picked up steam among researchers and some data center developers, although with some cautions and caveats.
The Clean Energy Buyers Association, which represents many large technology companies, wrote in its comment that such flexibility should be “structured to enable innovation and competition through voluntary pathways rather than mandates,” echoing criticism of a proposal by the electricity market PJM Interconnection that could have forced large loads to be eligible for curtailment.
The Data Center Coalition, another big tech trade group representing many key players in the data center industry, emphasized throughout its comment that any reform to interconnection should still allow data centers to simply connect to the grid, without requiring or unduly favoring “hybrid” or co-location approaches.
“Timely, predictable, and nondiscriminatory access to interconnection service for stand-alone load is… critical… to the continued functioning of the market itself,” the Data Center Coalition wrote.
The hyperscalers themselves largely echoed this message, albeit with some differences in emphasis. They did not want any of their existing arrangements — which have allowed breakneck data center development — to be disrupted, nor did they want to be forced into operating their data centers in any particular fashion.
Microsoft wrote that it was in favor of “voluntarily curtailable loads,” but cautioned that “most data centers today have limited curtailment capability,” and worried about “operational reliability risks.” In short, don’t railroad us into something our data centers aren’t really set up to do.
OpenAI wrote a short comment, likely its first ever appearance in a FERC docket, where it argued for “an optional curtailable-load pathway” that would allow for faster interconnection, echoing comments it had made in a letter to the White House.
Meta, meanwhile, argued against any binding rule at all, saying instead that FERC “should consider adopting guidance, best practices, and, if appropriate, minimum standards for large load interconnection rather than promulgating a binding, detailed rule.” After all, it’s deploying data centers gigawatts at a time and has been able to reach deals with utilities to secure power.
The Generators
Perhaps the most full-throated support for the broadest version of the DOE’s proposal came from the generators. The Electrical Power Supply Association, an independent power producer trade group, wrote that more standardized, transparent “rules of the road” are needed to allow large loads like data centers “to interconnect to the transmission system efficiently and fairly, and to be able to do so quickly.” It also called on FERC to speed up its reviews of interconnection requests.
Constellation, which operates a 32-gigawatt generation fleet with a large nuclear business, said that it “agrees with the motivations and principles outlined in the [Department of Energy’s proposal] and the need for clear rules to allow the timely interconnection of large loads and their co-location with generators.” It also called for faster implementation of large load interconnection principles in PJM, the nation’s largest electricity market, “where data center development has been stymied by disagreements and uncertainty over who controls the timing and nature of large load interconnections, and over the terms of any ensuing transmission service.” Constellation specifically called out utilities for excessive influence over PJM rulemaking and procedures.
Constellation’s stance shouldn’t be surprising, Peskoe told me. From the perspective of independent power producers, enabling data centers to quickly and directly work with regional transmission organizations and generators to come online is “generally going to be better for the generators,” Peskoe said, while utilities “want to be the gatekeeper.”
In the end, the fight over data center interconnection may not have much to do with data centers — it’s just one battle after another between generators and utilities.