If it turns out to be a bubble, billions of dollars of energy assets will be on the line.

The data center investment boom has already transformed the American economy. It is now poised to transform the American energy system.
Hyperscalers — including tech giants such as Microsoft and Meta, as well as leaders in artificial intelligence like OpenAI and CoreWeave — are investing eye-watering amounts of capital into developing new energy resources to feed their power-hungry data infrastructure. Those data centers are already straining the existing energy grid, prompting widespread political anxiety over an energy supply crisis and a ratepayer affordability shock. Nothing in recent memory has thrown policymakers’ decades-long underinvestment in the health of our energy grid into such stark relief. The commercial potential of next-generation energy technologies such as advanced nuclear, batteries, and grid-enhancing applications now hinges on the speed and scale of the AI buildout.
But what happens if the AI boom buffers and data center investment collapses? It is not idle speculation to say that the AI boom rests on unstable financial foundations. Worse, however, is the fact that as of this year, the tech sector’s breakneck investment into data centers is the only tailwind to U.S. economic growth. If there is a market correction, there is no other growth sector that could pick up the slack.
Not only would a sudden reversal in investor sentiment make stranded assets of the data centers themselves, which will lose value as their lease revenue disappears, it also threatens to strand all the energy projects and efficiency innovations that data center demand might have called forth.
If the AI boom does not deliver, we need a backup plan for energy policy.
An analysis of the capital structure of the AI boom suggests that policymakers should be more concerned about the financial fundamentals of data centers and their tenants — the tech companies that are buoying the economy. My recent report for the Center for Public Enterprise, Bubble or Nothing, maps out how the various market actors in the AI sector interact, connecting the market structure of the AI inference sector to the economics of Nvidia’s graphics processing units, the chips known as GPUs that power AI software, to the data center real estate debt market. Spelling out the core financial relationships illuminates where the vulnerabilities lie.

First and foremost: The business model remains unprofitable. The leading AI companies ― mostly the leading tech companies, as well as some AI-specific firms such as OpenAI and Anthropic ― are all competing with each other to dominate the market for AI inference services such as large language models. None of them is returning a profit on its investments. Back-of-the-envelope math suggests that Meta, Google, Microsoft, and Amazon invested over $560 billion into AI technology and data centers through 2024 and 2025, and have reported revenues of just $35 billion.
To be sure, many new technology companies remain unprofitable for years ― including now-ubiquitous firms like Uber and Amazon. Profits are not the AI sector’s immediate goal; the sector’s high valuations reflect investors’ assumptions about future earnings potential. But while the losses pile up, the market leaders are all vying to maximize the market share of their virtually identical services ― a prisoner’s dilemma of sorts that forces down prices even as the cost of providing inference services continues to rise. Rising costs, suppressed revenues, and fuzzy measurements of real user demand are, when combined, a toxic cocktail and a reflection of the sector’s inherent uncertainty.
Second: AI companies have a capital investment problem. These are not pure software companies; to provide their inference services, AI companies must all invest in or find ways to access GPUs. In mature industries, capital assets have predictable valuations that their owners can borrow against and use as collateral to invest further in their businesses. Not here: The market value of a GPU is incredibly uncertain and, at least currently, remains suppressed due to the sector’s competitive market structure, the physical deterioration of GPUs at high utilization rates, the unclear trajectory of demand, and the value destruction that comes from Nvidia’s now-yearly release of new high-end GPU models.
The tech industry’s rush to invest in new GPUs means existing GPUs lose market value much faster. Some companies, particularly the vulnerable and debt-saddled “neocloud” companies that buy GPUs to rent their compute capacity to retail and hyperscaler consumers, are taking out tens of billions of dollars of loans to buy new GPUs backed by the value of their older GPU stock; the danger of this strategy is obvious. Others including OpenAI and xAI, having realized that GPUs are not safe to hold on one’s balance sheet, are instead renting them from Oracle and Nvidia, respectively.
To paper over the valuation uncertainty of the GPUs they do own, all the hyperscalers have changed their accounting standards for GPU valuations over the past few years to minimize their annual reported depreciation expenses. Some financial analysts don’t buy it: Last year, Barclays analysts judged GPU depreciation as risky enough to merit marking down the earnings estimates of Google (in this case its parent company, Alphabet), Microsoft, and Meta by as much as 10%, arguing that consensus modeling was severely underestimating the earnings write-offs required.
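The accounting lever here is simple to see with straight-line depreciation: stretching the assumed useful life of a GPU fleet cuts the expense reported each year. The figures below are hypothetical round numbers chosen for illustration, not drawn from any company's filings.

```python
# Illustrative only: straight-line depreciation of a GPU fleet under two
# assumed useful lives. Dollar figures are hypothetical.

def annual_depreciation(cost: float, salvage: float, useful_life_years: int) -> float:
    """Straight-line depreciation expense per year."""
    return (cost - salvage) / useful_life_years

fleet_cost = 10_000_000_000  # $10B of GPUs (hypothetical)
salvage = 0.0

# A 3-year life (closer to GPU refresh cycles) vs. a 6-year life
# (the longer schedules hyperscalers have moved toward).
short_life = annual_depreciation(fleet_cost, salvage, 3)
long_life = annual_depreciation(fleet_cost, salvage, 6)

print(f"3-year life: ${short_life / 1e9:.2f}B/year in depreciation expense")
print(f"6-year life: ${long_life / 1e9:.2f}B/year in depreciation expense")
print(f"Reported annual expense falls by ${(short_life - long_life) / 1e9:.2f}B")
```

Doubling the assumed life halves the annual charge, which flatters current earnings — but if the hardware actually loses value on the faster schedule, the write-offs are merely deferred, not avoided.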
Under these market dynamics, the booming demand for high-end chips looks less like a reflection of healthy growth for the tech sector and more like a scramble for high-value collateral to maintain market position among a set of firms with limited product differentiation. If high demand projections for AI technologies come true, collateral ostensibly depreciates at a manageable pace as older GPUs retain their marketable value over their useful life — but otherwise, this combination of structurally compressed profits and rapidly depreciating collateral is evidence of a snake eating its own tail.
All of these hyperscalers are tenants within data centers. Their lack of cash flow or good collateral should have their landlords worried about “tenant churn,” given the risk that many data center tenants will have to undertake multiple cycles of expensive capital expenditure on GPUs and network infrastructure within a single lease term. Data center developers take out construction (or “mini-perm”) loans of four to six years and refinance them into longer-term permanent loans, which can then be packaged into asset-backed and commercial mortgage-backed securities to sell to a wider pool of institutional investors and banks. The threat of broken leases and tenant vacancies threatens the long-term solvency of the leading data center developers ― companies like Equinix and Digital Realty ― as well as the livelihoods of the construction contractors and electricians they hire to build their facilities and manage their energy resources.
Much ink has already been spilled on how the hyperscalers are “roundabouting” each other, or engaging in circular financing: They are making billions of dollars of long-term purchase commitments, equity investments, and project co-development agreements with one another. OpenAI, Oracle, CoreWeave, and Nvidia are at the center of this web. Nvidia has invested $100 billion in OpenAI, to be repaid over time through OpenAI’s lease of Nvidia GPUs. Oracle is spending $40 billion on Nvidia GPUs to power a data center it has leased for 15 years to support OpenAI, for which OpenAI is paying Oracle $300 billion over the next five years. OpenAI is paying CoreWeave over the next five years to rent its Nvidia GPUs; the contract is valued at $11.9 billion, and OpenAI has committed to spending at least $4 billion through April 2029. OpenAI already has a $350 million equity stake in CoreWeave. Nvidia has committed to buying CoreWeave’s unsold cloud computing capacity by 2032 for $6.3 billion, after it already took a 7% stake in CoreWeave when the latter went public. If you’re feeling dizzy, count yourself lucky: These deals represent only a fraction of the available examples of circular financing.
These companies are all betting on each other’s growth; their growth projections and purchase commitments are all dependent on their peers’ growth projections and purchase commitments. Optimistically, this roundabouting represents a kind of “risk mutualism,” which, at least for now, ends up supporting greater capital expenditures. Pessimistically, roundabouting is a way for these companies to pay each other for goods and services in any way except cash — shares, warrants, purchase commitments, token reservations, backstop commitments, and accounts receivable, but not U.S. dollars. The second any one of these companies decides it wants cash rather than a commitment is when the music stops. Chances are, that company needs cash to pay a commitment of its own, likely involving a lender.
Lenders are the final piece of the puzzle. Contrary to the notion that cash-rich hyperscalers can finance their own data center buildout, there has been a record volume of debt issuance this year from companies such as Oracle and CoreWeave, as well as private credit giants like Blue Owl and Apollo, which are lending into the boom. The debt may not go directly onto hyperscalers’ balance sheets, but their purchase commitments are the collateral against which data center developers, neocloud companies like CoreWeave, and private credit firms raise capital. While debt is not inherently something to shy away from ― it’s how infrastructure gets built ― it’s worth raising eyebrows at the role private credit firms are playing at the center of this revenue-free investment boom. They are exposed to both GPU financing and data center financing, though not to the GPU producers themselves. They have capped upside and unlimited downside. If they stop lending, the rest of the sector’s risks look a lot riskier.

A market correction starts when any one of the AI companies can’t scrounge up the cash to meet its liabilities and can no longer keep borrowing money to delay paying for its leases and its debts. A sudden stop in lending to any of these companies would be a big deal ― it would force AI companies to sell their assets, particularly GPUs, into a potentially adverse market in order to meet refinancing deadlines. A fire sale of GPUs hurts not just the long-term earnings potential of the AI companies themselves, but also producers such as Nvidia and AMD, since even they would be selling their GPUs into a soft market.
For the tech industry, the likely outcome of a market correction is consolidation. Any widespread defaults among AI-related businesses and special purpose vehicles will leave capital assets like GPUs and energy technologies like supercapacitors stranded, losing their market value in the absence of demand ― the perfect targets for a rollup. Indeed, it stands to reason that the tech giants’ dominance over the cloud and web services sectors, not to mention advertising, will allow them to continue leading the market. They can regain monopolistic control over the remaining consumer demand in the AI services sector; their access to more certain cash flows eases their leverage constraints over the longer term as the economy recovers.
A market correction, then, is hardly the end of the tech industry ― but it still leaves a lot of data center investments stranded. What does that mean for the energy buildout that data centers are directly and indirectly financing?
A market correction would likely compel vertically integrated utilities to cancel plans to develop new combined-cycle gas turbines and expensive clean firm resources such as nuclear energy. Developers on wholesale markets have it worse: It’s not clear how new and expensive firm resources compete if demand shrinks. Grid managers would have to call up more expensive units less frequently. Doing so would constrain the revenue-generating potential of those generators relative to the resources that can meet marginal load more cheaply — namely solar, storage, peaker gas, and demand-response systems. Combined-cycle gas turbines co-located with data centers might be stranded; at the very least, they wouldn’t be used very often. (Peaker gas plants, used to manage load fluctuation, might still get built over the medium term.) And the flight to quality and flexibility would consign coal power back to its own ash heaps. Ultimately, a market correction does not change the broader trend toward electrification.
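The merit-order logic above can be sketched in a few lines: grid operators call on units in ascending order of operating cost, so when demand shrinks, the most expensive units are dispatched last and run least, compressing their revenue. The capacities and per-MWh costs below are hypothetical round numbers, and the model ignores transmission, ramping, and reliability constraints entirely.

```python
# Simplified merit-order dispatch sketch. Unit names, capacities, and
# costs are illustrative, not data from any real market.

def dispatch(demand_mw, units):
    """Dispatch units in ascending cost order until demand is met.
    Returns {unit name: MW dispatched}."""
    out = {}
    remaining = demand_mw
    for name, capacity_mw, cost_per_mwh in sorted(units, key=lambda u: u[2]):
        take = min(capacity_mw, remaining)
        out[name] = take
        remaining -= take
    return out

units = [
    # (name, capacity MW, illustrative operating cost $/MWh)
    ("solar+storage", 400, 5),
    ("combined-cycle gas", 500, 40),
    ("peaker gas", 200, 90),
]

for demand in (1000, 600):  # boom-era load vs. post-correction load
    result = dispatch(demand, units)
    summary = ", ".join(f"{name}: {mw:.0f} MW" for name, mw in result.items())
    print(f"demand {demand} MW -> {summary}")
```

In the lower-demand case, the cheap resources absorb nearly all the load while the gas units sit partly or wholly idle — the stranded-asset dynamic the paragraph describes.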
A market correction that stabilizes the data center investment trajectory would make it easier for utilities to conduct integrated resource planning. But it would not necessarily simplify grid planners’ management of their interconnection queues — when phantom projects drop out of the queue, planners have to redo all their studies. Regardless of the health of the investment boom, we still need to reform our grid interconnection processes.
The biggest risk is that ratepayers will be on the hook for assets that sit underutilized in the absence of tech companies’ large load requirements, especially those served by utilities that might be building power in advance of committed contracts with large load customers like data center developers. The energy assets they build might remain useful for grid stability and could still participate in capacity markets. But generation assets built close to data center sites to serve those sites cheaply might not be able to provision the broader energy grid cost-efficiently due to higher grid transport costs incurred when serving more distant sources of load.
These energy projects need not be albatrosses.
Many of the data centers being planned are in the process of securing permits and grid interconnection rights. Those interconnection rights are scarce and valuable; if a data center gets stranded, policymakers should consider purchasing those rights and incentivizing new businesses or manufacturing industries to build on that land and take advantage of those rights. Doing so would provide offtake for nearby energy assets and avoid displacing their costs onto other ratepayers. That being said, new users of that land may not be able to pay anywhere near as much as hyperscalers could for interconnection or for power. Policymakers seeking to capture value from stranded interconnection points must ensure that new projects pencil out at a lower price point.
Policymakers should also consider backstopping the development of critical and innovative energy projects and the firms contracted to build them. I mean this in the most expansive way possible: Policymakers should not just backstop the completion of the solar and storage assets built to serve new load, but also provide exigent purchase guarantees to the firms that are prototyping the flow batteries, supercapacitors, cooling systems, and uninterruptible power systems that data center developers are increasingly interested in. Without these interventions, a market correction would otherwise destroy the value of many of those projects and the earnings potential of their developers, to say nothing of arresting progress on incredibly promising and commercializable technologies.
Policymakers can capture long-term value for the taxpayer by making investments in these distressed projects and developers. This is already what the New York Power Authority has done by taking ownership and backstopping the development of over 7 gigawatts of energy projects ― most of which were at risk of being abandoned by a private sponsor.
The market might not immediately welcome risky bets like these. It is unclear, for instance, what industries could use the interconnection or energy provided to a stranded gigawatt-scale data center. Some of the more promising options ― take aluminum or green steel ― do not have a viable domestic market. Policy uncertainty, tariffs, and tax credit changes in the One Big Beautiful Bill Act have all suppressed the growth of clean manufacturing and metals refining industries like these. The rest of the economy is also deteriorating. The fact that the data center boom is threatened by, at its core, a lack of consumer demand and the resulting unstable investment pathways is itself an ironic miniature of the U.S. economy as a whole.
As analysts at Employ America put it, “The losses in a [tech sector] bust will simply be too large and swift to be neatly offset by an imminent and symmetric boom elsewhere. Even as housing and consumer durables ultimately did well following the bust of the 90s tech boom, there was a one- to two-year lag, as it took time for long-term rates to fall and investors to shift their focus.” This is the issue with having only one growth sector in the economy. And without a more holistic industrial policy, we cannot spur any others.
Questions like these ― questions about what comes next ― suggest that the messy details of data center project finance should not be the sole purview of investors. After all, our exposure to the sector only grows more concentrated by the day. More precisely mapping out how capital flows through the sector should help financial policymakers and industrial policy thinkers understand the risks of a market correction. Political leaders should be prepared to tackle the downside distributional challenges raised by the instability of this data center boom ― challenges to consumer wealth, public budgets, and our energy system.
This sparkling sector is no replacement for industrial policy and macroeconomic investment conditions that create broad-based sources of demand growth and prosperity. But in their absence, policymakers can still treat the challenge of a market correction as an opportunity to think ahead about the nation’s industrial future.
A conversation with anti-tech extremism researcher Mauro Lubrano on Sam Altman, Tesla protests, and 5G.
A spate of headline-grabbing attacks motivated by anxiety over artificial intelligence have rattled nerves across the U.S.
On Friday, I wrote a story about whether developers should be worried about violence after a shooting in Indiana targeted a city councilman who had voted in favor of a local data center. Almost at the same time the story published, news broke that an attacker had attempted to firebomb OpenAI CEO Sam Altman’s house. On Monday, the Justice Department filed charges against a 20-year-old from Texas for allegedly throwing a Molotov cocktail at the AI executive’s house. The Houston Chronicle reported that the individual charged had a Substack where they posted several anti-AI screeds; while I have reviewed the blog and can verify it exists, I cannot confirm the author’s connection to the individual charged.
As if that wasn’t enough, just days after the alleged firebombing, two people shot at Altman’s house.
To attempt to make sense of such chaotic brutality, I spoke with Mauro Lubrano, a lecturer at the University of Bath in the United Kingdom and author of the new book Stop the Machines: The Rise of Anti-Tech Extremism. Lubrano has for much of his career studied the rise of a global decentralized movement against tech infrastructure, including energy and transportation systems. Last year, for example, he published a detailed examination of the spate of attacks against Tesla vehicles, dealerships, and factories, calling them “insurrectionary anarchism” rooted in “anti-tech extremism” that “spans multiple ideologies — from eco-extremism to eco-fascism.”
Lubrano and I discussed how a prevailing pessimism about the future, AI acceleration, and climate anxiety are making people more likely to launch physical attacks on devices representing a perceived techno-apocalypse. Lubrano said we should expect more people to attack things linked to electricity itself, and that the solution to the violence is not eco-modernism or optimistic thinking, but rather society finally working through the hard questions raised by AI, climate change, economic inequality, and the other ills vexing so many today.
The following conversation was lightly edited and condensed for clarity.
We’ve seen these movements against tech infrastructure — attacks, threats — for a while. The concept goes back a long time. For a lot of folks in the U.S., there are analogues here ranging from the assassination of the UnitedHealthcare CEO to ecoterrorism attacks on pipelines and other forms of energy infrastructure. How would you characterize the forces driving these recent attacks on executives and politicians supporting AI data centers?
When we look at anti-technology violence, we tend to see two main patterns of violence: attacks on tech executives, personalities, and so on; and attacks on critical infrastructure. This is related to a worldview that technology is not a collection of individual devices, but part of an interconnected system. Some anti-tech extremists will refer to the “mega-machine,” one that has three main manifestations. There’s an ideological one — the general idea that progress is inherently good. There’s the material manifestation, which is the technologies we interact with every day. And there’s the human component. People become cogs. So by targeting cogs in the machine, you contribute to the collapse of the machine itself.
There’s a propaganda element to all of this, too, targeting individuals who for one reason or another are prominent so it sends shockwaves to the tech community, to make some people change minds or join them in their anti-tech fight, or to just deter people from pursuing research on technology.
Then there’s also critical infrastructure. It comes back to this vision of the mega-machine, where instead of targeting individual technologies you target those critical for the machine to function. They want to strike those first because they will create a domino effect that ripples through all the other technologies and brings about the collapse of the system. You will find the attacks tend to cluster around specific targets.
How do you define technology here? Do you mean any kind of tech application? I’m hearing what you’re saying and thinking this may apply to more than AI.
Oh, of course. It’s not just AI. When these people think of technology they are not just thinking of devices but know-how, the ideology of progress, of social forces shaping society and how it works and how labor is organized. Technology is a complex entity, in a way.
In the early 2010s, for example, you saw attacks on facilities after the Fukushima Daiichi disaster. More recently, you had attacks on companies making semiconductors and microchips, so if you take out microchips you cripple the system. And data centers have been discussed for quite some time — I wouldn’t be surprised if we see something happen there, as well. It’s about identifying technologies that all other tech depends on.
There’s an argument some of them make that there’s only one technology all the others depend on, which is electricity. That’s why we’ve seen attacks on power plants, on different targets related to power.
Are you speaking about organized groups? Discussions and forums? I’m sure you’re referencing people you know of, but help us get a better understanding.
When we look at the violent side of the coin we need to acknowledge first that these networks, these movements, reflect trends we’ve seen in political violence over the last few decades, trends that show us we’re in a post-organizational era of political violence. We have names, we have acronyms, but these names are not as important as they used to be. These are decentralized networks, often leaderless, that operate without solid hierarchies or chains of control. We’re not talking about organizations like Al-Qaeda or the Irish Republican Army. We’re talking about networks in which militants often do not know each other because they interact online.
Some of the networks that have been involved in these kinds of attacks are the Informal Anarchist Federation. It formed in 2003 in Italy and became a global entity around 2011. There’s the Conspiracy of Fire Nuclei, which emerged in Greece and then became international. And then there’s a series of ad hoc groups that have emerged over the decades, sometimes who are only known because they’ll release a communique after an attack. Like there’s Vulkan Group, which has carried out a series of attacks on Tesla factories in Germany. Or Individualists Tending to the Wild.
An affiliation to a network is not motivated by gaining material or support or leadership. It’s almost an identity factor because again, when these individuals carry out attacks on their own, they don’t rely on existing networks for support. They might also only be around for one or two attacks because it’s not the group that matters — it’s the network.
Is it just the rise of modern technology driving this violence? Are there other factors at play inciting events, creating this current wave of attacks?
One of the remarkable qualities of anti-tech extremism is that it’s quite flexible. The way this decentralized system works, especially on the anarchist or eco-extremist side, is that one group will carry out an attack and then, in a communique published online, call for similar attacks on similar targets. Whether or not attacks occur is up to others in the network. If a campaign is considered not really appealing, it might not take place. If instead it’s deemed appealing, you’ll see more attacks.
Last year there was a campaign a French group started called Welcome Spring, Burn a Tesla, which resulted across Europe in a lot of Tesla dealerships being torched. There was some confusion because there was also a campaign against Elon Musk and Tesla, but that wasn’t carried out by people motivated by anti-tech violence, but instead Musk’s role in the U.S. government.
There can also be things people say that incite. In this case, there was an interview recently where Sam Altman basically said if AI is going to steal all the jobs, then maybe those jobs weren’t “real” in the first place. That type of statement is likely to make a few people annoyed. It’s hard to consider what type of development might constitute a catalyst for violence.
I’m struck by the way you’re describing this movement and the rhetoric and signals. I think about Alex Jones and, for example, the idea that 5G is going to brainwash people on behalf of globalists. Do you see anything in global politics providing kindling to this fire?
This is an interesting question because conspiracy thinking is widespread amongst these groups, this sense that there’s an obscure force at work determining outcomes. But on the other hand, it depends. In certain groups, there’s such a rejection of anything conventional that you’d find disagreement between those people and the political figures. In others, you might argue that when influencers or politicians spread rumors about COVID vaccines or 5G, the idea resonates. For example, I don’t see anarchists paying attention to what a politician says, because politicians are part of the problem to begin with.
What can be done to counterbalance this? Is there an oppositional force against this rising tide of anti-tech violence? I’ve been stunned to see the absence of any widespread outrage online at what’s transpired so far. Almost all the commentary has been “good, I’m glad this is happening.”
I’m not surprised you’re saying this about the commentary. I’ve been researching violence for years now, but this is the first time I’ve seen the narratives of extremists reflecting some objective concerns amongst people. It doesn’t mean all those other people are participating in the violence themselves, but concerns about AI are real. People are afraid and scared of these developments they don’t understand. But what they do understand is that it’ll have impacts on their lives, to the extent they’re able to comprehend it.
I think demonizing these concerns driving the violence would be a very foolish thing to do. It’ll confirm narratives of surveillance and control.
Right. I mean, some of these are valid concerns. Water, electricity, job loss, surveillance. All of that. But if demonizing this isn’t the right call, what can be done?
Short term, don’t securitize these concerns but do something to limit the violent manifestations. Most of the solutions will be long term. That’s not what people want. People want solutions with immediate effect.
You can divide the solutions into two groups. The first one is, stakeholders and those who develop technologies have to be responsibilized. Going back to that Altman interview, these kinds of comments are not doing us a favor in trying to solve the violence — not to mention other stakeholders can be even more incendiary. You can also limit the problem in how the technologies are used. If we see AI is used to monitor people at protests and demonstrations, acquire and execute attacks in warfare, it can only get worse from here. These applications of AI don’t do us a favor.
Then on a philosophical level, we all need to change the way we relate to technology. We need to go from a position where we think, “What does this allow me to do?” We need to instead think, “Within those activities, let’s select those that will further our connections with one another and with nature.”
What about eco-modernism? Techno-optimism? Are those ideologies solutions or antidotes? Or are they inadequate to address the sheer degree of pessimism and anxiety driving this violence?
From what I can see, doomerism and pessimism are now so widespread that I don’t think those ideologies can work. A lot of people in younger generations believe we are doomed. They believe climate change is going to ruin our lives. There’s wars, geopolitical conflicts. We’re stuck with dystopian visions of the future. This isn’t confined to anti-tech stuff, so optimism has very limited effects.
What gives you hope?
That’s funny because I’m working on a project that concludes there’s no hope.
I didn’t think that was going to be a hard question.
There’s a growing acknowledgement that people may be too dependent on technology. Hopefully we’ll manage to be less dependent on it and more conscious of what it’s doing to us, including an awareness that AI has tremendous environmental impacts.
Acknowledgement is where you need to start. That’s the little hope I have.
Current conditions: A wave of summer heat is headed for the East Coast, with midweek temperatures surpassing 90 degrees Fahrenheit in Washington, D.C. • Guam and the Northern Mariana Islands are bracing for winds of up to 190 miles per hour as Super Typhoon Sinlaku bears down on the U.S. territories • At least 30 people have died in floods in Yemen, which just recorded its highest rainfall in five years.
The Trump administration is holding up some funding for grants at the National Oceanic and Atmospheric Administration, The Hill reported. On April 1, the University of Colorado put out a statement saying that a federal pause on funding had put scientists who collect data about the atmosphere “at risk for elimination” after the White House Office of Management and Budget had “not released these funds.” The university’s Cooperative Institute for Research in Environmental Sciences said that roughly 30 days before running out of funds to pay scientists, “we were informed that NOAA has put a pause on all grant actions.”
As I told you back in December, the Trump administration is also working to dismantle the National Center for Atmospheric Research in Colorado, an institution credited with many of the biggest scientific breakthroughs in our understanding of weather and climate in the 66 years since its founding. In a post on X at the time, Russell Vought, the director of the White House’s Office of Management and Budget, called the institute “one of the largest sources of climate alarmism in the country,” and said the administration would be “breaking up” its operations.
Secretary of Energy Chris Wright is scheduled to testify Wednesday morning before the House Committee on Appropriations to defend the White House’s latest budget request for his agency. He’s not the only chieftain of a federal agency with relevance to Heatmap readers who’s coming before Congress this week.
U.S. Customs and Border Protection plans to launch the first phase of what’s called the Consolidated Administration and Processing of Entries tool in the agency’s automated commercial secure data portal. The tool will allow companies to request refunds of Trump administration tariffs that the U.S. Supreme Court ruled unlawful earlier this year. Solar companies are among the thousands of American businesses that filed complaints with the U.S. Court of International Trade for refunds prior to the Supreme Court’s ruling. Those, according to Solar Power World, include American Wire Group, Canadian Solar, GameChange Solar, Fluke, Hellerman Tyton, Kinematics, JA Solar, Jinko Solar, Longi, Merlin Solar, Qcells, and Trina Solar.
Sign up to receive Heatmap AM in your inbox every morning:

Established in early 2021, California Community Power is a quasi-governmental organization formed out of nine power providers across the Golden State. On Monday, the agency inked a series of deals with geothermal power developers to expand what’s widely considered one of the most promising clean-energy sources for California, which has some of the continent’s best hot-rock resources. XGS Energy, the Houston-based startup promising to build next-generation closed-loop geothermal systems, announced a deal to build 115 megawatts of power in the state. Zanskar, the geothermal company using AI to locate untapped conventional geothermal resources, also signed an agreement with the agency.
Zanskar in particular ranked among the most promising climate-tech startups on the U.S. market in Heatmap’s poll of experts earlier this year. The company announced its biggest find yet last year, as Heatmap’s Katie Brigham reported. XGS, meanwhile, is drawing support from the nuclear industry, as I previously reported for Heatmap.
The developer behind a major Massachusetts offshore wind farm is suing its turbine manufacturer in a bid to keep the company from backing out of the project. By February, the Vineyard Wind project off Cape Cod had installed 60 of its 62 turbines, as I reported at the time. Yet GE Vernova, the parent company of turbine maker GE Renewables, said “it would be terminating its contracts for turbine services and maintenance at the end of April,” the Associated Press reported. GE Vernova also says Vineyard Wind already owes it $300 million.
The war in Iran is taking a toll on Central African minerals. Miners in the Democratic Republic of the Congo are curbing output of copper and cobalt as the war cuts supplies of sulfuric acid needed for leaching minerals out of rock, Reuters reported. Mine managers are reducing cobalt production to conserve chemicals.
The deal represents one of the largest public-private partnerships in the history of the national labs.
I’ll admit, I thought I might be done covering fresh fusion startups for a while. In the U.S., at least, the number of new industry entrants has slowed, and most venture capital now flows toward more established players such as Commonwealth Fusion Systems and Helion. But in February, a startup called Inertia Enterprises made headlines with its $450 million Series A raise. It’s aiming to commercialize fusion using the physics pioneered at Lawrence Livermore National Laboratory, the only facility to have achieved scientific breakeven — the point at which a fusion reaction produces more energy than was delivered to initiate it.
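Scientific breakeven is commonly expressed as the target gain: the ratio of fusion energy released to laser energy delivered to the target. A minimal sketch of that arithmetic, using the widely reported figures from NIF’s December 2022 shot (the function name is my own, for illustration):

```python
def target_gain(fusion_energy_mj: float, laser_energy_mj: float) -> float:
    """Target gain Q: fusion energy out over laser energy delivered.

    Q > 1 is scientific breakeven. Note this counts only the energy
    delivered to the target, not the far larger wall-plug energy
    needed to power the lasers themselves.
    """
    return fusion_energy_mj / laser_energy_mj

# Widely reported figures from the December 2022 NIF shot:
# ~2.05 MJ of laser energy delivered, ~3.15 MJ of fusion yield.
q = target_gain(fusion_energy_mj=3.15, laser_energy_mj=2.05)
print(f"Q = {q:.2f}")  # → Q = 1.54
```

The gap between target gain and wall-plug gain is exactly why, as described below, a commercial device needs dramatically more efficient lasers.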
That achievement first came in 2022 at the lab’s National Ignition Facility in Livermore, California. On Tuesday, Inertia announced that it’s deepening its partnership with Lawrence Livermore, creating one of the largest private sector-led partnerships in the history of the national lab system. This collaboration involves three separate agreements that allow Inertia to work directly with the lab’s employees on research and development, while also giving the startup access to nearly 200 Lawrence Livermore patents covering fusion technology.
The startup’s team isn’t merely a group of enthusiasts galvanized by the national lab’s fusion milestone. Alongside Twilio’s former CEO Jeff Lawson and fusion power plant designer Mike Dunne, Inertia’s other co-founder is Annie Kritcher, a senior scientist at Lawrence Livermore who has led the physics design for NIF’s fusion energy experiments since 2019.
“We’re not starting from zero,” Kritcher told me, putting it mildly. “And that was really, really important to me when I decided to co-found this company.” Or as Lawson told me after the company’s fundraise in February, “the government put 60 years and $30 billion into NIF trying to get that thing to work.”
The technical approach pursued by Lawrence Livermore — and now by Inertia — is called inertial confinement fusion. In this system, high-powered lasers are directed at a millimeter-scale pellet of fusion fuel, typically a mixture of the hydrogen isotopes deuterium and tritium. The laser energy rapidly compresses and heats the pellet to extreme temperatures and pressures, driving the nuclei to fuse and releasing enormous amounts of energy. But NIF didn’t build its system for commercial purposes. Rather, its primary mission is to support the domestic nuclear weapons stockpile by recreating the extreme conditions inside a nuclear detonation, allowing scientists to study how U.S. weapons perform without conducting explosive tests.
To translate the lab’s research into a commercially viable device, Kritcher explained, Inertia must significantly increase the lasers’ efficiency and power output, targeting a system roughly 50 times more powerful than existing lasers of its class. The startup is also working to scale production of its fusion targets to drive down costs and enable mass manufacturing.
Inertia is not the only company attempting to commercialize this general approach, however. Back in 2021, as Lawrence Livermore moved closer to its breakeven moment, the future founders of the startup Xcimer Energy were taking note. Convinced that the fundamental physics of inertial confinement had been proven, they thought, “if we’re going to do this, we have to do it now,” Xcimer’s CTO, Alexander Valys, told me a few years ago. He and his co-founder quit their day jobs, and Xcimer went on to raise a $100 million Series A round in 2024. Others joined in on the hype, too — the Fusion Industry Association counts 13 fusion companies founded or emerged from stealth between summer 2022 and summer 2023, a record for the sector.
Kritcher told me that none are adhering as closely to NIF’s successful design as Inertia. “There are fundamental technical differences between us and the other laser approaches,” she told me, explaining that while Xcimer and others are using broadly similar methodologies to produce a hot, dense plasma, the underlying physics behind their plan diverges significantly. Xcimer, for instance, is developing a novel laser architecture that hasn’t yet been demonstrated at scale, along with a different fuel capsule design than the one validated by NIF.
Kritcher will be allowed to continue her work at the lab thanks to what the company describes as a “first-of-its-kind agreement” enabled by the 2022 CHIPS and Science Act, which allows scientists at the national labs to participate in commercialization efforts with the goal of accelerating the transfer of knowledge to the private sector.
For the fusion engineer, it’s the ultimate dream come true. She first arrived at Lawrence Livermore as a summer intern in 2004, just before her senior year at the University of Michigan, and “fell in love with the lab and the NIF project,” which was still under construction at the time. She opted to attend the University of California, Berkeley for her master’s and PhD in nuclear engineering so that she could continue her work there.
“I was starstruck by the possibility of fusion energy and [it having] such a big impact on humanity, and that really kept me going for a long time,” she told me. But after the NIF facility was finally completed in 2009, it failed to achieve ignition by its initial 2012 target.
By then, Kritcher was a postdoctoral fellow, and attention at NIF began to shift toward supporting the nation’s nuclear stockpile. Fusion energy was “always in the back of my mind, driving me day to day,” she said, “but you sort of forget about it, and you lose a little bit of that excitement and spark.” Under her guidance, NIF ultimately reached that watershed moment, which has since been replicated numerous times. And when it did, “it just reopened all those old inspirational feelings and motivations and excitement and it was like a 180 turning point where we all just go, oh, fusion energy is possible again with this approach.”
Many of the lab’s employees feel similarly, she said, and this close collaboration will allow some of the nation’s foremost experts in inertial confinement to work with the startup across a range of technical capabilities, including “the laser side, the target fabrication side, the simulations team side, the code development side, our physics design side,” Kritcher enumerated.
Inertia is looking to bring its first pilot plant online in the “2030s to 2040s,” she told me. By contrast, Commonwealth Fusion Systems — the best-capitalized company in the sector — plans to connect its first plant to the grid early next decade, while Xcimer is targeting 2035. Kritcher is unfazed, though. While she acknowledges that other companies might complete their facilities sooner, she argues that Inertia still has an upper hand given that NIF effectively serves as the startup’s demonstration plant, something no other company has built.
Not to mention that all of the sector’s projected timelines remain highly speculative. There are serious technical and economic challenges that would-be fusion energy companies will have to overcome — Inertia not excepted — and the industry’s status 10 years down the line remains anyone’s guess. What’s crystal clear, however, is that a serious new contender has entered the race.