Technology

What Does OpenAI’s New Breakthrough Mean for Energy Consumption?

Why the new “reasoning” models might gobble up more electricity — at least in the short term

Illustration: a robot with a smokestack coming out of its head. (Heatmap Illustration/Getty Images)

What happens when artificial intelligence takes some time to think?

The newest set of models from OpenAI, o1-mini and o1-preview, exhibit more “reasoning” than existing large language models and associated interfaces, which spit out answers to prompts almost instantaneously.

Instead, the new models will sometimes “think” for as long as a minute or two. “Through training, they learn to refine their thinking process, try different strategies, and recognize their mistakes,” OpenAI announced in a blog post last week. The company said these models perform better than its existing ones on some tasks, especially those related to math and science. “This is a significant advancement and represents a new level of AI capability,” the company said.

But is it also a significant advancement in energy usage?

In the short run at least, almost certainly, as spending more time “thinking” and generating more text will require more computing power. As Erik Johannes Husom, a researcher at SINTEF Digital, a Norwegian research organization, told me, “It looks like we’re going to get another acceleration of generative AI’s carbon footprint.”

Discussion of energy use and large language models has been dominated by the gargantuan requirements for “training,” essentially running a massive set of equations through a corpus of text from the internet. Training a state-of-the-art model requires hardware on the scale of tens of thousands of graphics processing units and an estimated 50 gigawatt-hours of electricity.
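That gigawatt-hour estimate is easy to sanity-check with rough arithmetic: tens of thousands of chips, each drawing several hundred watts, running continuously for months. Here is a minimal back-of-envelope sketch in Python, where every number is an illustrative assumption rather than a figure disclosed by any lab:

```python
# Back-of-envelope training energy estimate. Every number below is an
# illustrative assumption, not a disclosed figure for any real model.
num_gpus = 25_000        # "tens of thousands" of graphics processing units
watts_per_gpu = 700      # roughly an H100-class accelerator at full load
overhead_factor = 1.2    # extra draw for cooling, networking, etc.
training_days = 100      # months of continuous training

energy_wh = num_gpus * watts_per_gpu * overhead_factor * training_days * 24
print(f"~{energy_wh / 1e9:.0f} GWh")  # ~50 GWh under these assumptions
```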

Training GPT-4 cost “more than” $100 million, OpenAI chief executive Sam Altman has said; the next generation of models will likely cost around $1 billion, according to Anthropic chief executive Dario Amodei, a figure that might balloon to $100 billion for the generation after that, according to Oracle founder Larry Ellison.

While a huge portion of these costs is hardware, the energy consumption is considerable as well. (Meta reported that when training its Llama 3 models, power would sometimes fluctuate by “tens of megawatts,” enough to power thousands of homes.) It’s no wonder that Altman has put hundreds of millions of dollars into a fusion company.

But the models are not simply trained; they’re also used out in the world, generating outputs (think of what ChatGPT spits back at you). This stage, known as inference, can run on different hardware, and it is more distributed and less energy intensive: a single query consumes energy on a scale comparable to everyday activities like streaming Netflix or running a lightbulb.

As large language models are being developed, most computational power — and therefore most electricity — is used on training, Charlie Snell, a PhD student at the University of California, Berkeley who studies artificial intelligence, told me. “For a long time training was the dominant term in computing because people weren’t using models much.” But as these models become more popular, that balance could shift.

“There will be a tipping point depending on the user load, when the total energy consumed by the inference requests is larger than the training,” said Jovan Stojkovic, a graduate student at the University of Illinois who has written about optimizing inference in large language models.

And these new reasoning models could bring that tipping point forward because of how computationally intensive they are.

“The more output a model produces, the more computations it has performed. So, long chain-of-thoughts leads to more energy consumption,” Husom of SINTEF Digital told me.
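A toy calculation makes that dynamic concrete. Every input below is an assumption invented for illustration; none of these figures come from OpenAI:

```python
# Hypothetical tipping point: the day cumulative inference energy
# overtakes the one-time training cost. All inputs are assumptions.
training_energy_gwh = 50    # one-time training cost, per the estimate above
queries_per_day = 200e6     # assumed user load across all users
wh_per_query = 0.5          # assumed energy for an ordinary response

def days_to_tipping_point(reasoning_multiplier: float) -> float:
    # A "reasoning" model emits far more tokens per answer, so energy
    # per query scales up by roughly that multiplier.
    daily_gwh = queries_per_day * wh_per_query * reasoning_multiplier / 1e9
    return training_energy_gwh / daily_gwh

print(f"ordinary model:  {days_to_tipping_point(1):.0f} days")   # ~500 days
print(f"reasoning model: {days_to_tipping_point(10):.0f} days")  # ~50 days
```

Under these made-up numbers, tenfold longer outputs pull the crossover point forward from roughly five hundred days of use to fifty.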

OpenAI staffers have been downright enthusiastic about the possibilities of giving models more time to think, seeing it as a breakthrough in artificial intelligence that could unlock progress on a range of scientific and mathematical problems. “o1 thinks for seconds, but we aim for future versions to think for hours, days, even weeks. Inference costs will be higher, but what cost would you pay for a new cancer drug? For breakthrough batteries? For a proof of the Riemann Hypothesis? AI can be more than chatbots,” OpenAI researcher Noam Brown tweeted.

But those “hours, days, even weeks” will mean more computation, and with it more carbon emissions. “There is no doubt that the increased performance requires a lot of computation,” Husom said.

But Snell told me that might not be the end of the story. It’s possible that, over the long term, the overall computing demands for building and operating large language models will hold steady or even decline.

While “the default is that as capabilities increase, demand will increase and there will be more inference,” Snell told me, “maybe we can squeeze reasoning capability into a small model ... Maybe we spend more on inference but it’s a much smaller model.”

OpenAI hints at this possibility, describing its o1-mini as “a smaller model optimized for STEM reasoning,” in contrast to other, larger models that “are pre-trained on vast datasets” and “have broad world knowledge,” which can make them “expensive and slow for real-world applications.” OpenAI is suggesting that a model can know less but think more, delivering results comparable to or better than those of larger models, which might mean more efficient and less energy-hungry large language models.
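A crude way to see why that trade could work: a transformer’s inference compute scales roughly with its parameter count times the number of tokens it generates, so a much smaller model can “think” far longer and still come out ahead. A sketch with hypothetical model sizes and token counts:

```python
# Transformer inference costs roughly 2 * parameters FLOPs per generated
# token. The model sizes and token counts below are hypothetical.
def inference_flops(params: float, output_tokens: int) -> float:
    return 2 * params * output_tokens

large_short = inference_flops(params=1e12, output_tokens=500)     # big model, quick answer
small_long = inference_flops(params=1e10, output_tokens=20_000)   # small model, long chain of thought

print(f"large model, short answer:   {large_short:.1e} FLOPs")  # 1.0e+15
print(f"small model, long reasoning: {small_long:.1e} FLOPs")   # 4.0e+14
```

In this contrived comparison, the small model generates 40 times more tokens yet uses less than half the compute.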

In short, thinking might use less brain power than remembering, even if you think for a very long time.
