The world’s biggest, most functional city might also be the most pedestrian-friendly. That’s not a coincidence.

For cities that want to reduce the number of cars, bike lanes are a good place to start. They are cheap, city-level authorities can usually introduce them, and they do not require you to raise taxes on people who own cars. But what if you want to do something more radical? What would a city that genuinely wanted to get the car out of its citizens’ lives in a much bigger way do? A city that wanted to make it possible for most people to live decent lives and be able to get around without needing a car, even without needing to get on a bicycle?
There is only one city on Earth I have ever visited that has truly managed this. But it happens to be the biggest city on the planet: Tokyo, the capital of Japan.
In popular imagination, at least in the West, Tokyo is both incredibly futuristic and rather foreign and confusing. Before I first visited, in 2017, I imagined it to be an incredibly hectic place, a noisy, bustling megacity. I was on holiday and trying to escape Nairobi, the rather sprawling, low-rise, green city I was living in at the time, and I picked Tokyo largely because I wanted to get as far away from Africa as I could. I needed a break from the traffic jams, the power cuts, the constant negotiation to achieve anything, and the heat. I was looking for an escape somewhere as different as I could think of, and I wanted to ride trains around and look at high-tech skyscrapers and not worry about getting splattered by mud walking in the street. I was expecting to feel bowled over by the height of the buildings, the sheer crush of people, and the noise.
Yet when I emerged from the train station in Shibuya, blinking and jetlagged in the morning light after a night flight from Amsterdam, what actually caught me off guard was not the bustle but rather how quiet the city is. Clichéd images of Tokyo invariably show the enormous crowds of pedestrians crossing the roads, or Mount Fuji in the background of the futuristic skyline. I expected something like Los Angeles in Blade Runner, I suppose — futuristic and overwhelming. From photos, Tokyo can look almost unplanned, with neon signs everywhere and a huge variety of architectural styles. You expect it to feel messy. What I experienced, however, was a city that felt almost like a futuristic village. It is utterly calm, in a way that is actually rather strange.
And it took me a little while to realize why. There is simply no traffic noise. No hooting, no engine noise, not even much of the noise of cars accelerating on tarmac. Because there are so few of them. Most of the time you can walk in the middle of the street, so rare is the traffic. There are not even cars parked at the side of the road. That is not true of all of Tokyo, of course. The expressways are often packed. Occasionally, I was told, particularly when it snows, or during holidays when large numbers of people try to drive out to the countryside, jams form that can trap drivers for whole days. But on most residential streets, traffic is almost nonexistent. Even the relatively few cars that you do see are invariably tiny, quiet vehicles.
Among rich cities, Tokyo has the lowest car use in the world. According to Deloitte, a management consultancy, just 12 percent of journeys are completed by private car. It might surprise you to hear that cycling is actually more popular than driving in Tokyo — it accounts for 17 percent of journeys, though the Japanese do not make as big a deal of it as the Dutch do. But walking and public transport dwarf both sorts of vehicles. Tokyo has the most-used public transport system in the world, with 30 million people commuting by train each day. This may sound rather unpleasant. You have probably seen footage of the most crowded routes at rush hour, when staff literally push people onto the carriages to make space, or read about young women being groped in the crush. It happens, but it is not typical. Most of the trains I rode were busy but comfortable, and I was able to get a seat.
And what makes Tokyo remarkable is that the city was almost entirely built after the original city was mostly flattened by American bombers in the Second World War. Elsewhere in the world, cities built after the war are almost invariably car-dependent. Think of Houston, Texas, which has grown from 300,000 people in the 1950s to 10 times that now. Or England’s tiny version, Milton Keynes, which is the fastest-growing city in the country. Or almost any developing world city. Since the advent of the automobile, architects and urban planners worldwide have found it almost impossible to resist building cities around roads and an assumption that most people will drive. Tokyo somehow managed not to. It rebuilt in a much more human-centric way.
It may come as a surprise that Japan is home to the world’s biggest relatively car-free city. After all, Japan is the country that gave the world Mitsubishi, Toyota, and Nissan, and exports vehicles all over the world. And in fairness, a lot of Japanese people do own cars. Overall car ownership in Japan is about 590 vehicles per 1,000 people, which is less than America’s rate of about 800 per 1,000, but comparable to that of many European countries. On average, there are 1.06 cars per household. But Tokyo is a big exception. In Tokyo, there are only 0.32 cars per household. Most Japanese car owners live in smaller towns and cities than the capital. The highest rate of car ownership, for example, is in Fukui Prefecture, on the western coast of Honshu, one of Japan’s least densely populated areas.
And car ownership in Japan is falling, unlike almost everywhere else on Earth. Part of the reason is just that the country is getting older and the population is falling. But it is also that more and more people live in Tokyo. Japan is losing about 0.3 percent of its population, or about half a million people, each year. Greater Tokyo, however, with its population of 37 million, is shrinking by less than that, or about 0.1 percent a year. And the prefecture of Tokyo proper, with a population of 14 million, is still growing. The reason is that Tokyo generates the best jobs in Japan, and it is also an increasingly pleasant place to live. You may think of Tokyoites as being crammed into tiny apartments, but in fact, the average home in Tokyo has 65.9 square meters of livable floor space (709 square feet). That is still very small—indeed, it is less than the size of the average home in London, where the figure is 80 square meters. But the typical household in London has 2.7 people living in it. In Tokyo, it is 1.95. So per capita, people in Tokyo actually have more space than Londoners.
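The arithmetic behind that last claim is worth making explicit, since it cuts against intuition. A minimal sketch, using only the figures above:

```python
# Per-person living space, from the averages cited above.
tokyo_home_m2, tokyo_household = 65.9, 1.95
london_home_m2, london_household = 80.0, 2.7

print(f"Tokyo:  {tokyo_home_m2 / tokyo_household:.1f} square meters per person")   # ~33.8
print(f"London: {london_home_m2 / london_household:.1f} square meters per person") # ~29.6
```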
Overall, in fact, people in Tokyo have one of the highest qualities of life in the world. A 2015 survey by Monocle magazine concluded that Tokyo is the best city on Earth in which to live, “due to its defining paradox of heart-stopping size and concurrent feeling of peace and quiet.” In 2021 The Economist ranked it fourth, after Wellington and Auckland in New Zealand, and another Japanese city, Osaka. Life expectancy is 84 years, one of the highest of any city on the planet. A good part of this has to do with the lack of cars. Air pollution is considerably lower than in any other city of equivalent size anywhere in the world. Typical commutes are, admittedly, often fairly long, at 40 minutes each way. But they are not spent in awful, smoggy car traffic.

So how has Tokyo managed it? Andre Sorensen, a professor of urban planning at the University of Toronto who has published a history of urban planning in Japan, told me that Japan’s history has a lot to do with it. Japan’s urbanization happened a little more like that of some poorer countries — quickly. At the start of the 20th century, just 15 percent of Japanese people lived in cities. Now 91 percent do, one of the highest rates of urbanization in the entire world. That speed meant that Tokyo’s postwar growth was relatively chaotic. Buildings sprawled out into rice paddies, with sewage connections and power often only coming later. Electricity is still often delivered by overhead wires, not underground cables. And yet somehow this haphazard system manages to produce a relatively coherent city, and one that is much easier to get around on foot or by public transport than by car.
Part of the reason, Sorensen explained to me, is just historical chance. Japanese streets were traditionally narrow, much like medieval alleys in Europe. Land ownership was often very fragmented, meaning that house builders had to learn to use small plots in a way that almost never happened in Europe or America. And unlike the governments there, the government in postwar Japan was much more concerned with boosting economic growth by creating power plants and industrial yards than it was with driving huge new boulevards through neighborhoods. So the layouts never changed. According to Sorensen’s research, 35 percent of Japanese streets are not actually wide enough for a car to travel down them. More remarkably still, 86 percent are not wide enough for a car to be able to stop without blocking the traffic behind it.
Yet the much bigger reason for Tokyo’s high quality of life is that Japan does not subsidize car ownership in the way other countries do. In fact, owning a car in Tokyo is rather difficult. For one thing, cars are far more enthusiastically inspected than in America or most of Europe. Cars must be checked by officials every two years to ensure that they are still compliant and have not been modified. That is true in Britain too, but the Japanese inspection costs far more than Britain’s Ministry of Transport test. Even a well-maintained car can cost 100,000 yen to inspect (or around $850). On cars that are older than 10 years, the fees escalate dramatically, which helps to explain why so many Japanese sell their cars relatively quickly, and why so many of them end up in East Africa or Southeast Asia. On top of that there is an annual automobile tax of up to 50,000 yen, as well as a 5 percent tax on the purchase. And then gasoline is taxed too, meaning it costs around 160 yen per liter, or about $6 a gallon, less than in much of Europe, but more than Americans would accept.
And even if you are willing to pay all of the taxes, you cannot simply go and buy a car in the way that you might in most countries. To be allowed to purchase a car, you have to be able to prove that you have somewhere to park it. This approval is issued by the local police, and is known as a shako shomeisho, or “garage certificate.” Without one, you cannot buy a car. This helps to explain why the Japanese buy so many tiny cars, like the so-called Kei cars. It means they can have smaller garages. Even if the law didn’t exist though, owning a car in Japan without having a dedicated parking space for it would be a nightmare. Under a nationwide law passed in 1957, overnight street parking of any sort is completely illegal. So if you were to somehow buy a car with no place to store it, you could not simply park it on the street, because it would get towed the next morning, and you would get fined 200,000 yen (around $1,700). In fact, most street parking of any sort is illegal. There are a few exceptions, but more than 95 percent of Japanese streets have no street parking at all, even during the day.
This, rather than any beautiful architecture, explains why Tokyo’s streets feel so pleasant to walk down, or indeed to look at. There are no cars filling them up. It also means that land is actually valued properly. If you want to own a car, you also have to own (or at least rent) the requisite land to keep it on. In rural areas or smaller towns, this is not a huge deal, because land is relatively cheap, and so a permit might only cost 8,000 to 9,000 yen, or about $75 a month. But in Tokyo, the cost will be at least four times that. Garages in American cities can cost that much too, but in Japan there is no cheap street parking option like there is in much of New York or Chicago. Most apartment buildings are constructed without any parking at all, because the developers can use the space more efficiently for housing. Only around 42 percent of condominium buildings have parking spaces for residents. Similarly, even if you own a parking space, it is almost never free to park anywhere else you might take your car. Parking in Tokyo typically costs 1,000 yen an hour, or around $8.50.
This is a big disincentive to driving. Sorensen told me that when he lived in Tokyo, some wealthy friends of his owned a top-end BMW, which they replaced every few years, because they were car nuts. But because they did not have anywhere to park it near their home, if they wanted to use it, they had to take public transport (or a taxi) to reach it in its garage. As a result, they simply did not use their car very much. In their day-to-day life, they used the trains, the same as everybody else, or took taxis, because that was cheaper than picking up the car. This sort of thing probably helps to explain why the Japanese, despite relatively high levels of car ownership, do not actually drive very far. Car owners in Japan typically drive around 6,000 kilometers per year. That is about half what the average British car owner drives, and less than a third of what the average American does.
Parking rules are not, however, the limit of what keeps cars out of Tokyo. Arguably, an even bigger reason is how infrastructure has been funded in Japan. That is, by the market, rather than directly by taxes. In the 1950s and ’60s, much like Europe and the United States, Japan began building expressways. But unlike in Europe and America, it was starting from a considerably more difficult place. In 1957, Ralph J. Watkins, an American economist who had been invited to advise the Japanese government, reported that “the roads of Japan are incredibly bad. No other industrial nation has so completely neglected its highway system.” Just 23 percent of roads were paved, including just two-thirds of the only highway linking Osaka, Japan’s historical economic hub, to Tokyo.
But unlike in America, the idea of making them free never seemed to cross politicians’ minds, probably because Japan in the postwar era was not the world’s richest country. Capital was not freely available. To build the roads, the national government set up corporations such as the Shuto Kōsoku-dōro Kabushiki-gaisha, or Metropolitan Expressway Company, which was formed in greater Tokyo in 1959. These corporations took out vast amounts of debt, which they had to repay, so that the Japanese taxpayer would not be burdened. That meant that tolls were imposed from the very beginning. The tolls had to cover not just the construction cost, but also maintenance and interest on the loans. Today, driving on the Shuto Expressway costs from 300 to 1,320 yen, or $2.50 to $11, for a “standard-size” automobile. Overall, tolls in Japan are the most expensive in the world — around three times higher than the level charged on the private autoroutes in France, or on average, about 3,000 yen per 100 kilometers ($22 to drive 62 miles).
What that meant was that, from the beginning, roads did not have an unfair advantage in their competition with other forms of transport. And so in Japan, unlike in almost the entire rest of the rich world, the postwar era saw the construction of enormous amounts of rail infrastructure. Indeed, at a time when America and Britain were nationalizing and cutting their railways to cope with falling demand for train travel, in Japan, the national railway company was pouring investment into the system. The world’s first high-speed railway, the Tokaido Shinkansen, was opened in 1964 to coincide with the Tokyo Olympics, with a top speed of 210 kilometers per hour. That was almost double what trains elsewhere mostly managed. From 1964 to 1999, the number of passengers using the Shinkansen grew from 11 million annually to more than 300 million.
Sorensen told me how, in the 1950s and ’60s, the trains were a huge point of national pride for the Japanese government, a bit like car industries were elsewhere. “And justifiably! It was a fantastic invention. To say we can make electric rail go twice as fast. What an achievement.” Thanks to that, the railways ministry became a huge power center in government, rather than the neglected backwater it had often become elsewhere. In rail, the Japanese “built up expertise in engineering, in bureaucratic resources and capacities, and political clout that just lasted,” he told me. “Whereas the road-building sector was weak.” Elsewhere, building roads became a self-reinforcing process: as more money was poured into constructing them, more people bought cars and demanded more roads. That did not happen in Japan. Instead, the growth in railway infrastructure led to growth in, well, more railway infrastructure.
If you visit Tokyo now, what you will find is that the most hectic, crowded places in the city are all around the train and subway stations. The reason is that Japan’s railway companies (the national firm was privatized in the 1980s) do not only provide railways. They are also big real estate investors. A bit like the firm behind the Metropolitan Railway in 1930s Britain, when Japan’s railway firms expanded service, they paid for it by building on the land around the stations. In practice, what that means is that they built lots of apartments, department stores, and supermarkets near (and directly above) railway stations, so that people can get straight off the train and get home quickly. That makes the trains more efficient, because people can get where they need to go without having to walk or travel especially far to and from stations. But it also means that the railways are incredibly profitable, because unlike in the West, they are able to profit from the improvement in land value that they create.
What this adds up to is that Tokyo is one of very few cities on Earth where travel by car is not actively subsidized (and, funnily enough, neither is public transport), and yet both work well, when appropriate. However, Tokyo is not completely alone. Several big cities across Asia have managed to avoid the catastrophe (cartastrophe?) that befell much of the western world. Hong Kong manages it nearly as well as Tokyo; there are just 76 cars per 1,000 people in the territory. So too does Singapore, with around 120 per 1,000 people. What those cities have in common, which makes them rather different from Japan, is a shortage of land and a relentless, centralized leadership that recognized early on that cars were a waste of space.
Unfortunately, replicating the Asian model from scratch in Europe, America, or Australia will not be easy. There are so many cars on our roads to begin with that imposing the sorts of curbs on car ownership I listed above is almost certainly a political nonstarter. Just look at what happens when politicians in America or Britain try to take away even a modest amount of street parking, or increase the tax on gasoline. People are already invested in cars, sadly. And thanks to that, there is also a chicken-and-egg problem. Because people are invested in cars, they live in places where the sort of public transport that makes life possible for the majority of people in Tokyo is simply not realistic. As it is, constructing rail infrastructure like Japan’s is an extraordinarily difficult task. Look at the difficulties encountered in building Britain’s new high-speed rail link, or California’s.
And yet it is worth paying attention to Tokyo precisely because it demonstrates that vast numbers of cars are not necessary to daily life. What Tokyo shows is that it is possible for enormous cities to work rather well without being overloaded by traffic congestion. Actually, Tokyo works better than big cities anywhere else. That is why it has managed to grow so large. The trend all over the world for decades now has been toward greater wealth concentrating in the biggest metropolises. The cost of living somewhere like New York, London, or Paris used to be only marginally higher than living in a more modest city. That is no longer the case. And it reflects the fact that the benefits of living in big cities are enormous. The jobs are better, but so too are the restaurants, the cultural activities, the dating opportunities, and almost anything else you can think of. People are willing to pay for it. The high cost of living is a price signal — that is, the fact that people are willing to pay it is an indicator of the value they put on it.
Especially in this post-pandemic era, in which many jobs can be done from anywhere, lots of New Yorkers could easily decamp to, say, a pretty village upstate, and save a fortune in rent, or cash in on their property values. Actually, hundreds of thousands do every year (well, not only to upstate). But they are replaced by newcomers for the simple reason that New York City is, if you set aside the cost, a pretty great place to live. And yet, if everyone who would like to live in a big city is to be able to, those cities need room to grow. But if they continue to grow with the assumption that the car will be the default way of getting around for a significant proportion of residents, then they will be strangled by congestion long before they ever reach anything like Tokyo’s success. People often say that London or New York are too crowded, but they are wrong. They are only too crowded if you think that it is normal for people to need space not just for themselves but also for the two tons of metal that they use to get around.
The sheer anger of motorists might mean that banning overnight parking on residential streets proves difficult. But if we want to be bold, some of Tokyo’s other measures are more realistic. We could, for example, do a lot more to build more housing around public transport, and use the money generated to help contribute to the network. According to the Centre for Cities, a British think tank, there are 47,000 hectares of undeveloped land (mostly farmland) within a 10-minute walk of a railway station close to London or another big city. That is enough space to build two million homes, more than half of which would be within a 45-minute commute to or from London. The reason we do not develop the land at the moment is because it is mostly Metropolitan Green Belt, a zoning restriction created in the late 1940s by the Town and Country Planning Act intended to contain cities and stop them sprawling outward. But the problem with it as it works in Britain at the moment is that it does not stop sprawl — it just pushes it further away from cities, into places where there really is no hope of not using a car.
Developing the green belt would not be popular either. People have an affection for the fields near their homes, and they do not necessarily want the trains they use to become even more crowded. But there are projects that show it is possible to overcome NIMBYism. In Los Angeles in 2016, voters approved the Transit Oriented Communities Incentive Program, which creates special zoning rules in areas within half a mile of a major transit stop (typically, in L.A., a light rail station). This being Los Angeles, it is fairly modest. One of the rules is that mandatory parking minimums are capped at 0.5 parking spaces per bedroom, and total parking is not meant to exceed one space per apartment, which is still rather a lot of parking. But it does allow developers to increase the density of homes near public transport, and it has encouraged the construction of around 20,000 new homes that probably would not have been built otherwise. These are small but real improvements.
Ultimately, no city will be transformed into Tokyo overnight, nor should any be, at least unless a majority of the population decides that they would like it. I am trying to persuade them; for now, not everyone is as enamored with the Japanese capital as I am. But NIMBYism and other political obstacles can be gradually overcome, if the arguments are made in the right way, even in the most car-dependent cities.
This article was excerpted from Daniel Knowles’ book Carmageddon: How Cars Make Life Worse and What to Do About It, published by Abrams Press ©2023.
Current conditions: A cluster of storms from Sri Lanka to Southeast Asia triggered floods that have killed more than 900 so far • A snowstorm stretching 1,200 miles across the northern United States blanketed parts of Iowa, Illinois, and South Dakota with the white stuff • In China, 31 weather stations broke records for heat on Sunday.
The in-house market monitor at the PJM Interconnection filed a complaint last week with the Federal Energy Regulatory Commission urging the agency to ban the nation’s largest grid operator from connecting any new data centers that the system can’t reliably serve. The warning comes as the grid operator is considering proposals to require blackouts during periods when there’s not enough electricity to meet data centers’ needs. The grid operator’s membership voted last month on a way forward, but no potential solution garnered enough votes to succeed, Heatmap’s Matthew Zeitlin wrote. “That result is not consistent with the basic responsibility of PJM to maintain a reliable grid and is therefore not just and reasonable,” Monitoring Analytics said, according to Utility Dive.
The push comes as residential electricity prices continue climbing. Rates for American households spiked by an average of 7.4% in September compared to the same month in 2024, according to new data from the Energy Information Administration.

The Environmental Protection Agency made some big news on Wednesday, just before much of the U.S. took off for Thanksgiving: It’s delaying a rule that would have required oil and gas companies to start reducing how much methane, a potent greenhouse gas, is released from their operations into the atmosphere. Drillers were supposed to start tracking emissions this year. But the Trump administration is instead giving companies until January 2027 as it considers repealing the measure altogether.
The New York Power Authority, the nation’s second-largest government-owned utility after the federal Tennessee Valley Authority, is staffing up in preparation for its push to build at least a gigawatt of new nuclear power generation. On Monday morning, NYPA named Todd Josifovski as its new senior vice president of nuclear energy development, tasking the veteran atomic power executive with charting the strategic direction and development of new reactor projects. Josifovski comes from Ontario Power Generation, the government-owned utility of the eponymous Canadian province, which is building what is likely to be North America’s first small modular reactor project. (As Matthew wrote when NYPA first announced its plans for a new nuclear plant, the approach mirrors Ontario’s.) NYPA is also adding Christopher Hanson, a former member of the Nuclear Regulatory Commission whom President Donald Trump abruptly fired from the federal agency this summer, as a senior consultant in charge of guiding federal financing and permitting.
The push comes as New York’s statewide grid reaches “an inflection point” as surging demand, an aging fleet, and a lack of dispatchable power put the system at risk, according to the latest reliability report. “The margin for error is extremely narrow, and most plausible futures point to significant reliability shortfalls within the next ten years,” the report concluded. “Depending on demand growth and retirement patterns, the system may need several thousand megawatts of new dispatchable generation over that timeframe.”
Zillow, the country’s largest real estate site, removed a feature from more than a million listings that showed the risks from extreme weather, The New York Times reported. The website had started including climate risk scores last year, using data from the risk-modeling company First Street. But real estate agents complained that the ratings hurt sales, and homeowners protested that there was no way to challenge the scores. Following a complaint from the California Regional Multiple Listing Service, which operates a private database of brokers and agents, Zillow stopped displaying the scores.
The European Commission unveiled a new plan to replace fossil fuels in Europe’s economy with trees. Under the so-called Bioeconomy Strategy, released Thursday, the continent aims to replace fossil fuels in products Politico listed as “plastics, building materials, chemicals, and fibers” with organic materials that regrow, such as trees and crops. Doing so, the bloc argued, will help to preserve Europe’s “strategic autonomy” by making the continent less dependent on imported fuels.
Canada, meanwhile, is plowing ahead with its plans to strengthen itself against the U.S. by turning into an energy superpower. Already, the Trans Mountain pipeline is bringing nearly $1.3 billion into federal coffers, based on my back-of-the-napkin conversion of the Canadian loonies cited in this Globe and Mail story to U.S. dollars. Now Prime Minister Mark Carney’s government is pitching a new pipeline from Alberta to the West Coast for export to Asia, as the Financial Times reported.
Swapping bunker fuel-burning engines for nuclear propulsion units in container ships could shave up to $68 million off annual shipping expenses, a new report found. If small modular reactors designed to power a cargo vessel are commercialized within four years as expected, shipping companies could eliminate $50 million in fuel costs each year and about $18 million in carbon penalties. That’s according to a Lloyd’s Register and LucidCatalyst report for the Singaporean maritime services company Seaspan Corporation.
If it turns out to be a bubble, billions of dollars of energy assets will be on the line.
The data center investment boom has already transformed the American economy. It is now poised to transform the American energy system.
Hyperscalers — including tech giants such as Microsoft and Meta, as well as leaders in artificial intelligence like OpenAI and CoreWeave — are investing eye-watering amounts of capital into developing new energy resources to feed their power-hungry data infrastructure. Those data centers are already straining the existing energy grid, prompting widespread political anxiety over an energy supply crisis and a ratepayer affordability shock. Nothing in recent memory has thrown policymakers’ decades-long underinvestment in the health of our energy grid into such stark relief. The commercial potential of next-generation energy technologies such as advanced nuclear, batteries, and grid-enhancing applications now hinges on the speed and scale of the AI buildout.
But what happens if the AI boom buffers and data center investment collapses? It is not idle speculation to say that the AI boom rests on unstable financial foundations. Worse, however, is the fact that as of this year, the tech sector’s breakneck investment into data centers is the only tailwind to U.S. economic growth. If there is a market correction, there is no other growth sector that could pick up the slack.
Not only would a sudden reversal in investor sentiment make stranded assets of the data centers themselves, which would lose value as their lease revenue disappeared, it would also threaten to strand all the energy projects and efficiency innovations that data center demand might have called forth.
If the AI boom does not deliver, we need a backup plan for energy policy.
An analysis of the capital structure of the AI boom suggests that policymakers should be more concerned about the financial fundamentals of data centers and their tenants — the tech companies that are buoying the economy. My recent report for the Center for Public Enterprise, Bubble or Nothing, maps out how the various market actors in the AI sector interact, connecting the market structure of the AI inference sector to the economics of Nvidia’s graphics processing units (the chips known as GPUs that power AI software) and to the data center real estate debt market. Spelling out the core financial relationships illuminates where the vulnerabilities lie.

First and foremost: The business model remains unprofitable. The leading AI companies ― mostly the leading tech companies, as well as some AI-specific firms such as OpenAI and Anthropic ― are all competing with each other to dominate the market for AI inference services such as large language models. None of them is returning a profit on its investments. Back-of-the-envelope math suggests that Meta, Google, Microsoft, and Amazon invested over $560 billion in AI technology and data centers through 2024 and 2025, while reporting revenues of just $35 billion from those investments.
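Treating those back-of-the-envelope figures as rough aggregates, the imbalance is easy to make concrete:

```python
# Rough proportion implied by the aggregate figures cited above (2024-2025).
capex_bn = 560    # estimated investment in AI technology and data centers, $bn
revenue_bn = 35   # reported revenue from those investments, $bn

print(f"Revenue so far covers about {revenue_bn / capex_bn:.0%} of the capital invested")  # ~6%
```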
To be sure, many new technology companies remain unprofitable for years ― including now-ubiquitous firms like Uber and Amazon. Profits are not the AI sector’s immediate goal; the sector’s high valuations reflect investors’ assumptions about future earnings potential. But while the losses pile up, the market leaders are all vying to maximize the market share of their virtually identical services ― a prisoner’s dilemma of sorts that forces down prices even as the cost of providing inference services continues to rise. Rising costs, suppressed revenues, and fuzzy measurements of real user demand are, when combined, a toxic cocktail and a reflection of the sector’s inherent uncertainty.
Second: AI companies have a capital investment problem. These are not pure software companies; to provide their inference services, AI companies must all invest in or find ways to access GPUs. In mature industries, capital assets have predictable valuations that their owners can borrow against and use as collateral to invest further in their businesses. Not here: The market value of a GPU is incredibly uncertain and, at least currently, remains suppressed due to the sector’s competitive market structure, the physical deterioration of GPUs at high utilization rates, the unclear trajectory of demand, and the value destruction that comes from Nvidia’s now-yearly release of new high-end GPU models.
The tech industry’s rush to invest in new GPUs means existing GPUs lose market value much faster. Some companies, particularly the vulnerable and debt-saddled “neocloud” companies that buy GPUs to rent their compute capacity to retail and hyperscaler customers, are taking out tens of billions of dollars of loans to buy new GPUs backed by the value of their older GPU stock; the danger of this strategy is obvious. Others, including OpenAI and xAI, having realized that GPUs are not safe to hold on their own balance sheets, are instead renting them from Oracle and Nvidia, respectively.
To paper over the valuation uncertainty of the GPUs they do own, all the hyperscalers have changed their accounting assumptions for GPU depreciation over the past few years to minimize their annual reported depreciation expenses. Some financial analysts don’t buy it: Last year, Barclays analysts judged GPU depreciation as risky enough to merit marking down the earnings estimates of Google (in this case its parent company, Alphabet), Microsoft, and Meta by as much as 10%, arguing that consensus modeling was severely underestimating the earnings write-offs required.
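To see why those assumptions matter so much, consider a deliberately simplified, hypothetical case of straight-line depreciation (the figures below are illustrative, not drawn from any company’s filings): stretching an asset’s assumed useful life directly shrinks the expense that hits each year’s reported earnings.

```python
# Illustrative only: straight-line depreciation on a hypothetical $10 billion GPU fleet.
# Doubling the assumed useful life halves the annual charge against reported earnings,
# even though the hardware and its real-world wear are unchanged.
fleet_cost_bn = 10.0

for useful_life_years in (3, 6):
    annual_charge_bn = fleet_cost_bn / useful_life_years
    print(f"{useful_life_years}-year assumed life: ${annual_charge_bn:.2f}bn depreciation per year")
```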
Under these market dynamics, the booming demand for high-end chips looks less like a reflection of healthy growth for the tech sector and more like a scramble for high-value collateral to maintain market position among a set of firms with limited product differentiation. If high demand projections for AI technologies come true, collateral ostensibly depreciates at a manageable pace as older GPUs retain their marketable value over their useful life — but otherwise, this combination of structurally compressed profits and rapidly depreciating collateral is evidence of a snake eating its own tail.
All of these hyperscalers are tenants within data centers. Their lack of cash flow or good collateral should have their landlords worried about “tenant churn,” given the risk that many data center tenants will have to undertake multiple cycles of expensive capital expenditure on GPUs and network infrastructure within a single lease term. Data center developers take out construction (or “mini-perm”) loans of four to six years and refinance them into longer-term permanent loans, which can then be packaged into asset-backed and commercial mortgage-backed securities to sell to a wider pool of institutional investors and banks. The threat of broken leases and tenant vacancies threatens the long-term solvency of the leading data center developers ― companies like Equinix and Digital Realty ― as well as the livelihoods of the construction contractors and electricians they hire to build their facilities and manage their energy resources.
Much ink has already been spilled on how the hyperscalers are “roundabouting” each other, or engaging in circular financing: They are making billions of dollars of long-term purchase commitments, equity investments, and project co-development agreements with one another. OpenAI, Oracle, CoreWeave, and Nvidia are at the center of this web. Nvidia has invested $100 billion in OpenAI, to be repaid over time through OpenAI’s lease of Nvidia GPUs. Oracle is spending $40 billion on Nvidia GPUs to power a data center it has leased for 15 years to support OpenAI, for which OpenAI is paying Oracle $300 billion over the next five years. OpenAI is paying CoreWeave over the next five years to rent its Nvidia GPUs; the contract is valued at $11.9 billion, and OpenAI has committed to spending at least $4 billion through April 2029. OpenAI already has a $350 million equity stake in CoreWeave. Nvidia has committed to buying CoreWeave’s unsold cloud computing capacity by 2032 for $6.3 billion, after it already took a 7% stake in CoreWeave when the latter went public. If you’re feeling dizzy, count yourself lucky: These deals represent only a fraction of the available examples of circular financing.
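To keep that web of commitments straight, it can help to lay the deals out as a simple directed list. The sketch below only restates the figures already named in this paragraph; it is a partial map, not a complete accounting of the sector’s cross-holdings.

```python
# A partial map of the circular commitments described above: (from, to, terms as reported).
deals = [
    ("Nvidia", "OpenAI", "$100bn investment, repaid over time through OpenAI's lease of Nvidia GPUs"),
    ("Oracle", "Nvidia", "$40bn of GPUs for a data center leased for 15 years to support OpenAI"),
    ("OpenAI", "Oracle", "$300bn over the next five years for that capacity"),
    ("OpenAI", "CoreWeave", "GPU rental contract valued at $11.9bn; at least $4bn committed through April 2029"),
    ("OpenAI", "CoreWeave", "$350mn equity stake"),
    ("Nvidia", "CoreWeave", "$6.3bn commitment to buy unsold cloud capacity by 2032, plus a 7% stake at IPO"),
]

for source, target, terms in deals:
    print(f"{source} -> {target}: {terms}")
```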
These companies are all betting on each other’s growth; their growth projections and purchase commitments are all dependent on their peers’ growth projections and purchase commitments. Optimistically, this roundabouting represents a kind of “risk mutualism,” which, at least for now, ends up supporting greater capital expenditures. Pessimistically, roundabouting is a way for these companies to pay each other for goods and services in any way except cash — shares, warrants, purchase commitments, token reservations, backstop commitments, and accounts receivable, but not U.S. dollars. The second any one of these companies decides it wants cash rather than a commitment is when the music stops. Chances are, that company needs cash to pay a commitment of its own, likely involving a lender.
Lenders are the final piece of the puzzle. Contrary to the notion that cash-rich hyperscalers can finance their own data center buildout, there has been a record volume of debt issuance this year from companies such as Oracle and CoreWeave, as well as private credit giants like Blue Owl and Apollo, which are lending into the boom. The debt may not go directly onto hyperscalers’ balance sheets, but their purchase commitments are the collateral against which data center developers, neocloud companies like CoreWeave, and private credit firms raise capital. While debt is not inherently something to shy away from ― it’s how infrastructure gets built ― it’s worth raising eyebrows at the role private credit firms are playing at the center of this revenue-free investment boom. They are exposed to GPU financing and to data center financing, although not to the GPU producers themselves. They have capped upside and unlimited downside. If they stop lending, the rest of the sector’s risks start to look a lot more acute.

A market correction starts when any one of the AI companies can’t scrounge up the cash to meet its liabilities and can no longer keep borrowing money to delay paying for its leases and its debts. A sudden stop in lending to any of these companies would be a big deal ― it would force AI companies to sell their assets, particularly GPUs, into a potentially adverse market in order to meet refinancing deadlines. A fire sale of GPUs hurts not just the long-term earnings potential of the AI companies themselves, but also producers such as Nvidia and AMD, since even they would be selling their GPUs into a soft market.
For the tech industry, the likely outcome of a market correction is consolidation. Any widespread defaults among AI-related businesses and special purpose vehicles will leave capital assets like GPUs and energy technologies like supercapacitors stranded, losing their market value in the absence of demand ― the perfect targets for a rollup. Indeed, it stands to reason that the tech giants’ dominance over the cloud and web services sectors, not to mention advertising, will allow them to continue leading the market. They can regain monopolistic control over the remaining consumer demand in the AI services sector; their access to more certain cash flows eases their leverage constraints over the longer term as the economy recovers.
A market correction, then, is hardly the end of the tech industry ― but it still leaves a lot of data center investments stranded. What does that mean for the energy buildout that data centers are directly and indirectly financing?
A market correction would likely compel vertically integrated utilities to cancel plans to develop new combined-cycle gas turbines and expensive clean firm resources such as nuclear energy. Developers on wholesale markets have it worse: It’s not clear how new and expensive firm resources compete if demand shrinks. Grid managers would have to call up more expensive units less frequently. Doing so would constrain the revenue-generating potential of those generators relative to the resources that can meet marginal load more cheaply — namely solar, storage, peaker gas, and demand-response systems. Combined-cycle gas turbines co-located with data centers might be stranded; at the very least, they wouldn’t be used very often. (Peaker gas plants, used to manage load fluctuation, might still get built over the medium term.) And the flight to quality and flexibility would consign coal power back to its own ash heaps. Ultimately, a market correction does not change the broader trend toward electrification.
A market correction that stabilizes the data center investment trajectory would make it easier for utilities to conduct integrated resource planning. But it would not necessarily make grid planners’ interconnection queues any easier to manage: when phantom projects drop out of the queue, grid planners have to redo all their studies. Regardless of the health of the investment boom, we still need to reform our grid interconnection processes.
The biggest risk is that ratepayers will be on the hook for assets that sit underutilized in the absence of tech companies’ large load requirements, especially those served by utilities that might be building generation in advance of committed contracts with large-load customers like data center developers. The energy assets they build might remain useful for grid stability and could still participate in capacity markets. But generation assets built close to data center sites to serve those sites cheaply might not be able to supply the broader grid cost-efficiently, because of the higher transport costs incurred when serving more distant sources of load.
These energy projects need not be albatrosses.
Many of the data centers now being planned are in the process of securing permits and grid interconnection rights. Those interconnection rights are scarce and valuable; if a data center gets stranded, policymakers should consider purchasing those rights and incentivizing new businesses or manufacturing industries to build on that land and take advantage of them. Doing so would provide offtake for nearby energy assets and avoid displacing their costs onto other ratepayers. That said, new users of that land may not be able to pay anywhere near as much as hyperscalers could for interconnection or for power. Policymakers seeking to capture value from stranded interconnection points must ensure that new projects pencil out at a lower price point.
Policymakers should also consider backstopping the development of critical and innovative energy projects and the firms contracted to build them. I mean this in the most expansive way possible: Policymakers should not just backstop the completion of the solar and storage assets built to serve new load, but also provide exigent purchase guarantees to the firms that are prototyping the flow batteries, supercapacitors, cooling systems, and uninterruptible power systems that data center developers are increasingly interested in. Without these interventions, a market correction would otherwise destroy the value of many of those projects and the earnings potential of their developers, to say nothing of arresting progress on incredibly promising and commercializable technologies.
Policymakers can capture long-term value for the taxpayer by making investments in these distressed projects and developers. This is already what the New York Power Authority has done by taking ownership and backstopping the development of over 7 gigawatts of energy projects ― most of which were at risk of being abandoned by a private sponsor.
The market might not immediately welcome risky bets like these. It is unclear, for instance, what industries could use the interconnection or energy provided to a stranded gigawatt-scale data center. Some of the more promising options ― take aluminum or green steel ― do not have a viable domestic market. Policy uncertainty, tariffs, and tax credit changes in the One Big Beautiful Bill Act have all suppressed the growth of clean manufacturing and metals refining industries like these. The rest of the economy is also deteriorating. The fact that the data center boom is threatened by, at its core, a lack of consumer demand and the resulting unstable investment pathways is itself an ironic miniature of the U.S. economy as a whole.
As analysts at Employ America put it, “The losses in a [tech sector] bust will simply be too large and swift to be neatly offset by an imminent and symmetric boom elsewhere. Even as housing and consumer durables ultimately did well following the bust of the 90s tech boom, there was a one- to two-year lag, as it took time for long-term rates to fall and investors to shift their focus.” This is the issue with having only one growth sector in the economy. And without a more holistic industrial policy, we cannot spur any others.
Questions like these ― questions about what comes next ― suggest that the messy details of data center project finance should not be the sole purview of investors. After all, our exposure to the sector only grows more concentrated by the day. More precisely mapping out how capital flows through the sector should help financial policymakers and industrial policy thinkers understand the risks of a market correction. Political leaders should be prepared to tackle the downside distributional challenges raised by the instability of this data center boom ― challenges to consumer wealth, public budgets, and our energy system.
This sparkling sector is no replacement for industrial policy and macroeconomic investment conditions that create broad-based sources of demand growth and prosperity. But in their absence, policymakers can still treat the challenge of a market correction as an opportunity to think ahead about the nation’s industrial future.
With more electric heating in the Northeast comes greater strain on the grid.
The electric grid is built for heat. The days when the system is under the most stress are typically humid summer evenings, when air conditioning is still going full blast, appliances are being turned on as commuters return home, and solar generation is fading, stretching the generation and distribution grid to its limits.
But as home heating and transportation go increasingly electric, more of the country — even some of the chilliest areas — may start to struggle with demand that peaks in the winter.
While summer demand peaks are challenging, there’s at least a vision for how to deal with them without generating excessive greenhouse gas emissions — namely battery storage, which essentially holds excess solar power generated in the afternoon in reserve for the evening. In states with lots of renewables on the grid already, like California and Texas, storage has been helping smooth out and avoid reliability issues on peak demand days.
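The mechanics of that time-shifting are simple enough to sketch with made-up numbers (the figures below are purely illustrative, not from any grid operator):

```python
# Toy illustration with hypothetical numbers: a battery time-shifting surplus
# afternoon solar into the evening peak, when solar fades and demand climbs.
afternoon_surplus_mwh = 1200     # solar generation beyond afternoon demand
evening_deficit_mwh = 1000       # evening demand beyond remaining generation
round_trip_efficiency = 0.9      # rough figure for lithium-ion storage losses

deliverable_mwh = afternoon_surplus_mwh * round_trip_efficiency
covered_mwh = min(deliverable_mwh, evening_deficit_mwh)
print(f"Battery covers {covered_mwh:.0f} of {evening_deficit_mwh} MWh of the evening gap")
```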
The winter challenge is that you can have long periods of cold weather and little sun, stressing every part of the grid. The natural gas production and distribution systems can struggle in the cold with wellheads freezing up and mechanical failure at processing facilities, just as demand for home heating soars, whether provided by piped gas or electricity generated from gas-fired power plants.
In its recent seasonal reliability assessment, the North American Electric Reliability Corporation, a standard-setting body for grid operators, found that “much of North America is again at an elevated risk of having insufficient energy supplies” should it encounter “extreme operating conditions,” i.e. “any prolonged, wide-area cold snaps.”
NERC cited growing electricity demand and the difficulty of operating generators in the winter, especially those relying on natural gas. In 2021, Winter Storm Uri effectively shut down Texas’ grid for several days as generation and distribution of natural gas literally froze up while demand for electric heating soared. Millions of Texans were left exposed to extreme low temperatures, and at least 246 died as a result.
Some parts of the country already experience winter peaks in energy demand, especially places like North Carolina and Oregon, which “have winters that are chilly enough to require some heating, but not so cold that electric heating is rare,” in the words of North Carolina State University professor Jeremiah Johnson. "Not too many Mainers or Michiganders heat their homes with electricity,” he said.
But that might not be true for long.
New England may be cold and dark in the winter, but it’s liberal all year round. That means the region’s constituent states have adopted aggressive climate change and decarbonization goals that will stretch their available renewable resources, especially during the coldest days, weeks, and months.
The region’s existing energy system already struggles with winter. New England’s natural gas system is limited by insufficient pipeline capacity, so during particularly cold days, power plants end up burning oil as natural gas is diverted from generating electricity to heating homes.
New England’s Independent System Operator projects that winter demand will peak at just above 21 gigawatts this year — its all-time winter peak is 22.8 gigawatts, summer is 28.1 — which ISO-NE says the region is well-prepared for, with 31 gigawatts of available capacity. That includes energy from the Vineyard Wind offshore wind project, which is still facing activist opposition, as well as imported hydropower from Quebec.
But going forward, with Massachusetts aiming to reduce emissions 50% by 2030 (though state lawmakers are trying to undo that goal) and reach net-zero emissions by 2050 — and nearly the entire region envisioning at least 80% emissions reductions by 2050 — that winter peak is expected to soar. The non-carbon-emitting energy generation necessary to meet that demand, meanwhile, is still largely unbuilt.
By the mid-2030s, ISO-NE expects its winter peak to surpass its summer peak, with peak demand perhaps reaching as high as 57 gigawatts, more than double the system’s all-time peak load. The last few gigawatts of that load will be tricky — and expensive — to serve. ISO-NE estimates that each gigawatt from 51 to 57 would cost $1.5 billion for transmission expansion alone.
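Taking the grid operator’s own numbers at face value (and reading “each gigawatt from 51 to 57” as six one-gigawatt steps), the implied transmission bill for just the tail end of that peak is striking:

```python
# Arithmetic implied by the ISO-NE figures cited above.
cost_per_gw_bn = 1.5                 # transmission cost per gigawatt from 51 to 57 GW
tail_gw = 57 - 51                    # read as six one-gigawatt steps
print(f"Transmission for the last {tail_gw} GW: ~${tail_gw * cost_per_gw_bn:.0f} billion")

all_time_peak_gw = 28.1              # the summer record cited earlier
print(f"A 57 GW winter peak would be {57 / all_time_peak_gw:.1f}x the all-time peak")
```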
ISO-NE also found that “the battery fleet may be depleted quickly and then struggle to recharge during the winter months,” which is precisely when “batteries may be needed most to fill supply gaps during periods of high demand due to cold weather, as well as periods of low production from wind and solar resources.” Some 600 megawatts of battery storage capacity has come online in the last decade in ISO-NE, and there are state mandates for at least 7 more gigawatts between 2030 and 2033.
There will also be a “continued need for fuel-secure dispatchable resources” through 2050, ISO-NE has found — that is, something to fill the role that natural gas, oil, and even coal play on the coldest days and longest cold stretches of the year.
This could mean “vast quantities of seasonal storage,” like 100-hour batteries, or alternative fuels like synthetic natural gas (produced with a combination of direct air capture and electrolysis, all powered by carbon-free electricity), hydrogen, biodiesel, or renewable diesel. And this is all assuming a steady buildout of renewable power — including over a gigawatt per year of offshore wind capacity added through 2050 — that will be difficult if not impossible to accomplish given the current policy and administrative roadblocks.
While planning for the transmission and generation system of 2050 may be slightly fanciful, especially as the climate policy environment — and the literal environment — are changing rapidly, grid operators in cold regions are worried about the far nearer term.
From 2027 to 2032, ISO-NE analyses “indicate an increasing energy shortfall risk profile,” said ISO-NE planning official Stephen George in a 2024 presentation.
“What keeps me up at night is the winter of 2032,” Richard Dewey, chief executive of the neighboring New York Independent System Operator, said at a 2024 conference. “I don’t know what fills that gap in the year 2032.”