
The Growing Appetite of Artificial Intelligence

9 min read
Michael Kantor
President, Hashgraph Online DAO

In just the past month, two landmark announcements have signaled how fast the AI infrastructure build-out is accelerating.

  • OpenAI + NVIDIA (Sept 22, 2025): a strategic partnership to deploy at least 10 gigawatts of NVIDIA-powered data centers, tied to up to $100 billion in progressive investment.
  • OpenAI + Oracle (Sept 23, 2025): an agreement to add 4.5 gigawatts of new Stargate capacity across five U.S. sites, part of a $300 billion cloud services deal.

Together, that's 14.5 gigawatts of new planned capacity announced in just 30 days. While these facilities won't switch on overnight, the commitments show how central compute has become to the future of AI.

But that's not all. Microsoft announced a $30 billion investment in the UK, with "$15 billion in capital expenditures to build out the UK’s cloud and AI infrastructure." Describing the company's new Fairwater data center, CEO Satya Nadella noted, "Fairwater is a seamless cluster of hundreds of thousands of NVIDIA GB200s, connected by enough fiber to circle the Earth 4.5 times."

This is not a footnote in the AI story. It is one of the main chapters. We are entering a world where control of compute is as strategically important as the algorithms themselves.

What 14.5 GW Actually Means

To understand why this number matters, we need to translate it into something relatable:

  • In 2023, data centers in the United States consumed about 176 terawatt-hours, roughly 4.4 percent of total U.S. electricity usage.
  • Experts estimate that by 2028, data center usage could rise to between 6.7 percent and 12 percent of total U.S. electricity consumption.
  • Globally, the International Energy Agency projects that demand from data centers will more than double by 2030, largely driven by AI workloads.
  • One gigawatt of continuous capacity delivers about 8.76 terawatt-hours over a year. Using the 2023 U.S. residential average of roughly 10,500 kWh per household, that can support about 0.86 million homes. Multiply that by 14.5, and the new build-outs could power approximately 12.4 million households (a quick calculation follows this list).
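
For readers who want to check the arithmetic, here is a minimal back-of-the-envelope sketch in Python. The only inputs are the hours in a year and an assumed per-household average; the 0.86 million homes per gigawatt and 12.4 million total cited above correspond to an average of roughly 10,200 kWh per home, while 10,500 kWh gives a slightly lower but comparable result.

```python
# Back-of-the-envelope: how many U.S. homes could 14.5 GW of continuous capacity serve?
HOURS_PER_YEAR = 8_760  # 24 hours x 365 days

def homes_supported(gigawatts: float, kwh_per_home_per_year: float) -> float:
    """Homes supportable by `gigawatts` of round-the-clock capacity, ignoring losses."""
    kwh_per_year = gigawatts * 1_000_000 * HOURS_PER_YEAR  # 1 GW = 1,000,000 kW
    return kwh_per_year / kwh_per_home_per_year

# The answer is sensitive to the per-household average used:
for kwh in (10_500, 10_200):
    per_gw = homes_supported(1.0, kwh) / 1e6
    total = homes_supported(14.5, kwh) / 1e6
    print(f"{kwh:,} kWh/home: ~{per_gw:.2f}M homes per GW, ~{total:.1f}M homes for 14.5 GW")
# 10,500 kWh/home: ~0.83M homes per GW, ~12.1M homes for 14.5 GW
# 10,200 kWh/home: ~0.86M homes per GW, ~12.5M homes for 14.5 GW
```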

Even accounting for inefficiencies, cooling losses, and backup systems, the scale is massive. The compute race is becoming part of the physical energy infrastructure.

Chart: Residential electricity demand of major U.S. cities versus new AI compute announcements (September 2025). In the interactive version, the residential bar stacks the selected cities; with all 20 cities in the sample included, their combined demand is about 14.54 GW, and the 14.5 GW of new AI compute corresponds to roughly 12.4 million U.S. homes (EIA 2023 average).

Sources: ACS 2023 households; EIA 2023 state average monthly residential kWh. AI compute = announced capacity in the past ~30 days (OpenAI-NVIDIA LOI, Stargate, Oracle, Google, Microsoft). May include overlaps.

The chart shows that the combined household electricity demand of the 20 largest U.S. cities in the sample is about 14.54 GW. That is nearly the same as the 14.5 GW of new AI compute capacity that companies like OpenAI, NVIDIA, Oracle, Google, and Microsoft announced in just the past month. In other words, in only 30 days the industry committed to roughly as much power for AI as it takes to run every home in New York, Los Angeles, Houston, Chicago, and every other major city shown here.

Why the Race Is Accelerating Now

Several factors are converging to push this escalation:

  1. Massive AI workloads. Training, fine-tuning, and inference all demand more compute. Latency and scale requirements push providers to build rather than depend solely on external cloud.
  2. Vertical integration and margin control. Companies that control the full stack - power, hardware, cooling, deployment - can optimize costs, differentiate on performance, and reduce dependency on third parties.
  3. Energy as leverage. Control of power grants a strategic edge. Whoever owns the last mile of energy supply, especially in congested grids, can create choke points or cost premiums.
  4. Geopolitical and regulatory alignment. Governments are starting to treat data infrastructure like national infrastructure. Projects like OpenAI's Stargate, built with Oracle and SoftBank, aim to embed compute capacity into the U.S. backbone.
  5. Efficiency and edge innovation. More efficient architectures and compact models can shift workloads off centralized systems and reduce pressure on mega data centers.

Hidden Costs: Environmental, Local, and Grid Impact

The story of AI isn't just about smarter models or bigger data centers. It's also about the ripple effects those data centers are already having on the power grid and the environment.

Researchers at the Lawrence Berkeley National Laboratory reported that U.S. data centers used about 176 terawatt-hours of electricity in 2023, which works out to roughly 4.4 percent of all electricity consumed in the country. The U.S. Department of Energy expects that share to rise to between 6.7 and 12 percent by 2028.

Looking a bit further out, the Department of Energy and the Electric Power Research Institute estimate that data centers could account for as much as 9 percent of U.S. electricity demand by 2030. Both DOE and Berkeley Lab stress the same conclusion: the decades-long era of flat electricity demand in the United States is ending, and data centers are one of the main reasons why.
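
A quick sanity check helps put those percentages in absolute terms. This is only a sketch: the national total is implied by the 2023 figures above and simply held flat through 2028, which understates likely growth.

```python
# Rough consistency check on the data-center share figures cited above.
DATA_CENTER_TWH_2023 = 176   # Berkeley Lab estimate for U.S. data centers in 2023
SHARE_2023 = 0.044           # ~4.4 percent of total U.S. electricity use

implied_total_twh = DATA_CENTER_TWH_2023 / SHARE_2023  # ~4,000 TWh for the whole U.S.

# If national consumption stayed near that level, the 6.7-12 percent range for 2028
# would translate into this much data-center demand:
low, high = 0.067 * implied_total_twh, 0.12 * implied_total_twh
print(f"Implied U.S. total (2023): ~{implied_total_twh:,.0f} TWh")
print(f"Data centers in 2028 at 6.7-12%: ~{low:,.0f}-{high:,.0f} TWh")
# Roughly 268-480 TWh, a 1.5x to 2.7x jump over 2023 even with zero overall demand growth.
```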

For communities that host these facilities, the impact is already tangible. Local grids are carrying heavier loads, water systems are under pressure from cooling needs, and neighbors live with the constant hum and heat of around-the-clock operations.

What Low-Compute AI Could Change

Not every path forward for AI requires massive server farms. Research groups are already exploring alternatives. Liquid AI, a startup spun out of MIT, is testing ways to run capable agents directly on phones and laptops with its Liquid NANOs models instead of in sprawling clusters. The premise is straightforward: if workloads that now demand hyperscale data centers could run locally, our reliance on centralized compute would drop quickly.

We're also seeing a wave of small language models proving just how far efficiency can be pushed. Systems like Gemma 3, Phi-3, and the newest Mistral releases show that you can keep a wide skill set while trimming the energy cost. When models live closer to the user, energy use falls, transmission losses disappear, and the need for cooling and backup infrastructure shrinks. That shift lowers the barrier to entry, giving smaller teams a real chance to compete on design rather than just raw capital.
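
To make "models living closer to the user" concrete, here is a minimal sketch of local inference using the Hugging Face transformers library. The model id is illustrative (any comparably small instruction-tuned model would do), and this is not an endorsement of a particular runtime.

```python
# Minimal sketch: run a small language model on the local machine instead of a hosted API.
# Assumes `pip install transformers torch`; the model id below is illustrative.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="microsoft/Phi-3-mini-4k-instruct",  # a few-billion-parameter model that fits on a laptop
    device=-1,                                 # -1 = CPU; nothing leaves the device
)

prompt = "In two sentences, explain why on-device inference can reduce data-center demand."
output = generator(prompt, max_new_tokens=96, do_sample=False)
print(output[0]["generated_text"])
```

On modest hardware a quantized build (for example via llama.cpp or a similar runtime) would be the more realistic choice, but the shape of the code stays the same: the prompt, the weights, and the output all remain on the user's device.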

The hard part is making sure these models are still robust enough to handle real-world conditions: unpredictable latency, messy data, and diverse domains. But the momentum behind on-device intelligence is undeniable. Hashgraph Online is part of the Advanced AI Society, which contributes to the Liquid NANOs effort, working to make efficiency and decentralization first-class requirements in AI standards rather than afterthoughts.


Market Signals From Recent Deals

The AI buildout is not only visible in power and data centers. It is moving stock prices too.

  • NVIDIA + OpenAI.
    • Following the September 22 announcement, Reuters reported that NVIDIA would invest up to $100 billion in OpenAI to deploy around 10 gigawatts of NVIDIA-based data centers.
    • That day, NVIDIA's stock jumped as much as 4.4% to a record high before closing the session up about 3%.
    • Analysts noted the announcement reinforced expectations that NVIDIA will continue to dominate the supply of advanced compute infrastructure.
  • Google + Cipher Mining.
    • Cipher's stock spiked intraday by nearly 20% and closed up about 5%.
    • Coverage pointed to the move as evidence that hyperscalers are now treating energy access and infrastructure partnerships as strategic necessities.

Google is taking a similar approach. In May 2025, it acquired a 5.4 percent stake in Cipher Mining, effectively positioning the Bitcoin miner as a supplier of surplus power and infrastructure for Google's AI workloads (Cointelegraph, 2025). The deal makes it clear how seriously hyperscalers are treating energy: controlling or partnering with power-heavy operators is becoming part of the strategy.

Put together, these moves show that large-model training is not only a technical race but also a capital markets contest. The winners will be the ones who secure the cheapest, cleanest energy and the best colocated infrastructure.

Why This Is a Line in the Sand

The current moment is more than a capacity war. It is a crossroads for power, sovereignty, and agency:

  • Whoever controls compute infrastructure holds a veto on innovation access.
  • Centralization of compute means data often funnels through centralized tollbooths, amplifying surveillance risk.
  • The social and environmental burden may fall disproportionately on communities where data centers cluster, often with little local return.
  • Nations that master energy and compute together gain strategic leverage in AI.

How This Aligns With Hashgraph Online's Mission

At Hashgraph Online, we focus on agent architecture, identity, interoperability, and decentralized standards. The compute war is deeply relevant to that work:

  • If compute remains concentrated, agents must conform to the dominant infrastructure and decentralization becomes much harder.
  • Standard protocols, identity systems, and modular agent designs can help break the lock-in that massive compute holders may try to enforce.
  • Our vision is that agents should be deployable across many compute environments - local, cloud, and hybrid - rather than being forced into one monolithic stack.

We do not claim to have all the answers, but we believe that who controls compute is as critical as what the AI does.

What Builders, Policymakers, and the Community Should Do

  1. Demand transparency. Require disclosure of power usage effectiveness, energy sourcing, grid impact, and local environmental assessments.
  2. Support efficient architectures. Fund research in sparsity, quantization, compact models, and hardware plus software co-design.
  3. Design modular systems. Split workloads between local and remote compute so centralized systems are optional rather than mandatory (a sketch of this local-first pattern follows this list).
  4. Push for regional regulation and environmental review. New compute deployments should be assessed for grid and local impact, emissions, water use, and local benefits.
  5. Promote standards and interoperability. Encourage open identity, agent messaging protocols, and auditable logs as guardrails that prevent dominance through compute control.
  6. Foster edge ecosystems. Encourage deployment of AI that works well locally so control does not default to centralized actors.
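
To make the "design modular systems" point concrete, here is a hypothetical sketch of a local-first router: serve a request with a small on-device model when it fits, and escalate to a remote endpoint only when it does not. The function bodies, names, and threshold are illustrative placeholders, not a reference implementation.

```python
# Hypothetical sketch of a local-first workload router (item 3 above).
from dataclasses import dataclass

@dataclass
class Task:
    prompt: str
    context_tokens: int              # rough size of the job
    needs_heavy_tools: bool = False  # e.g. long-horizon planning or large retrieval

LOCAL_CONTEXT_LIMIT = 4_096          # what the on-device model can comfortably handle

def run_local_model(task: Task) -> str:
    # Placeholder for a small quantized model running on the user's device.
    return f"[local] handled: {task.prompt[:40]}"

def call_cloud_endpoint(task: Task) -> str:
    # Placeholder for centralized compute, used only when strictly necessary.
    return f"[cloud] handled: {task.prompt[:40]}"

def route(task: Task) -> str:
    """Prefer on-device inference; fall back to remote compute only when required."""
    if task.context_tokens <= LOCAL_CONTEXT_LIMIT and not task.needs_heavy_tools:
        return run_local_model(task)
    return call_cloud_endpoint(task)

print(route(Task("Summarize this short paragraph.", context_tokens=800)))
print(route(Task("Analyze this 300-page filing.", context_tokens=90_000)))
```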

A Pivotal Moment in Computing History

The compute war unfolding right now is not just a contest for infrastructure. It is a contest for who owns the next era of intelligence. The difference between whose wires power your agent and whose policies govern your compute matters far more than we often admit. The decisions we make today about transparency, efficiency, and decentralization will shape the future of AI more than any single model release.