
NVIDIA AI instances costing $2M in power, annually: Liftr Insights data

Liftr Insights runs the calculations for 2,000 NVIDIA cards based on its data covering semiconductors and power

AUSTIN, Texas, Aug. 20, 2024 /PRNewswire/ — Liftr Insights, a pioneer in market intelligence driven by unique data, has demonstrated the ongoing costs of running popular AI workloads on NVIDIA hardware.

$2M in ongoing costs on top of a $33M initial investment in NVIDIA semiconductors, Liftr data shows

NVIDIA has been dominating the news and the markets with its semiconductors. Its latest parts, such as the Hopper H100, are considered essential to meeting the rising demand for AI training and other artificial intelligence workloads. Liftr data show that even earlier models like the A100 remain in high demand.

For example, for a $33M investment in AI accelerator components and $2M in electricity per year, a company could run 1,000 H100s and 1,000 A100s in Dallas, Texas, which combined could deliver in excess of 44.7 FP64 petaflops.

In comparison, Liftr Insights shows that running the same infrastructure in Houston would generate a larger power bill of approximately $2.1M per year. Annual costs would be lower in San Antonio or Austin, at $1.9M and $1.6M, respectively.
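The city-to-city figures above come down to a simple relationship: total card wattage, a facility overhead multiplier, hours per year, and the local electricity rate. The sketch below is a back-of-envelope illustration only, not Liftr Insights' model; the TDP figures are NVIDIA's published SXM specs (H100: 700 W, A100: 400 W), while the electricity rate, the PUE value, and the `estimate_annual_power_cost` helper itself are illustrative assumptions.

```python
# Back-of-envelope annual power cost for a mixed H100/A100 fleet.
# TDPs reflect NVIDIA's published SXM figures (H100: 700 W, A100: 400 W).
# The electricity rate and PUE below are illustrative assumptions,
# not Liftr Insights' actual inputs.

HOURS_PER_YEAR = 8760

def estimate_annual_power_cost(
    card_counts: dict[str, int],
    tdp_watts: dict[str, float],
    rate_per_kwh: float,
    pue: float = 1.3,  # power usage effectiveness: cooling/overhead multiplier
) -> float:
    """Return the estimated annual electricity cost in dollars."""
    # Sum the IT load in kilowatts across all card models.
    it_load_kw = sum(
        count * tdp_watts[model] / 1000.0
        for model, count in card_counts.items()
    )
    # Scale by facility overhead and hours in a year, then price it.
    annual_kwh = it_load_kw * pue * HOURS_PER_YEAR
    return annual_kwh * rate_per_kwh

fleet = {"H100": 1000, "A100": 1000}
tdps = {"H100": 700.0, "A100": 400.0}

# Hypothetical commercial rate of $0.14/kWh:
cost = estimate_annual_power_cost(fleet, tdps, rate_per_kwh=0.14)
print(f"Estimated annual power cost: ${cost:,.0f}")
```

Under these assumed inputs the estimate lands around $1.75M, in the same ballpark as the figures above; varying `rate_per_kwh` is what produces the spread between Dallas, Houston, San Antonio, and Austin.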

“Despite the news of the delay in the Blackwell processors,” says Tab Schadt, CEO of Liftr Insights, “major cloud providers like AWS, Azure, and GCP have been increasing their adoption of the latest NVIDIA semiconductors.”

Liftr Insights, which tracks the top 6 cloud providers and 3 trending providers, shows the adoption growth of the Hopper brand. The H100 is the most widely discussed model within the Hopper series.

In addition to showing common configurations as well as adoption trends by the major consumers, Liftr data can provide deeper insight for data center operators.

“We help our customers understand the impact of these new chips on output and their financials,” says Schadt. “When looking at new AI, it’s more than knowing what’s available. Rather, for specific configurations, it’s understanding the performance, power consumption, and ultimately, the bottom line of ongoing costs.”

About Liftr Insights
Liftr Insights generates reliable market intelligence using unique data, including details about configurations, components, deployment geo, and pricing for:

Server processors: Intel Xeon, AMD EPYC, Aliyun Yitian, AWS Graviton, and Ampere Computing Altra

Datacenter compute accelerators: GPUs, FPGAs, TPUs, and AI chips from NVIDIA, Xilinx, Intel, AMD, AWS, Google, and Qualcomm.

As shown on the Liftr Cloud Regions Map at https://bit.ly/LiftrCloudRegionsMap, among the companies tracked are Amazon Web Services, Microsoft Azure, Alibaba Cloud, Google Cloud, Oracle Cloud, Tencent Cloud, CoreWeave, Lambda, and Vultr, as well as semiconductor vendors AMD, Ampere, Intel, Qualcomm, and NVIDIA. Liftr Insights subject matter experts translate company-specific service provider data into actionable alternative data.

Liftr and the Liftr logo are registered service marks of Liftr Insights. The following are trademarks and/or service marks of Liftr Insights: Liftr Insights, Cloud Components Tracker, Intelligence Compute Tracker, and Liftr Cloud Regions Map. 

The following are registered intellectual property marks, trademarks, or service marks of their respective companies: 

Amazon Web Services
Microsoft Azure
Alibaba Cloud
Google Cloud
Oracle Cloud
Tencent Cloud
CoreWeave
Lambda
Vultr
Intel Corporation
Ampere Computing
Qualcomm
NVIDIA
AMD
ARM 

View original content to download multimedia:https://www.prnewswire.com/news-releases/nvidia-ai-instances-costing-2m-in-power-annually-liftr-insights-data-302226422.html

SOURCE Liftr Insights
