Is Nvidia Still in the Early Innings?
By Daniel Morgan, Synovus Trust Senior Portfolio Manager
With the top tech players (Amazon, Google, Meta and Microsoft) projected to spend collectively up to $185 billion in capital expenditures (CapEx) in 2025, a large portion of those dollars will be dedicated to the AI data center. This is where Nvidia’s (NVDA) core graphics processing unit (GPU) chips are used to accelerate and process artificial intelligence (AI) applications across the network. Nvidia today accounts for more than 70% of AI semiconductor sales to Amazon, Google, Meta and Microsoft.
NVDA recently reported 1Q2025 revenue and earnings per share (EPS) of $26 billion/$6.12 versus a consensus of $24.6 billion/$5.60, with upside from higher Data Center revenue and higher margins. Total revenue grew 262% Year over Year (YoY) versus estimates of 242%, compared to last year’s $7 billion, which was down 13% YoY. Key drivers of growth include increasing adoption and improving availability of new GPU products across cloud, enterprise, internet and sovereign customers, and an increasing mix of software and hardware systems. Data Center revenue (87% of total) was $22.6 billion versus consensus of $21.3 billion and compared to $4.3 billion a year ago, with upside from continued strong demand and improving availability of the Hopper 100 (H100) GPU. Cloud customers were 45% of the Data Center segment total, down from above 50% in the last few quarters, as usage broadens out to enterprise and internet customers. Management called out large “AI factories” being built by Tesla (TSLA) and META, as well as 100 other companies. Sovereign AI clouds are being constructed in many countries that want their own large language models (LLMs) trained on regional data, including Japan, France, Italy and Singapore; these deployments are expected to generate an aggregate $8 billion in revenue in FY2025.
Going into the 1Q2025 print there was much speculation that an “air pocket” might form this summer. This hunch was based on a Financial Times article reporting that Amazon Web Services (AWS), the world's largest cloud services firm, had "fully transitioned" its previous orders for Nvidia's Grace Hopper chip to the newer Blackwell graphics processing units that Nvidia announced last March. That rumor was put to rest when Nvidia guided 2Q2025 revenue of $28 billion +/- 2%, which indicates YoY top-line growth of nearly 110%.
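For readers who want to sanity-check the guidance math, here is a minimal sketch of turning a "midpoint +/- percentage" guide into a dollar range. The helper name and the billions-of-dollars units are illustrative assumptions, not taken from NVDA's release:

```python
# Illustrative sketch: convert "midpoint +/- pct" revenue guidance into
# a (low, high) range. guidance_range is a hypothetical helper;
# figures are in $ billions.

def guidance_range(midpoint: float, pct: float) -> tuple[float, float]:
    """Return (low, high) bounds for guidance of midpoint +/- pct."""
    return midpoint * (1 - pct), midpoint * (1 + pct)

low, high = guidance_range(28.0, 0.02)
print(f"2Q2025 revenue guide: ${low:.2f}B-${high:.2f}B")
# -> 2Q2025 revenue guide: $27.44B-$28.56B
```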
The data center supply chain continues to ramp very rapidly. NVIDIA has been extremely responsive to surging demand, having grown the data center business nearly four times over the course of the past four quarters. Demand for the Hopper GPU, which has been on the market for approximately two years, remains strong, and chips remain on allocation. Hopper GPU supply has increased Quarter over Quarter (QoQ) but has yet to catch up with demand; this dynamic is expected to continue through the year. While wait times and allocation have improved, do not expect demand for the Hopper (H100) to slip in anticipation of the new Blackwell series (B100, B200, GB200). Customers are wary of taking themselves out of line for Hopper allocation for fear that supply will be even more limited for the Blackwell product. Furthermore, customers also expect that if they do receive an allocation of B100 chips, it will most likely not be for the full order amount. NVDA’s Data Center segment now generates more revenue than Intel Corp’s (INTC) and Advanced Micro Devices’ (AMD) combined data center units! NVDA’s data center business is expected to nearly double its revenue in FY2025 to $96.2 billion from $47.5 billion in FY2024, with AI as the major catalyst.
It appears that generative AI spend in the data center cloud space is taking priority over the transition to newer, more expensive platforms, as general cloud CapEx slows. Cloud customers are rapidly shifting budgets toward AI, so this strength is coming at the expense of traditional servers and other legacy areas. There will obviously be areas where new servers are needed, but customers appear to be extending the lives of server CPUs, delaying the transition to newer, more expensive Sapphire Rapids and Genoa platforms to make room for mission-critical investments in AI. For example, CapEx spend rates in the data center space led all the top IaaS cloud units (AWS, Azure and GCP) to post an acceleration in revenue growth rates in 1Q2024. It has been estimated that approximately 65% of NVDA’s overall data center revenue is driven by generative AI, with 90% of that coming from deep learning and 10% from inference.
How Does the Competition Stack Up for AI Chips?

| Company Name | AI Chip | Type |
| --- | --- | --- |
| Nvidia | Hopper H100 (*H800) | GPU |
| Nvidia | Grace Hopper GH200 | GPU |
| Nvidia | Blackwell B100 | GPU |
| Nvidia | Blackwell B200 | GPU |
| Nvidia | Blackwell GB200 | GPU |
| Nvidia | A100 (*A800) | GPU |
| Nvidia | *H20, *L20, *L2 | GPU |
| Nvidia | GeForce RTX 4080/4070 Ti/4070 AI PCs | GPU |
| AMD | MI300X | GPU |
| AMD | MI300A | GPU |
| AMD | Ryzen 8000-series AI PCs | APU |
| Microsoft | Maia/Cobalt | CPU |
| Amazon | Graviton4/Trainium2 | CPU |
| Broadcom/Alphabet | TensorFlow TPU v6 | TPU |
| Alphabet | Axion | CPU |
| Meta | MTIA v1 | ASIC |
| Intel | Falcon Shores | GPU |
| Intel | Gaudi 2/3 | TPU |
| Intel | Arrow Lake AI PCs | CPU |
| Intel | Lunar Lake AI PCs | CPU |
| Marvell Technology | 800G PAM4 DSP | DSP |
| Qualcomm | Cloud AI 100 | GPU |

GPU = Graphics Processing Unit; CPU = Central Processing Unit; TPU = Tensor Processing Unit; DSP = Digital Signal Processor; ASIC = Application-Specific Integrated Circuit; APU = Accelerated Processing Unit; * = China-market version/configuration
Next-generation products are an important driver of order activity, lead times and competition. NVDA recently announced a new AI GPU, the B100 (part of the Blackwell series: B100, B200 and GB200), which will be a substantial upgrade from the H100. The B100 is system-compatible and will be priced more aggressively than initial expectations, with NVDA noting B100 pricing of $30,000-$40,000, representing an estimated 25-30% premium above the current H100 priced at $25,000. The B100 is expected to deliver a three-to-five-times performance boost over the H100. Once NVDA begins shipping the Blackwell chips, overall demand for the Grace Blackwell (GB200) is expected to be strong. NVDA has spent more than a year developing the supply chain for the Blackwell architecture in anticipation of significant demand across its customer base. NVDA is quite motivated to use the B100 to blunt the momentum of competition from AMD, INTC and Marvell (MRVL). Nvidia’s new Blackwell chip should ship by 4Q24.
During the 1Q2025 print NVDA announced a 10-for-1 stock split with a June 6 record date. However, the darkest cloud hanging over NVDA shares is without doubt the valuation. NVDA’s valuation is not for the faint of heart: the stock trades at 62 times trailing earnings, and its price-to-book ratio stands at 56 times, compared to 6.3 times for the benchmark Philadelphia Semiconductor Index (SOX). However, thanks to NVDA’s projected hypergrowth in profits, the stock trades at just 42 times the FY2025 adjusted EPS estimate of $26.76. NVDA shares have traded at a five-year historical average P/E of 70.3 times earnings. So based on these forward EPS projections, the stock does not seem too overvalued!
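The multiples above reduce to simple division. The sketch below uses a hypothetical helper, and the pre-split share price is backed out from the article's own 42x forward multiple on the $26.76 EPS estimate rather than quoted from the market:

```python
# Illustrative valuation arithmetic. The share price is implied by the
# article's 42x forward multiple on $26.76 of FY2025 adjusted EPS;
# it is a derived assumption, not a market quote.

def pe_ratio(price: float, eps: float) -> float:
    """Price-to-earnings multiple: share price divided by earnings per share."""
    return price / eps

forward_eps = 26.76                   # FY2025 adjusted EPS estimate
implied_price = 42 * forward_eps      # ~= $1,124 per share, pre-split

print(f"forward P/E: {pe_ratio(implied_price, forward_eps):.0f}")   # 42
print(f"implied post 10-for-1 split price: ${implied_price / 10:.2f}")
```

The same `pe_ratio` arithmetic applied to trailing earnings of roughly $18 per share would reproduce the 62x trailing multiple the article cites.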
It is generally believed that NVDA’s overall revenue/EPS growth rates might begin to slow into FY2026. Customers will be transitioning to the new Blackwell family of chips, which will replace the Hopper series. Further, profit growth will become more challenging as comparisons to previous “super-size” quarterly results become more daunting. That said, the AI spending boom has only just begun, and NVDA is in the driver’s seat. Companies like AMD, INTC and MRVL are starting to ship their own AI chips to compete with NVDA.
Hyper-scalers Amazon, Google, IBM, Meta and Microsoft are all producing their own AI chips for their own data centers. Will competition from outside chipmakers put a dent in demand for NVDA’s AI chips, or will home-grown chips from the hyper-scalers eventually do so? Only time will tell!
Important disclosure information
Asset allocation and diversification do not ensure against loss. This content is general in nature and does not constitute legal, tax, accounting, financial or investment advice. You are encouraged to consult with competent legal, tax, accounting, financial or investment professionals based on your specific circumstances. We do not make any warranties as to accuracy or completeness of this information, do not endorse any third-party companies, products, or services described here, and take no liability for your use of this information.