Tech Outlook
Competition Heating Up in AI Chips
By Daniel Morgan, Synovus Trust Senior Portfolio Manager
Synovus Trust Company, N.A.
With an estimated 80 percent market share, Nvidia continues to hold the pole position in the AI chip race, but other traditional chip companies are in the pack: Advanced Micro Devices (AMD), Broadcom (AVGO), Intel (INTC) and Marvell Technology (MRVL). Nvidia also faces competition from non-traditional players, with Amazon (Trainium 3/Inferentia 3), Microsoft (Cobalt 100) and Meta (MTIA v1) all producing their own proprietary AI chips.
Amazon's Trainium and Inferentia chips have seen strong adoption. Over the past two years, Amazon has demonstrated success with custom silicon: its Graviton chips (custom Arm-based CPUs) have accounted for more than 50% of new compute instances on AWS.
Broadcom has confirmed a three-year serviceable addressable market (SAM) target for its AI chips of $60-$90 billion. Broadcom (AVGO) is believed to have three existing known ASIC customers (Google's TPU v6, Meta and ByteDance/TikTok), with additional prospects (reportedly Apple) in the design phase. During the last quarterly results, management commented that AVGO had recently added an additional $10 billion order, expected to positively impact next summer's results, which would significantly increase FY2026 projections. This partnership was later confirmed to be with OpenAI, developing a custom XPU semiconductor for model training; the chip is designed for specific applications like ChatGPT. Broadcom closed out FY2024 with AI product revenue at a $14-$15 billion annual run-rate, more than doubling year-over-year (YoY). Further, AVGO has stated that it has added new customers, including Apple, which is thought to be working with Broadcom on a networking technology chip (ASIC), codenamed "Baltra," expected to be ready for mass production by 2026 (intended for internal use only).
AMD's upcoming MI400X chip looks promising, and Advanced Micro Devices can point with confidence to the growth trajectory for datacenter GPUs into 2H25, driven by the MI355X ramp. Oracle's recent MI355X deployment and expanding cloud engagements (including a potential AWS engagement) remain a key focus, as the MI355X builds strong pipeline confidence and progression toward the MI450X series and rack-scale systems.
MRVL is expected to be a major beneficiary of the aggressive spending on generative AI by its cloud customers. Marvell (best known for its close partnerships with AWS and Microsoft) recently indicated that its AI revenues will likely exceed $1.5 billion in FY2025 and are on track to surpass $2.5 billion in FY2026, an implied growth rate of 67%. Marvell's revenues are expected to be driven by strong demand pull for its 800G PAM4 DSP chipsets and its 400ZR DCI solutions. In addition, Marvell's ASIC programs at Microsoft and AWS are progressing well: the Trainium 3 (3nm) program at AWS is on track to ramp in high volumes in FY26, and Microsoft's Maia Gen 2 (3nm) AI ASIC program is also tracking to plan, with a ramp expected in FY26.
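As a quick sanity check on that implied growth rate, here is a minimal sketch using the FY2025/FY2026 projections quoted above (the variable names are illustrative):

```python
# Illustrative check of Marvell's implied AI revenue growth
# (FY2025/FY2026 projections quoted above, in $ billions).
fy2025_ai_rev = 1.5
fy2026_ai_rev = 2.5

growth = (fy2026_ai_rev - fy2025_ai_rev) / fy2025_ai_rev
print(f"Implied YoY growth: {growth:.0%}")  # -> 67%
```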
After many missteps, Intel is finally poised to release "Jaguar Shores," a next-generation AI GPU. Initially conceived as the successor to Falcon Shores, the product has shifted direction and is now being developed as a standalone discrete GPU, engineered specifically for AI inference and high-performance computing applications. The primary goal is to offer a competitive alternative to established market leaders such as Nvidia and AMD within the rapidly growing AI sector.
Intel's pivot away from Falcon Shores came after some of the chipmaker's channel partners communicated that it would take Intel a long time to mount formidable competition to Nvidia's AI chip dominance in data centers.
With off-again, on-again sanctions restricting sales of U.S.-made AI chips into China, many Chinese chip companies and artificial-intelligence developers are building up their own arsenals of homegrown technology. These chipmakers are backed by the Chinese government and are determined to win the AI race against U.S. industry. China remains far from being able to make chips that rival the most advanced U.S. products from the likes of Nvidia, AMD or Marvell. Still, Chinese companies are producing substitutes for Nvidia's H20 chip, the most powerful AI processor permitted to be sold in China.
Recently, Shanghai-based MetaX announced the release of a new series of AI chips known as MXN. The company is also actively developing its next-generation C700 series, signaling its continued commitment to advancing AI chip technology. These developments position MetaX as a notable contender in the growing market for artificial-intelligence hardware. Cambricon Technologies, known as China's "little Nvidia," has emerged as Beijing's most promising new chip challenger with its latest Siyuan 590 AI chip. The stock price doubled on enthusiasm that the less-than-decade-old company could become the leading supplier of AI chips powering China's homegrown AI models, including DeepSeek. Alibaba, known for its Ali-Cloud datacenter and e-commerce businesses, has committed $53 billion over the next three years to develop its own AI chips to complement its highly rated AI model Qwen. The company is not new to the chip game, having released the Hanguang 800 in 2019. The most capable player in Beijing's push is Huawei Technologies, with its Ascend AI chips. Huawei showed off a computing system that integrates 384 Ascend chips; some analysts said the machine, although a power hog, was more powerful on some metrics than Nvidia's top-of-the-line system containing 72 Blackwell chips.
Micron Technology Benefits from Increased AI Demand
Micron Technology (MU) recently reported better-than-expected earnings and revenue, as well as a robust forecast for the current quarter (1Q26). Earnings per share were $3.03 adjusted versus $2.86 expected; revenue was $11.32 billion versus $11.22 billion expected. Micron said revenue in the current period (1Q26) will be about $12.5 billion, versus the $11.94 billion average analyst estimate. The company reported net income of $3.2 billion, or $2.83 per share, versus $887 million, or 79 cents per share, in the year-ago period. Micron shares have nearly doubled so far in 2025. The company makes memory and storage, which are important components for computers, and it has been one of the winners of the AI boom: high-end AI chips, such as those made by Nvidia, require increasing amounts of high-tech memory called high-bandwidth memory (HBM), which Micron makes. MU is benefiting from an improved pricing environment, especially in DRAM, across multiple markets, including AI/datacenter, smartphones and personal computers. MU's HBM mix is also coming in better than expected (mixing up toward higher ASPs) on solid execution of its HBM3E 12-Hi ramp. Overall, given the new HBM/eSSD demand drivers, and coming off one of the worst downturns in the history of the industry, it is highly probable the memory segment recovery will look similar to the 2017-18 upturn, when MU drove eight consecutive quarters of positive EPS revisions, outperforming EPS estimates.
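To put the quarter in perspective, a brief illustrative calculation using the reported figures cited above (the variable names are for illustration only):

```python
# Illustrative check of Micron's reported beats and YoY net-income swing
# (all figures as cited above).
eps_actual, eps_expected = 3.03, 2.86        # adjusted EPS, $
rev_actual, rev_expected = 11.32, 11.22      # quarterly revenue, $ billions
net_income, net_income_prior = 3.2, 0.887    # net income, $ billions

print(f"EPS beat: {eps_actual / eps_expected - 1:.1%}")         # ~5.9%
print(f"Revenue beat: {rev_actual / rev_expected - 1:.1%}")     # ~0.9%
print(f"Net income YoY: {net_income / net_income_prior:.1f}x")  # ~3.6x
```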
AI is very memory-intensive in both training and inference, and MU believes AI memory will be a strong growth sector going forward. AI workloads will utilize high-bandwidth DDR5 (Double Data Rate 5) memory and HBM4 (High Bandwidth Memory), with MU receiving its fair share. Expect demand for HBM to remain strong as large language models (LLMs) continue to grow exponentially in size, with OpenAI developing its next foundational model, GPT-5. MU's peer SK Hynix has projected that the HBM market's compound annual growth rate (CAGR) will be 30% through 2030. Micron is providing its current generation of HBM3E memory for Nvidia's Blackwell GPUs (GB300), while its HBM4 technology is still in development and is expected to be used in Nvidia's next-generation Rubin platform in 2026. The HBM market is on track to grow more than 300% this year, driven by exponential growth in GenAI model complexity and intense competition among GPU/accelerator suppliers. HBM4 ramps starting in FY2026 could drive another 21% YoY uplift in average pricing as the trade ratio increases, while die sizes are estimated to be 15-20% bigger. With an increasing mix of DRAM bits dedicated to HBM, which carries a 5x premium versus traditional DRAM, we estimate a stronger FY2026 pricing environment for MU and other suppliers. HBM is expected to generate $5.295 billion in revenue in FY2025 and is projected to more than double to $10.669 billion in FY2026. Expect AI-driven growth to remain a tailwind into 2025, as smartphone adoption of AI features broadens beyond premium models.
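As a quick check that the FY2026 projection indeed "more than doubles" FY2025, an illustrative calculation using the estimates above:

```python
# Illustrative check of the HBM revenue projection cited above
# (figures in $ billions).
hbm_fy2025 = 5.295
hbm_fy2026 = 10.669

print(f"FY2026 vs FY2025: {hbm_fy2026 / hbm_fy2025:.2f}x")  # -> 2.01x
```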
The HBM4 competitive landscape is dominated by three main players: SK Hynix, Samsung and Micron. SK Hynix currently holds a significant lead, but Samsung and Micron are aggressively competing to close the gap, especially for major AI hardware customers like Nvidia and AMD. The competition centers on product timing, performance, power efficiency and customer qualifications. As the market leader in previous HBM generations (HBM3/HBM3E), SK Hynix has a head start and a strong partnership with Nvidia. SK Hynix has completed internal validation and quality assurance for its HBM4 chips and is preparing for mass production in the second half of 2025. Micron shipped HBM4 samples to customers in the first half of 2025, featuring a bandwidth above 2.0 TB/s and improved power efficiency. It plans to ramp up production in 2026.
MU specializes in making DRAM and NAND memory chips. DRAM is the most common memory chip used in PCs and servers, while NAND chips are the flash memory used in smartphones and USB drives. In previous calls with MU management, the company has highlighted additional uses for DRAM/NAND memory chips beyond PCs/servers/smartphones, including auto/EV, datacenter cloud, gaming GPUs and industrial applications, to name several. Continue to see strong FY2025 pricing tailwinds for both DRAM and NAND, as tighter supply creates upside for 2H25. As more DRAM bits are dedicated to HBM production, expect the higher trade ratio to drive much tighter supply for legacy DRAM products, including DDR5 and LP5, leading to increased prices for MU and stronger gross margins. Additionally, the PC refresh cycle remains in its early stages, potentially offering another avenue for better DRAM pricing in the near term. AI at the edge could drive the need for higher DRAM content in smartphones, PCs, smart glasses, smartwatches and vehicles, while humanoid robots could also drive some demand. For NAND, expect tighter supply following capacity cuts in 2024 to drive better pricing in the short term. DRAM upturns typically last from six to eight quarters, and do not be surprised if the current upturn lasts longer, given the severity of the recent downturn. Investors will be tuning in to see if MU's forecast provides further evidence that the recovery from the recent downturn in memory demand is still on track.
MU stock trades at 3.4x book value (BV), well above its five-year historical average of 2.0x BV. On the earnings front, the valuation appears more palatable, with shares trading at 12x the FY2026 EPS estimate of $12.91 a share, compared to an average P/E multiple of 20x over the past 10 years. So, MU stock is not cheap by many measures. However, with big expectations surrounding the positive impact of AI from DDR5/HBM3 memory demand, the premium seems warranted.
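A short illustrative sketch of the valuation math implied by those multiples (the EPS estimate and P/E multiples come from the discussion above; the share price is derived, not quoted):

```python
# Illustrative valuation math from the multiples cited above.
fy2026_eps = 12.91    # FY2026 EPS estimate, $
forward_pe = 12       # current forward P/E multiple
ten_year_avg_pe = 20  # 10-year average P/E multiple

implied_price = forward_pe * fy2026_eps
print(f"Implied share price: ${implied_price:.0f}")  # ~$155
print(f"Discount to 10-yr avg P/E: {1 - forward_pe / ten_year_avg_pe:.0%}")  # 40%
```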
Important disclosure information
Asset allocation and diversification do not ensure against loss. This content is general in nature and does not constitute legal, tax, accounting, financial or investment advice. You are encouraged to consult with competent legal, tax, accounting, financial or investment professionals based on your specific circumstances. We do not make any warranties as to accuracy or completeness of this information, do not endorse any third-party companies, products, or services described here, and take no liability for your use of this information.