
Micron’s Stock Might Be an Excellent Alternative Play for AI Investors Who Want to Diversify Beyond Nvidia

September 22, 2023

Investors have shown strong interest in artificial intelligence this year, fueling the remarkable performance of Nvidia Corp. shares. But other hardware companies also supply products that go into data centers whenever an Nvidia graphics processing unit is deployed. Next year's rollout of advanced high-bandwidth memory could put Micron Technology Inc. in a favorable position in this dynamic landscape, offering U.S. investors an appealing way to diversify their approach to artificial intelligence.

Nvidia's stock (NVDA, +1.08%) has yielded an impressive return of 189% over the course of this year when considering reinvested dividends. This notable surge can be attributed to the company's robust production and sales of the H100 GPU, initially introduced during the third quarter of Nvidia's fiscal year 2023, which concluded on October 30.

The momentum behind H100 sales continued into the first quarter of Nvidia's fiscal 2024, which ended April 30. The company surprised investors by projecting fiscal second-quarter sales of about $11 billion, a 53% increase from the fiscal first quarter. Even more striking, on August 23 Nvidia reported results for the second quarter of fiscal 2024 (ended July 30) showing sales of $13.51 billion, an 88% sequential increase and an astounding 101% jump from the year-earlier quarter.

Turning our attention to Micron (MU, +1.49%) and its prospects within the AI technology landscape, Nvidia's press release on August 23 announced the forthcoming shipment of the GH200 Grace Hopper Superchip during the current quarter, with a second-generation version featuring HBM3e memory set to launch in the second quarter of calendar year 2024.

HBM, which stands for high-bandwidth memory, plays a pivotal role in contemporary AI technology. As described by Pat Srinivas, an analyst with the Buffalo International Fund, current AI technology is akin to "drinking from a fire hose" due to its immense data requirements for training and inference. Consequently, HBM is a critical component for providing extensive memory capacity while minimizing energy consumption.

As Srinivas observed, Micron is not a participant in the first-generation HBM3 market, but the company has plans to unveil a next-generation HBM3 product early next year. Based in Boise, Idaho, Micron's shares are listed on the Nasdaq.

During Micron's earnings call on June 28, CEO Sanjay Mehrotra expressed optimism about the prospect of "meaningful revenues" from the new HBM3 product in the company's fiscal 2024, which commenced on September 3. Mehrotra further noted that some of Micron's customers had begun sampling the new HBM3 product, and the response had been highly favorable.

Sumit Sadana, Micron's Chief Business Officer, elaborated during the post-earnings call with analysts, emphasizing that the company's forthcoming next-generation HBM3 chips demonstrated superior performance, bandwidth, and power efficiency compared to existing HBM3 products in the market. Moreover, Sadana discussed a "special partnership" with Nvidia aimed at developing a low-power memory product for integration with the Nvidia GH200. This collaboration represents another potential catalyst for Micron as the AI GPU market continues to evolve.

The next-generation HBM3 products scheduled for release next year are referred to as "HBM3e" both by Nvidia and by TrendForce, which published its projections for the 2024 rollout by Micron, SK Hynix, and Samsung on August 1.

Srinivas cited an August 17 report by HPC Wire, estimating an average cost of $30,000 for Nvidia's "flagship H100 GPU (featuring 14,592 CUDA cores, 80GB of HBM3 capacity, and a 5,120-bit memory bus)." Additionally, private research indicated an estimated cost ranging from $1,200 to $1,380 for HBM required for each H100 GPU installation, roughly equating to 4-5% of the total cost.
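As a quick sanity check, the HBM cost share implied by those figures can be worked out directly; the numbers below are simply the ones quoted in the report above, not independent estimates:

```python
# Back-of-the-envelope check of the HBM cost share cited above.
# Figures from the article: ~$30,000 per H100, $1,200-$1,380 of HBM per GPU.
H100_COST = 30_000
HBM_COST_LOW, HBM_COST_HIGH = 1_200, 1_380

low_share = HBM_COST_LOW / H100_COST    # 0.04
high_share = HBM_COST_HIGH / H100_COST  # 0.046

print(f"HBM share of H100 cost: {low_share:.1%} to {high_share:.1%}")
```

The range works out to 4.0%-4.6%, consistent with the "roughly 4-5%" figure cited.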

Nevertheless, Srinivas cautioned that the semiconductor industry is characterized by rapid technological change. He also noted that even if Micron captured a 25% share of the total HBM market, the resulting revenue would amount to only about 10% of its estimated $20 billion in total revenue, a figure closely aligned with the consensus estimate of $20.3 billion for Micron's fiscal 2024. For ease of comparison, subsequent references are based on calendar years.
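Srinivas's caution can be made concrete with the arithmetic his figures imply. Note that the total HBM market size below is derived from the article's numbers (25% share equating to 10% of ~$20 billion in revenue), not a figure the article states directly:

```python
# Rough arithmetic behind Srinivas's caution: a 25% HBM market share
# would still be a modest slice of Micron's total revenue.
micron_revenue = 20_000          # $ millions, the article's round estimate
hbm_share_of_revenue = 0.10      # "about 10% of ... revenue"

micron_hbm_revenue = micron_revenue * hbm_share_of_revenue  # ~$2,000M
# Implied size of the total HBM market, assuming Micron's slice is 25% of it.
implied_hbm_market = micron_hbm_revenue / 0.25              # ~$8,000M

print(f"Micron HBM revenue at 25% share: ${micron_hbm_revenue:,.0f}M")
print(f"Implied total HBM market: ${implied_hbm_market:,.0f}M")
```

In other words, the figures imply a total HBM market of roughly $8 billion, of which Micron's hoped-for quarter would be about $2 billion.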

Mizuho Securities analyst Vijay Rakesh, in a note to clients on August 31, highlighted Micron's aspiration to secure a 25% market share in the HBM segment, echoing the potential significance of this strategic pursuit.

Cathy Hills
Associate Editor
