AMD’s Lisa Su is ready to crash Nvidia’s trillion-dollar chips party
After a couple of decades of software eating the world, the “silicon” part of Silicon Valley is back. It turns out it takes hard-core hardware—lots of it—to bring the wonders of generative AI[1] to life, and chipmaker Nvidia[2] has seized the moment with its powerful graphics processors to become the market’s reigning champ.
Demand for Nvidia’s AI-friendly processors is so strong that in May, investors awarded the company a stock market valuation above $1 trillion[3], roughly on par with the GDP of Saudi Arabia[4] last year. Chips[5] certainly have the potential to become as vital as oil in an AI-driven economy—but in the fast-moving tech business, even a market leader can’t sit back and rely on proven reserves to prolong its primacy.
For Jensen Huang, Nvidia’s leather-jacket-wearing CEO, the most serious threat comes from a crosstown chipmaker armed with a unique combination of graphics-processing prowess and a corporate identity forged by years of taking on giants. Led by Lisa Su[6], AMD[7] is aiming to take a sizable chunk of the AI chip market and, as the AI revolution unfolds, even to displace Nvidia as industry leader.
“I think this is an opportunity for us to write the next chapter of the AMD growth story,” Su told Fortune in a mid-September interview. “There are so few companies in the world that have access to the [intellectual property] that we have and the customer set that we have, and the opportunity frankly to really shape how AI is adopted across the world. I feel like we have that opportunity.”
Su has good reason to talk up AMD’s chances in an AI chip market she predicts will be worth $150 billion by 2027. Her company may be best known as the perennial second fiddle to Intel[8] in the market for PC and server microprocessors, but thanks to its 2006 acquisition of ATI[9], a Canadian chipmaker focused on video game accelerators, it’s now the number two player in GPUs—the type of chip especially well suited to training AI models like OpenAI’s GPT-4 and Google’s Bard.
With Nvidia—which by some estimates holds over 90% of the AI training market—struggling to meet demand for its most powerful AI chip, the H100, Su is readying a direct assault with the release of AMD’s rival MI300 this quarter.
“There will certainly be scenarios in 2024 when we would imagine Nvidia GPUs are sold out and customers only have access to AMD, and AMD can win some business that way, just based on availability,” says Morningstar tech sector director Brian Colello.
“The trillion-dollar question for Nvidia’s valuation is twofold: How big is this market, and how dominant will they be in this market?” Colello says. It’s one thing if Nvidia can command 95% of the AI market, relegating AMD to a distant second place. But if it’s a 70/30 split, he says, “that would be very good for AMD.”
As for how that split will turn out, it all comes down to battles over performance, flexibility, and—in this time of supply-chain uncertainty—availability, areas where Su is particularly battle-tested.
Distant relatives, close competitors
While the Ferrari-driving, sometimes brash Huang is running the company he founded, Su—deliberate, plainspoken, and with a reputation as a people person—is entering her 10th year in charge of an even older company that she’s widely credited with rescuing.
An immigrant from Taiwan at the age of 3—she and Huang both hail from the city of Tainan and are indeed distant relatives—Su went on to become a highly respected electrical engineer at IBM[10], where she headed up the emerging products division. After a stint as Freescale Semiconductor’s chief technology officer, Su joined AMD as a senior vice president in 2012, before becoming president and CEO two years later.
Although Su’s initial strategy for saving AMD involved diversifying its product lines beyond the PC market into areas like gaming[11] and high-performance computing, the company’s rivalry with Intel still demanded a lot of focus. That may have been unavoidable, but analysts say it gives Nvidia an advantage today.
Nvidia spent the past decade investing heavily in making it easier for developers to tap into the parallel-processing chops of its GPUs when building their data-chomping applications—through an interface called CUDA—but AMD’s rivalry with Intel in CPUs meant “it hasn’t invested as deeply into AI-based software for its data center GPUs,” said Gartner[12] analyst Alan Priestley.
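For a sense of what that interface buys developers, here is a minimal, illustrative CUDA sketch—not drawn from any Nvidia or AMD product code—of a kernel that adds two arrays, with each element handled by its own GPU thread. It is the same parallelism that AI frameworks exploit at vastly larger scale.

```cuda
// Illustrative sketch, not production code.
#include <cstdio>
#include <cuda_runtime.h>

// Each GPU thread computes one output element; thousands run at once.
__global__ void vecAdd(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;                     // ~1 million elements
    float *a, *b, *c;
    cudaMallocManaged(&a, n * sizeof(float));  // unified memory keeps the sketch short
    cudaMallocManaged(&b, n * sizeof(float));
    cudaMallocManaged(&c, n * sizeof(float));
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    // Launch enough 256-thread blocks to cover all n elements.
    int threads = 256, blocks = (n + threads - 1) / threads;
    vecAdd<<<blocks, threads>>>(a, b, c, n);
    cudaDeviceSynchronize();

    printf("c[0] = %f\n", c[0]);               // prints 3.0
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```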
“Nvidia’s castle walls are its software ecosystem,” said Priestley. Compared with CUDA, AMD’s ROCm programming stack has a reputation for being buggier and a heavier lift for developers.
Gregory Diamos, cofounder of the AI startup Lamini and a former CUDA architect at Nvidia, says he believes AMD is closing the gap. “AMD has been putting hundreds of engineers behind their general-purpose AI initiative,” he says.
But even Su acknowledges there’s work to be done. “I will be the first to say that our hardware is excellent and our software is continuing to get better over time,” she says. “For some of the AI applications that were written in the past, migrating them to AMD does take some work.” However, she argued, ROCm is “very well optimized” for newer AI workloads.
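Some of that work is mechanical: ROCm’s HIP layer mirrors the CUDA runtime nearly call for call, and AMD ships HIPIFY tools that automate most of the renaming. As a rough sketch—again illustrative, not production code—the CUDA example above ports like this:

```cuda
// Illustrative HIP port of the earlier CUDA sketch; compiled with hipcc.
#include <cstdio>
#include <hip/hip_runtime.h>                   // was <cuda_runtime.h>

// Kernel body is unchanged from the CUDA version.
__global__ void vecAdd(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    float *a, *b, *c;
    hipMallocManaged(&a, n * sizeof(float));   // was cudaMallocManaged
    hipMallocManaged(&b, n * sizeof(float));
    hipMallocManaged(&c, n * sizeof(float));
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    int threads = 256, blocks = (n + threads - 1) / threads;
    vecAdd<<<blocks, threads>>>(a, b, c, n);   // same launch syntax under hipcc
    hipDeviceSynchronize();                    // was cudaDeviceSynchronize

    printf("c[0] = %f\n", c[0]);
    hipFree(a); hipFree(b); hipFree(c);        // was cudaFree
    return 0;
}
```

The harder part of the migration Su describes is typically performance tuning and library coverage rather than the renaming itself.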
AI’s next phase
AMD’s big opportunity could come with the natural evolution of AI.
AI companies love GPUs because of their ability to perform many calculations simultaneously. The computing muscle necessary for rendering rich and fast-moving graphical images in video games can just as easily be used to train large language models like OpenAI’s GPT-4[13] on vast amounts of raw data in relatively little time.
But many analysts believe the bigger part of the AI market lies not in training LLMs, but in deploying them: setting up systems to answer the billions of queries that are expected as AI becomes part of everyday existence. This is known as “inference” (because it involves the AI model using its training to infer things about the fresh data it is presented), and whether GPUs remain the go-to chips for inference is an open question.
As so-called hyperscalers like Meta and Google are already demonstrating with their efforts to develop in-house AI chips—Google’s TPUs among them—the big players will ultimately want silicon specialized for delivering AI services efficiently. Many also see a big role for CPUs in the inference market, a shift that would play to AMD’s traditional strengths.
The forthcoming MI300-series data center chip combines a CPU with a GPU. “We actually think we will be the industry leader for inference solutions, because of some of the choices that we’ve made in our architecture,” says Su.
Morningstar’s Colello agrees that the market is evolving—and isn’t counting out AMD nemesis Intel’s own efforts to challenge Nvidia with its new AI processors, Gaudi2 (for training) and Greco (for inference). “There’s naturally plenty of incentive for all of these companies to not be beholden to Nvidia, and to want more competition, and to write the software and transfer the models and take all the steps necessary to ensure a healthy ecosystem that includes Nvidia plus AMD plus perhaps Intel and also their own internal chips that they’re all developing,” he said.
Nvidia, for its part, says that it’s “intensely focused” on inference. The company’s GPU inference performance has increased eightfold over the past year, and Nvidia is “investing in our road map for inference,” a spokesperson told Fortune.
The company also says it recognizes that it won’t be the only game in town forever and that it’s natural for customers to want choice from multiple suppliers. “A competitive ecosystem is positive for the AI space as it accelerates the state of the art more quickly and efficiently, and we certainly encourage and welcome competition,” said an Nvidia spokesperson.
Su isn’t just targeting the data centers that dish up AI capabilities through the cloud—AI will also need to run directly on people’s personal devices and other internet-connected gadgets. “Success,” she told Fortune, “is that we really capture a significant proportion of the AI compute usage.”
Of course, Nvidia is also targeting every one of those segments, and is now even pushing into the high-performance CPU business with a new processor called Grace, which it bundles with the H100 GPU in a “superchip.” “Nvidia still wants to take over the inference market, too, and it’s possible they might,” said Colello. “For any bullish investor on Nvidia, they are probably assuming a clean sweep of every type of AI process runs through an Nvidia chip and/or network.”
But even if Nvidia stays on top, Colello sees AMD’s strong second place as “very enviable and [making] for a great business.” And Su is adamant that her company will make the most of the AI explosion.
“It has become abundantly clear, certainly with the adoption of generative AI in the last year, that this [industry] has the space to grow at an incredibly fast pace,” she said. “We’re looking at 50% compound annual growth rate for the next five-plus years, and there are few markets that do that at this size when you’re talking about tens of billions of dollars.”
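Her two forecasts are consistent with each other: assuming, for illustration, a 2023 base of roughly $30 billion—a figure implied by her numbers rather than stated—four years of 50% compounding lands close to her $150 billion prediction for 2027:

\[
\$30\text{B} \times 1.5^{4} \approx \$152\text{B}
\]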
This article appears in the October/November 2023 issue[14] of Fortune with the headline, “AMD is ready to crash Nvidia’s party.”
References
- [1] generative AI (fortune.com)
- [2] Nvidia (fortune.com)
- [3] valuation above $1 trillion (fortune.com)
- [4] Saudi Arabia (fortune.com)
- [5] Chips (fortune.com)
- [6] Lisa Su (fortune.com)
- [7] AMD (fortune.com)
- [8] Intel (fortune.com)
- [9] ATI (fortune.com)
- [10] IBM (fortune.com)
- [11] gaming (fortune.com)
- [12] Gartner (fortune.com)
- [13] OpenAI’s GPT-4 (fortune.com)
- [14] October/November 2023 issue (fortune.com)