Yesterday, Nvidia crossed the $1 trillion market cap mark for the first time, making it only the fifth American company to do so. Nvidia follows in the footsteps of Apple, Microsoft, Google, and Amazon (Saudi Aramco is the only non-U.S.-based company to ever cross the $1 trillion threshold).
To state the obvious, this is a big deal. While “$1 trillion” is a notable benchmark only because it’s a round number, just a handful of companies have ever reached this scale. What makes Nvidia unique in this accomplishment is that it has done so in relatively under-the-radar fashion. For the last decade, much has been written about the era of “Big Tech”. In practice, this traditionally referred to the once-codified “FAANG” companies, which sometimes included Microsoft (unfortunately, due to multiple rebrandings, this group of companies has lost its catchy nickname). While Netflix and Meta have struggled in the public markets over the last few years, they haven’t lost much market share in the press: they’re still very top of mind with the general public. Every other company in this group is still on top of the world.
Nvidia, on the other hand, has never been part of these snappy acronyms, and certainly hasn’t been as top of mind for the general public. Recently, however, that’s changed. In a year in which the Nasdaq is down ~11% YTD, Nvidia’s share price has climbed ~180%. For a small publicly traded company, that would be a massive accomplishment. For a company the size of Nvidia, it’s completely unheard of. In what has been a challenging year so far for American technology companies, Nvidia has managed to add ~$650B of market cap over a five-month period: that’s more than the entire market cap of Visa, LVMH, UnitedHealth, or JPMorgan Chase. Excluding Nvidia itself, only seven companies on the planet are worth more than the market cap Nvidia has added this year alone.
Any way you slice it, Nvidia has experienced one of the greatest stock runs in history in the face of a Tech bear market. At the apex of this growth, Nvidia added $184B of market cap in a single day last week, more than the entire market value of most of its direct competitors (below graphic from the WSJ):
In summary, Nvidia’s stock run in 2023 feels like the public markets equivalent of the ’90s Chicago Bulls. But where did this explosion in value come from? What has driven what is arguably the greatest six-month stock run in history? If you’ve been reading any of these posts, you probably already know where this is going: AI.
First things first: what does Nvidia actually do?
Since its founding in 1993, Nvidia has made a name for itself as a designer of semiconductors (colloquially referred to as “chips”), the underlying hardware that powers computers. Critically, Nvidia primarily designs and distributes its chips, and does not manufacture them itself (the vast majority of the world’s cutting-edge chips are manufactured by TSMC, as detailed in this fantastic Forbes piece written by Radical partner Rob Toews).
By outsourcing the production of its own chips, Nvidia has been able to avoid the operational challenges of managing a hardware business. In fact, despite being closely associated with the underlying hardware, Nvidia is extremely profitable: the company has typically enjoyed EBIT margins in the 25-35% range, with most analysts expecting this to expand to 40-50% in 2024. As a comparable, the famously profitable Apple has posted impressively consistent EBIT margins of ~30% over the past few years, and is expected to maintain that level of profitability going forward.
Nvidia is best known for designing high-end GPUs (graphics processing units). When I was growing up, beefy GPUs were most closely associated with powering cutting-edge gaming PCs and video game consoles. As the market for gaming grew, so did demand for Nvidia’s highest-end and most lucrative chips (as such, gaming protagonists frequently graced the packaging of Nvidia’s GPUs). Over the past 5-10 years, GPUs found a new critical use case: powering cryptocurrency mining (though, as the economics of crypto mining have shifted, this demand has waned considerably).
In more recent years, however, GPUs have found a new, incredibly lucrative market: powering AI models (colloquially, providing the “compute” for both the training and serving of models). We have talked ad nauseam about the astonishing compute costs AI companies face. As these insatiable models gobble up more and more data to improve performance, they need massive numbers of GPUs to do so. While there is demand in the market to reduce the per-FLOP cost of compute, and Silicon Valley is abuzz with interest in the emergence of “small models” (as opposed to large models, such as LLMs), the reality is that aggregate demand for compute dramatically outstrips the world’s current supply, and will continue to do so for the foreseeable future.
Simply put, there are not nearly enough chips to go around, and it seems like every company in the world is willing to pay astronomical prices to get their hands on them. Even for those willing to pay well above market prices, many companies still cannot get their hands on high-end GPUs: the hysteria around chip acquisition has turned GPU procurement into a key point of differentiation amongst AI startups.
This explosion of demand for chips to power AI models is the primary driver of Nvidia’s incredible stock run. Management and analysts have repeatedly revised estimates upward as unprecedented demand boosts Nvidia’s bottom line: analysts currently expect Nvidia to more than double EBITDA in 2024.
In the past, we’ve talked about how businesses are valued on trading multiples. Higher-multiple businesses are generally viewed by the market as “better” than lower-multiple businesses (e.g., they’re growing faster, are more defensible, are more capital efficient, etc.). Nvidia is currently trading at an astounding 36x LTM revenue / 91x LTM EBITDA (according to Pitchbook), while its Semiconductor Comps Set (again, as defined by Pitchbook) trades at median multiples of 7x LTM revenue / 22x LTM EBITDA.
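To make the arithmetic behind trading multiples concrete, here is a minimal sketch of how a multiple is computed. The figures below are purely illustrative placeholders, not Nvidia’s (or anyone’s) actual financials:

```python
def trading_multiple(enterprise_value: float, ltm_metric: float) -> float:
    """Compute a trading multiple, e.g. EV / LTM revenue or EV / LTM EBITDA."""
    return enterprise_value / ltm_metric

# Hypothetical company: $900B enterprise value,
# $25B of LTM revenue, $10B of LTM EBITDA
ev = 900e9
ltm_revenue = 25e9
ltm_ebitda = 10e9

print(f"{trading_multiple(ev, ltm_revenue):.0f}x LTM revenue")  # → 36x LTM revenue
print(f"{trading_multiple(ev, ltm_ebitda):.0f}x LTM EBITDA")    # → 90x LTM EBITDA
```

The same division is how the 36x / 91x figures above are derived: the market’s valuation of the business divided by a trailing-twelve-month financial metric.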
As a point of comparison, Apple (another “premium multiple” asset) is “only” trading at 7x LTM revenue / 23x LTM EBITDA. While Apple is obviously not a semiconductor company, it notably trades in line with Nvidia’s primary competitors. Clearly, public markets analysts view Nvidia as more favorably positioned than any of its competitors, or anyone else in Big Tech.
But why is all this demand for AI models impacting Nvidia more than anyone else? As we mentioned above, Nvidia doesn’t actually manufacture its own chips. There have always been other semiconductor designers: why haven’t they capitalized on this moment? Basic business theory states that when one firm experiences outsized profits in an industry, new entrants emerge, supply and competition increase, and industry profits come down. Why isn’t this happening to Nvidia?
Nvidia’s secret sauce
As we discussed a few weeks ago, building with AI is still extremely hard. Despite the explosion in market demand, there are still very few people in the world who can actually work with foundation models, and even for the experts, modern AI models can be challenging beasts to corral. The reality is that the technology is still very new and evolving at a rapid pace: unlike traditional software architectures, AI models haven’t yet benefited from years of time and attention spent making them easier to build and manage (as an aside, many startups have emerged in this “tooling layer” in the last few months, but few have made much of a dent in the process yet).
One of the most technically challenging aspects of working with AI models is managing the actual training runs themselves: it’s very easy for a run to fail for one reason or another, which can be both time consuming and costly. As such, companies place a premium on the resources that are easiest to build and work with. To its credit, Nvidia realized this years ago and invested meaningful capital and engineering resources in designing proprietary software for its chips (including the current king of AI chips, the Nvidia A100) that makes them easier to interact with.
Basically every practitioner we’ve spoken with has noted that Nvidia’s chips are far easier to interact and build with than any other AI chip (primarily due to Nvidia’s proprietary software stack). That, combined with the fact that they have largely outperformed competing chips in head-to-head comparisons, has helped make Nvidia’s GPUs (and in particular, the A100) the industry standard: every AI practitioner learns on A100s, is most comfortable with A100s, and demands A100s. They’re also seen as the most premium offering in the industry. Given the A100’s enduring lead amongst chip offerings, companies are already frothing at the mouth to get their hands on Nvidia’s next-generation offering: the H100, which is expected to have its coming-out party this year. Early reports suggest that the H100 represents a meaningful performance improvement over the A100, which may add to Nvidia’s already considerable lead.
That’s not to say that competitors aren’t trying. Google famously developed its own alternative to Nvidia’s GPUs (dubbed TPUs, or Tensor Processing Units), chips specifically engineered to train and serve AI models. So far, cutting-edge TPUs tend to get fairly close to Nvidia’s A100s in terms of pure performance, but in general TPUs are more difficult to use. Unlike Nvidia’s software stack, Google’s software isn’t as dependable or easy to use, and most AI technologists don’t have experience working with it. That’s not to say it isn’t usable or learnable, but it’s certainly not easy to flip overnight from working with GPUs to TPUs: that process is time intensive, requires significant training, and is generally a headache for those building with AI.
Famously, former semiconductor darling Intel has completely missed the boat on AI, and is now scrambling to make up for years of R&D and hundreds of millions of dollars of spend. While Intel is doing all it can to join the party late, most analysts are skeptical that it can catch up to Nvidia or Google anytime soon.
Pulling it all together: how Nvidia is winning the era of AI
In conclusion, Nvidia’s stock run has clearly been the result of a perfect storm of market conditions. However, that’s not to say the company just got lucky: it’s more the result of several brilliant strategic decisions coming together at the same time. In my mind, some of the most important factors behind the recent stock run are the following:
Spotting a megatrend early and investing aggressively behind it when it was non-obvious
After AlexNet was famously trained on just two Nvidia GPUs in 2012, Nvidia committed hundreds of millions of dollars to the development of AI-specific chips, many years before AI saw widespread adoption
Outsourcing chip production by forging a deep partnership with TSMC, the best chip manufacturer in the business
Unlike rival Intel (which has for years produced its own chips), Nvidia focused on its core competency of chip design and avoided the capital-intensive and time-consuming process of chip manufacturing
Achieving best-in-class performance in a category in which the extra 1% can make a world of difference
In an industry that’s fairly easy to benchmark, performance will always matter. Objective benchmarks are hard to argue with, and the biggest AI models demand the best possible hardware in order to stay competitive
Developing a deeply differentiated and easy-to-use software stack that technologists love
In what is arguably its biggest point of differentiation, Nvidia has developed the most flexible and intuitive software stack for AI training around. Despite every competitor’s efforts to catch up, nobody has been able to gain meaningful ground on Nvidia: this is a deep tech moat that doesn’t appear to be shrinking any time soon