NVIDIA Corporation (NASDAQ:NVDA) Q4 2023 Earnings Call dated Feb. 22, 2023.
Corporate Participants:
Simona Jankowski — Vice President, Investor Relations
Jensen Huang — Founder, President and Chief Executive Officer
Colette Kress — Executive Vice President and Chief Financial Officer
Analysts:
Aaron Rakers — Wells Fargo — Analyst
Vivek Arya — Bank of America Merrill Lynch — Analyst
C.J. Muse — Evercore — Analyst
Matt Ramsay — Cowen — Analyst
Timothy Arcuri — UBS — Analyst
Stacy Rasgon — Bernstein — Analyst
Mark Lipacis — Jefferies & Co. — Analyst
Atif Malik — Citi — Analyst
Joseph Moore — Morgan Stanley — Analyst
Presentation:
Operator
Good afternoon. At this time, I would like to welcome everyone to the NVIDIA fourth quarter earnings call. [Operator Instructions].
Thank you. Simona Jankowski, you may begin your conference.
Simona Jankowski — Vice President, Investor Relations
Thank you. Good afternoon, everyone, and welcome to NVIDIA’s conference call for the fourth quarter of fiscal 2023. With me today from NVIDIA are Jensen Huang, President and Chief Executive Officer, and Colette Kress, Executive Vice President and Chief Financial Officer.
I would like to remind you that our call is being webcast live on NVIDIA’s Investor Relations website. The webcast will be available for replay until the conference call to discuss the financial results for the first quarter fiscal 2024. The content of today’s call is NVIDIA’s property. It can’t be reproduced or transcribed without our prior written consent.
During this call we may make forward-looking statements based on current expectations. These are subject to a number of significant risks and uncertainties, and our actual results may differ materially. For a discussion of factors that could affect our future financial results and business, please refer to the disclosure in today’s earnings release, our most recent Forms 10-K and 10-Q and the reports that we may file on Form 8-K with the Securities and Exchange Commission.
All our statements are made as of today, February 22, 2023, based on information currently available to us. Except as required by law, we assume no obligation to update any such statements.
During this call we will discuss non-GAAP financial measures. You can find a reconciliation of these non-GAAP financial measures to GAAP financial measures in our CFO commentary, which is posted on our website. With that let me turn the call over to Colette.
Colette Kress — Executive Vice President and Chief Financial Officer
Thank you, Simona. Q4 revenue was $6.05 billion, up 2% sequentially and down 21% year-on-year. Full year revenue was $27 billion, flat from the prior year. Starting with data center, revenue was $3.62 billion, down 6% sequentially and up 11% year-on-year. Fiscal year revenue was $15 billion, up 41%. Hyperscale customer revenue posted strong sequential growth, though it fell short of our expectations, as some cloud service providers paused at the end of the year to recalibrate their build plans. Though we generally see tightening that reflects overall macroeconomic uncertainty, we believe this is a timing issue, as end-market demand for GPUs and AI infrastructure is strong.
Networking grew, but a bit less than we expected, on softer demand for general-purpose CPU infrastructure. The total data center sequential revenue decline was driven by lower sales in China, which was largely in line with our expectations, reflecting COVID and other domestic issues. With cloud adoption continuing to grow, we are serving an expanding list of fast-growing cloud service providers, including Oracle and GPU-specialized CSPs. Revenue growth from CSP customers last year significantly outpaced that of data center as a whole, as more enterprise customers moved to a cloud-first approach.
On a trailing four-quarter basis, CSP customers drove about 40% of our data center revenue. Adoption of our new flagship H100 data center GPU is strong. In just the second quarter of its ramp, H100 revenue was already much higher than that of A100, which declined sequentially. This is a testament to the exceptional performance of the H100, which is as much as 9x faster than the A100 for training and up to 30x faster for inference of transformer-based large language models. The Transformer Engine of the H100 arrived just in time to serve the development and scale-out of inference of large language models.
AI adoption is at an inflection point. OpenAI’s ChatGPT has captured interest worldwide, allowing people to experience AI firsthand and showing what’s possible with generative AI. These new types of neural network models can improve productivity in a wide range of tasks, whether generating text like marketing copy, summarizing documents like [Technical Issues], creating images for ads or video games, or answering customer questions. Generative AI applications will help almost every industry do more faster.
Generative large language models with over 100 billion parameters are the most advanced neural networks in today’s world. NVIDIA’s expertise spans the supercomputers, algorithms, data processing and training methods that can bring these capabilities to enterprises. We look forward to helping customers with generative AI opportunities. In addition to working with every major hyperscale cloud provider, we are engaged with many consumer Internet companies, enterprises and startups. The opportunity is significant and driving strong growth in data center that will accelerate through the year.
During the quarter we made notable announcements in the financial services sector, one of our largest industry verticals. We announced a partnership with Deutsche Bank to accelerate the use of AI and machine learning in financial services. Together, we are developing a range of applications, including virtual customer service agents, speech AI, fraud detection and bank process automation, leveraging NVIDIA’s full computing stack both on-premises and in the cloud, including NVIDIA AI Enterprise software.
We also announced that NVIDIA captured leading results for AI inference in a key financial services industry benchmark for applications such as asset price discovery. In networking, we see growing demand for our latest-generation InfiniBand and HPC-optimized Ethernet platforms, driven by AI. Generative AI foundation model sizes continue to grow at exponential rates, driving the need for high-performance networking to scale out multi-node accelerated workloads.
Delivering unmatched performance, latency and in-network computing capabilities, InfiniBand is the clear choice for power-efficient, cloud-scale generative AI. For smaller-scale deployments, NVIDIA is bringing its full accelerated stack expertise and integrating it with the world’s most advanced high-performance Ethernet fabrics. In the quarter, InfiniBand led our growth as our Quantum-2 400 gigabit per second platform is off to a great start, driven by demand across cloud, enterprise and supercomputing customers.
In Ethernet, our 400 gigabit per second Spectrum-4 networking platform is gaining momentum as customers transition to higher speeds, next-generation adapters and switches. We remain focused on expanding our software and services. We released version 3.0 of NVIDIA AI Enterprise with support for more than 50 NVIDIA AI frameworks and pre-trained models and new workflows for contact center intelligent virtual assistants, audio transcription and cybersecurity. Upcoming offerings include our NeMo and BioNeMo large language model services, which are currently in early access with customers.
Now let me turn it over to Jensen to talk a bit more about our software and cloud adoption.
Jensen Huang — Founder, President and Chief Executive Officer
Thanks, Colette. The accumulation of technology breakthroughs has brought AI to an inflection point. Generative AI’s versatility and capability have triggered a sense of urgency at enterprises around the world to develop and deploy AI strategies. Yet the AI supercomputer infrastructure, model algorithms, data processing and training techniques remain an insurmountable obstacle for most.
Today, I want to share with you the next level of our business model, to help put AI within reach of every enterprise customer. We are partnering with major cloud service providers to offer NVIDIA AI cloud services, offered directly by NVIDIA and through our network of go-to-market partners, and hosted within the world’s largest clouds.
NVIDIA AI-as-a-Service offers enterprises easy access to the world’s most advanced AI platform, while remaining close to the storage, networking, security and cloud services offered by the world’s most advanced clouds. Customers can engage NVIDIA AI cloud services at the AI supercomputer, acceleration library software, or pre-trained AI model layers. NVIDIA DGX is an AI supercomputer and the blueprint of AI factories being built around the world. AI supercomputers are hard and time-consuming to build.
Today we are announcing NVIDIA DGX Cloud, the fastest and easiest way to have your own DGX AI supercomputer: just open your browser. NVIDIA DGX Cloud is already available through Oracle Cloud Infrastructure, with Microsoft Azure, Google GCP and others on the way. At the AI platform software layer, customers can access NVIDIA AI Enterprise for training and deploying large language models or other AI workloads.
And at the pre-trained generative AI model layer, we will be offering NeMo and BioNeMo, customizable AI models, to enterprise customers who want to build proprietary generative AI models and services for their businesses. With our new business model, customers can engage NVIDIA’s full scale of AI computing across their private and any public cloud. We will share more details about NVIDIA AI cloud services at our upcoming GTC, so be sure to tune in.
Now let me turn it back to Colette on gaming.
Colette Kress — Executive Vice President and Chief Financial Officer
Thanks, Jensen. Gaming revenue of $1.83 billion was up 16% sequentially and down 46% from a year ago. Fiscal year revenue of $9.07 billion was down 27%. Sequential growth was driven by the strong reception of our 40 Series GeForce RTX GPUs, based on the Ada Lovelace architecture. The year-on-year decline reflects the impact of the channel inventory correction, which is largely behind us. Demand in the seasonally strong fourth quarter was solid in most regions. While China was somewhat impacted by disruptions related to COVID, we are encouraged by the early signs of recovery in that market.
Gamers are responding enthusiastically to the new RTX 4090, 4080 and 4070 Ti desktop GPUs, with many retail and online outlets quickly selling out of stock. The flagship 4090 has quickly shot up in popularity on Steam, claiming the top spot for the Ada architecture, reflecting gamers’ desire for high-performance graphics. Earlier this month, the first phase of gaming laptops based on the Ada architecture reached retail shelves, delivering NVIDIA’s largest-ever generational leap in performance and power efficiency. For the first time, we are bringing enthusiast-class GPU performance to laptops as slim as 14 inches, a fast-growing segment previously limited to basic tasks and apps.
In another first, we are bringing the 90-class GPUs, our most performant models, to laptops, thanks to the power efficiency of our 5th-generation Max-Q technology. All in, RTX 40 Series GPUs will power over 170 gaming and creator laptops, setting up for a great [indecipherable]. There are now over 400 games and applications supporting NVIDIA’s RTX technologies for real-time ray tracing and AI-powered graphics. The Ada architecture features DLSS 3, our third-generation AI-powered graphics technology, which massively boosts performance. One of the most advanced games, Cyberpunk 2077, recently added DLSS 3, enabling a 3-4x boost in frame-rate performance at 4K resolution.
Our GeForce NOW cloud gaming service continues to expand in multiple dimensions: users, titles and performance. It now has more than 25 million members in over 100 countries. Last month, it enabled RTX 4080 graphics horsepower in the new high-performance Ultimate membership tier. Ultimate members can stream at up to 240 frames per second from the cloud with full ray tracing and DLSS 3. And just yesterday, we made an important announcement with Microsoft. We agreed to a 10-year partnership to bring to GeForce NOW Microsoft’s lineup of Xbox PC games, which includes blockbusters like Minecraft, Halo and Flight Simulator. And upon the close of Microsoft’s Activision acquisition, we will add titles like Call of Duty and Overwatch.
Moving to pro visualization, revenue of $226 million was up 13% sequentially and down 65% from a year ago. Fiscal year revenue of $1.54 billion was down 27%. Sequential growth was driven by desktop workstations, with strength in the automotive and manufacturing industrial verticals. The year-on-year decline reflects the impact of the channel inventory correction, which we expect to end in the first half of the year.
Interest in NVIDIA Omniverse continues to build, with almost 300,000 downloads so far and 185 connectors to third-party design applications. The latest release of Omniverse has a number of features and enhancements, including support for 4K real-time path tracing, Omniverse Search for AI-powered search through large untagged 3D databases, and Omniverse Cloud containers for AWS.
Let’s move to automotive. Revenue was a record $294 million, up 17% sequentially and up 135% from a year ago. Sequential growth was driven primarily by AI automotive solutions, with program ramps at both electric vehicle and traditional OEM customers helping drive this growth. Fiscal year revenue of $903 million was up 60%. At CES, we announced a strategic partnership with Foxconn to develop automated and autonomous vehicle platforms. This partnership will provide scale for volume manufacturing to meet growing demand for the NVIDIA DRIVE platform. Foxconn will use NVIDIA DRIVE Hyperion compute and sensor architecture for its electric vehicles.
Foxconn will be a tier-one manufacturer producing electronic control units based on NVIDIA DRIVE Orin for global automotive OEMs. We also reached an important milestone this quarter. The NVIDIA DRIVE operating system received safety certification from TÜV SÜD, one of the most experienced and rigorous assessment bodies in the automotive industry. With industry-leading performance and functional safety, our platform meets the highest standards required for autonomous transportation.
Moving to the rest of the P&L, GAAP gross margin was 63.3% and non-GAAP gross margin was 66.1%. Fiscal year GAAP gross margin was 56.9% and non-GAAP gross margin was 59.2%. Year-on-year, Q4 GAAP operating expenses were up 21% and non-GAAP operating expenses were up 23%, primarily due to higher compensation and data center infrastructure expenses. Sequentially, GAAP operating expenses were flat and non-GAAP operating expenses were down 1%. We plan to keep them relatively flat at this level over the coming quarters.
Full year GAAP operating expenses were up 15% and non-GAAP operating expenses were up 31%. We returned $1.15 billion to shareholders in the form of share repurchases and cash dividends. At the end of Q4, we had approximately $7 billion remaining under our share repurchase authorization through December 2023.
Let me turn to the outlook for the first quarter of fiscal 2024. We expect sequential growth to be driven by each of our four major market platforms, led by strong growth in data center and gaming. Revenue is expected to be $6.5 billion, plus or minus 2%. GAAP and non-GAAP gross margins are expected to be 64.1% and 66.5%, respectively, plus or minus 50 basis points.
GAAP operating expenses are expected to be approximately $2.53 billion. Non-GAAP operating expenses are expected to be approximately $1.78 billion. GAAP and non-GAAP other income and expenses are expected to be an income of approximately $60 million excluding gains and losses of non-affiliated investments. GAAP and non-GAAP tax rates are expected to be 13% plus or minus 1% excluding any discrete items.
Capital expenditures are expected to be approximately $350 million to $400 million for the first quarter. And in the range of $1.1 billion to $1.3 billion for the full fiscal year 2024. Further financial details are included in the CFO commentary and other information available on our IR website.
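As a back-of-the-envelope check on how the pieces of this outlook fit together, here is an editor's sketch using the midpoint guidance figures quoted above; all derived numbers are illustrative arithmetic, not company guidance.

```python
# Editor's sketch: implied Q1 FY'24 non-GAAP profitability at the guidance
# midpoints quoted above: revenue $6.5B, 66.5% non-GAAP gross margin,
# $1.78B non-GAAP opex, ~$60M other income, 13% tax rate.
revenue = 6.50e9
gross_margin = 0.665
opex = 1.78e9
other_income = 0.06e9
tax_rate = 0.13

gross_profit = revenue * gross_margin            # ~$4.32B
operating_income = gross_profit - opex           # ~$2.54B
pretax_income = operating_income + other_income
net_income = pretax_income * (1 - tax_rate)

print(f"implied non-GAAP net income: ${net_income / 1e9:.2f}B")
```

The point of the sketch is simply that the guidance items compose line by line: margin applied to revenue, less opex, plus other income, taxed at the guided rate.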
In closing, let me highlight upcoming events for the financial community. We will be attending the Morgan Stanley Technology Conference on March 6 in San Francisco and the Cowen Healthcare Conference on March 7 in Boston. We will also host GTC virtually which as you know is kicking off on March 21. Our earnings call to discuss the results of our first quarter of fiscal year ’24 is scheduled for Wednesday, May 24.
Now we will open up the call for questions. Operator, would you please poll for questions.
Questions and Answers:
Operator
[Operator Instructions]. Your first question comes from the line of Aaron Rakers with Wells Fargo. Your line is now open.
Aaron Rakers — Wells Fargo — Analyst
Yeah, thanks for taking the question. Clearly on this call, a key focal point is going to be the monetization of your software and cloud strategy. As we look at it, I think the enterprise AI software suite is priced at around $6,000 per CPU socket, and I think you’ve got pricing metrics a little bit higher for the cloud consumption model. I’m just curious, Colette, how do we start to think about that monetization contribution to the company’s business model over the next couple of quarters, relative to, I think in the past, you’ve talked about a couple of hundred million dollars or so? Just curious if you can unpack that a little bit.
Colette Kress — Executive Vice President and Chief Financial Officer
So I’ll start and then turn it over to Jensen to talk more, because I believe this will be a great topic of discussion also at our GTC. In terms of our software, we continue to see growth. Even in our Q4 results, we made quite good progress in both working with our partners, onboarding more partners and increasing our software. You are correct, we’ve talked about our software revenue being in the hundreds of millions, and it’s getting even stronger each day; Q4 was probably a record level in terms of our software revenue.
But there’s more to unpack there, and I’m going to turn it over to Jensen.
Jensen Huang — Founder, President and Chief Executive Officer
Yeah, first of all, taking a step back, NVIDIA AI is essentially the operating system of AI systems today. It starts from data processing, to learning, to training, to validation, to inference. And so this body of software is completely accelerated. It runs in every cloud, it runs on-prem. And it supports every framework, every model that we know of, and it’s accelerated everywhere.
By using NVIDIA AI, your entire machine-learning operation is more efficient, and it is more cost-effective. You save money by using accelerated software. Our announcement today, putting NVIDIA’s infrastructure in place and having it hosted within the world’s leading cloud service providers, accelerates enterprises’ ability to utilize NVIDIA AI Enterprise. It accelerates people’s adoption of this machine-learning pipeline, which is not for the faint of heart. It is a very extensive body of software, and it is not deployed broadly in enterprises. But we believe that by hosting everything in the cloud, from the infrastructure through the operating system software all the way through pre-trained models, we can accelerate the adoption of generative AI in enterprises.
And so we’re excited about this new extended part of our business model. We really believe that it will accelerate the adoption of software.
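For a rough sense of scale behind the per-socket pricing raised in the question, here is an editor's sketch. Both inputs are assumptions: the $6,000 figure is the analyst's number, and the $200 million revenue level is a hypothetical stand-in for "a couple of hundred million", neither confirmed on the call.

```python
# Editor's sketch: how many licensed CPU sockets a given software revenue
# level would imply at the analyst's quoted price point. Illustrative only.
price_per_socket = 6_000     # USD per CPU socket, per the analyst's question
software_revenue = 200e6     # hypothetical "couple of hundred million" dollars

implied_sockets = software_revenue / price_per_socket
print(f"~{implied_sockets:,.0f} licensed CPU sockets")
```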
Operator
Your next question comes from the line of Vivek Arya with Bank of America. Your line is now open.
Vivek Arya — Bank of America Merrill Lynch — Analyst
Thank you. Just wanted to clarify, Colette, if you meant data center could grow on a year-on-year basis also in Q1? And then, Jensen, my main question and a couple of smaller related ones: if the computing intensity for generative AI is very high, does it limit the market size to just a handful of hyperscalers? And on the other extreme, if the market gets very large, then doesn’t that attract more competition for NVIDIA from cloud ASICs or other accelerator options that are out there in the market?
Colette Kress — Executive Vice President and Chief Financial Officer
Hi, Vivek. Thanks for the question. First, talking about our data center guidance provided for Q1: we do expect strong sequential growth in our data center, and we are also expecting growth year-over-year for our data center. We actually expect a great year, with year-over-year growth in data center probably accelerating past Q1.
Jensen Huang — Founder, President and Chief Executive Officer
Large language models are called large because they are quite large. However, remember that we’ve accelerated and advanced AI processing by a million-x over the last decade. Moore’s Law, in its best days, would have delivered 100x in a decade. By coming up with new processors, new systems, new interconnects, new frameworks and algorithms, and working with data scientists and AI researchers on new models, across that entire span we’ve made large language model processing a million times faster.
What would have taken a couple of months in the beginning now happens in about 10 days. And of course, you still need a large infrastructure. And even for the large infrastructure, we’re introducing Hopper, with its Transformer Engine, its new NVLink switches and its new InfiniBand 400 gigabits per second data rates. We are able to take another leap in the processing of large language models.
And so, I think by putting NVIDIA’s DGX supercomputers into the cloud with NVIDIA DGX Cloud, we’re going to democratize access to this infrastructure, with accelerated training capabilities, and really make this technology and this capability quite accessible. So that’s one thought.
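The million-x versus Moore's Law comparison above can be sanity-checked with a couple of lines of arithmetic. This is an editor's sketch; the 18-month doubling cadence used for Moore's Law is an assumed convention, not a figure from the call.

```python
# Editor's sketch: compounding rates behind "a million-x in a decade" versus
# Moore's Law at its best (assumed here as ~2x every 18 months).
million_x_per_year = 1e6 ** (1 / 10)       # ~3.98x per year, full-stack gains
moores_law_per_year = 2 ** (1 / 1.5)       # ~1.59x per year, transistors alone
moores_law_decade = moores_law_per_year ** 10   # roughly 100x over ten years

print(f"full stack: ~{million_x_per_year:.2f}x/yr vs "
      f"Moore's Law: ~{moores_law_per_year:.2f}x/yr "
      f"(~{moores_law_decade:.0f}x per decade)")
```

The comparison makes the claim concrete: a million-x per decade is roughly a 4x annual compounding rate, versus about 1.6x per year from process scaling alone.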
The second is that the number of large language models or foundation models that have to be developed is quite large. Different countries with different cultures have different bodies of knowledge. Different fields, different domains, whether it’s imaging or biology or physics: each one of them needs its own domain foundation models. With large language models, of course, we now have a prior that can be used to accelerate the development of all these other fields, which is really quite exciting.
The other thing to remember is that a number of companies in the world have their own proprietary data. The most valuable data in the world is proprietary. It belongs to the company; it’s inside their company, and it will never leave the company. And that body of data will also be harnessed to train new AI models for the very first time.
And so our strategy and our goal is to put the DGX infrastructure in the cloud, so that we can make this capability available to every enterprise, every company in the world, who would like to use their proprietary data to create proprietary models.
The second thing, about competition: we’ve had competition for a long time. Our approach, our computing architecture, as you know, is quite different on several dimensions. Number one, it is universal, meaning you can use it for training, you can use it for inference, you can use it for models of all different types. It supports every framework. It supports every cloud. It’s everywhere: cloud to private cloud, cloud to on-prem, all the way out to the edge. It can be in an autonomous system.
This one architecture allows developers to develop their AI models and deploy them everywhere. The second very large idea is that no AI is in itself an application. There’s a pre-processing part and a post-processing part that turn it into an application or service. Most people don’t talk about the pre- and post-processing because it’s maybe not as sexy and not as interesting. However, it turns out that pre-processing and post-processing oftentimes consume half or two-thirds of the overall workload.
And so by accelerating the entire end-to-end pipeline, from data ingestion and data processing all the way through pre-processing and post-processing, we’re able to accelerate the entire pipeline versus just accelerating half of the pipeline. The limit to the speed-up, even if you’re infinitely fast, if you only accelerate half of the workload, is 2x. Whereas if you accelerate the entire workload, you can accelerate the workload maybe 10, 20, 50 times faster, which is the reason why when you hear about NVIDIA accelerating applications you routinely hear 10x, 20x, 50x speed-ups.
And the reason for that is because we accelerate not just the deep learning part of it, but use CUDA to accelerate everything from end to end. And so I think the universality of our accelerated computing platform, the fact that we’re in every cloud, the fact that we go from cloud to edge, makes our architecture really quite accessible and very differentiated. And most importantly, to all the service providers, because the utilization is so high (you can use it to accelerate the end-to-end workload and get such good throughput), our architecture has the lowest operating cost.
The comparison is not even close. So those are the two answers.
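The speed-up limit described above is Amdahl's law: if only part of a workload is accelerated, the unaccelerated remainder caps the overall gain. A minimal editor's sketch of the arithmetic (the function name and scenario figures are illustrative):

```python
# Editor's sketch of Amdahl's law: overall speed-up when a fraction p of a
# workload is accelerated by a factor s.
def overall_speedup(p: float, s: float) -> float:
    """Whole-workload speed-up when fraction p runs s times faster."""
    return 1.0 / ((1.0 - p) + p / s)

# Accelerating half the workload, even infinitely fast, caps out at 2x...
half_pipeline = overall_speedup(0.5, float("inf"))
# ...while accelerating the full end-to-end pipeline 50x yields the full 50x.
full_pipeline = overall_speedup(1.0, 50.0)
print(half_pipeline, full_pipeline)
```

This is why accelerating the whole pipeline, including pre- and post-processing, matters so much more than accelerating the deep learning step alone.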
Operator
Your next question comes from the line of C. J. Muse with Evercore. Your line is now open.
C.J. Muse — Evercore — Analyst
Yeah, good afternoon and thank you for taking the question. I guess, Jensen, you talked about ChatGPT as an inflection point, kind of like the iPhone. So curious, part A, how have your conversations evolved post-ChatGPT with hyperscale and large-scale enterprises? And then secondly, as you think about Hopper with the Transformer Engine and Grace with high-bandwidth memory, how has your outlook for growth for those two product cycles evolved in the last few months? Thanks so much.
Jensen Huang — Founder, President and Chief Executive Officer
ChatGPT is a wonderful piece of work, and the team has done a great job; OpenAI did a great job with it. They stuck with it, and the accumulation of all of the breakthroughs led to a service with a model inside that surprised everybody with its versatility and its capability. What people were surprised by, and this is well understood within the industry, is the capability of a single AI model that can perform tasks and skills that it was never trained to do.
And this language model can not just speak English, or translate, of course, but not just speak human language: it can be prompted in human language, yet output COBOL, a language that very few people even remember, or output Python for Blender, a 3D program. So it’s a program that writes a program for another program. The world now realizes that maybe human language is a perfectly good computer programming language.
And so we’ve democratized computer programming for almost everyone: anyone who can explain in human language a particular task to be performed. When I say a new era of computing, I mean this new computing platform, this new computer, can take whatever your prompt is, whatever your human-explained request is, and translate it into a sequence of instructions that it processes directly, or it waits for you to decide whether you want to process them or not.
And so this type of computer is utterly revolutionary in its application, because it has democratized programming to so many people, and it really has excited enterprises all over the world: every single CSP, every single Internet service provider, and frankly, every single software company, because of what I just explained, that this is an AI model that can write a program for any program. For that reason, everybody who develops software is either alerted, or shocked into alert, or actively working on something that is like ChatGPT to be integrated into their application or integrated into their service.
And so this is, as you can imagine, utterly worldwide. The activity around the AI infrastructure that we built, Hopper, and the activity around inferencing using Hopper and Ampere to inference large language models, has just gone through the roof in the last 60 days. And so there’s no question that whatever our views of this year were as we entered the year have been fairly dramatically changed as a result of the last 60, 90 days.
Operator
Your next question comes from the line of Matt Ramsay with Cowen and Company. Your line is now open.
Matt Ramsay — Cowen — Analyst
Thank you very much. Good afternoon. Jensen, I wanted to ask a couple of questions on DGX Cloud. And I guess we’re all talking about the drivers of the services and the compute that you’re going to host on top of these services with the different hyperscalers. But I think we’ve been kind of watching and wondering when your data center business might transition to more of a systems-level business, meaning pairing InfiniBand with your Hopper product, with your Grace product, and selling things more at the systems level.
I just wonder if you could step back: over the next two or three years, how do you think the mix of business in your data center segment evolves from maybe selling cards to systems and software, and what could that mean for the margins of that business over time? Thank you.
Jensen Huang — Founder, President and Chief Executive Officer
Yeah, I appreciate the question. First of all, as you know, our data center business is a GPU business only in the context of a conceptual GPU. Because what we actually sell to the cloud service providers is a panel, a fairly large computing panel, of eight Hoppers or eight Amperes that are connected with NVLink switches, connected with NVLink. And so this board represents essentially one GPU. It’s eight chips connected together into one GPU with a very high-speed chip-to-chip interconnect.
And so we’ve been working on, if you will, multi-die computers for quite some time. And that is one GPU. So when we think about a GPU, we actually think about an HGX GPU, and that’s eight GPUs. We’re going to continue to do that. And the thing that the cloud service providers are really excited about, in hosting our infrastructure for NVIDIA to offer, is that we have so many companies that we work directly with. We’re working directly with 10,000 AI startups around the world, with enterprises in every industry. And all of those relationships today would really love to be able to deploy both into the cloud, or into the cloud and on-prem, and oftentimes multi-cloud.
And so by having NVIDIA DGX, NVIDIA’s infrastructure, our full stack in their cloud, we’re effectively attracting customers to the CSPs. This is a very, very exciting model for them, and they welcomed us with open arms. We’re going to be the best AI salespeople for the world’s clouds. And for the customers, they now have an instantaneous infrastructure that is the most advanced. They have a team of people who are extremely good, from the infrastructure to the acceleration software, the NVIDIA AI operating system, all the way up to AI models. Within one entity, they have access to expertise across that entire span.
And so this is a great model for customers, it’s a great model for CSPs, and it’s a great model for us. It lets us really run like the wind. As much as we will continue, and continue to advance, DGX AI supercomputers, it does take time to build AI supercomputers on-prem. It’s hard, no matter how you look at it; it takes time, no matter how you look at it. And so now we have the ability to really pre-fetch a lot of that and get customers up and running as fast as possible.
Operator
Your next question comes from the line of Timothy Arcuri with UBS. Your line is now open.
Timothy Arcuri — UBS — Analyst
Thanks a lot. Jensen, I had a question about what this all does to your TAM. Most of the focus right now is on text, but obviously there are companies doing a lot of training on video and music. They’re working on models there, and it seems like somebody who is training these big models has, on the high end, at least 10,000 GPUs in the cloud that they’ve contracted, and maybe tens of thousands more to inference a widely deployed model.
So it seems like the incremental TAM is easily in the several hundreds of thousands of GPUs, and easily in the tens of billions of dollars. But I’m kind of wondering what this does to the TAM numbers you gave last year. I think you said $300 billion hardware TAM and $300 billion software. So how do you kind of think about what the new TAM would be? Thanks.
Jensen Huang — Founder, President and Chief Executive Officer
I think those numbers are a really good anchor still. The difference is that, because of the, if you will, incredible capabilities and versatility of generative AI, and all of the converging breakthroughs that happened toward the middle and end of last year, we’re probably going to arrive at that TAM sooner rather than later. There’s no question that this is a very big moment for the computer industry.
Every single platform change, every inflection point in the way that people develop computers, happened because it was easier to use, easier to program and more accessible. This happened with the PC revolution. This happened with the Internet revolution. This happened with mobile cloud. Remember, with mobile cloud, because of the iPhone and the app store, 5 million applications and counting emerged. There weren’t 5 million mainframe applications. There weren’t 5 million workstation applications. There weren’t 5 million PC applications.
And it was so easy to develop and deploy amazing applications, part cloud, part on a mobile device, and so easy to distribute because of app stores. The same exact thing is now happening to AI. In no computing era did one computing platform reach 150 million people in 60 to 90 days, as ChatGPT has. I mean, this is quite an extraordinary thing, and people are using it to create all kinds of things. And so I think that what you’re seeing now is just a torrent of new companies and new applications that are emerging. There’s no question this is, in every way, a new computing era.
And so I think the TAM that we expressed really is even more realizable today, and sooner than before.
Operator
Your next question comes from the line of Stacy Rasgon with Bernstein. Your line is now open.
Stacy Rasgon — Bernstein — Analyst
Hi guys, thanks for taking my questions. I have a clarification and then a question, both for Colette. The clarification: you said H100 revenue was higher than A100. Was that an overall statement, or was that at the same point in time, like after two quarters of shipments? And then for my actual question, I wanted to ask about auto, specifically the Mercedes opportunity.
Mercedes had an event today, and they were talking about software revenues for their MB Drive that could be low single-digit billion euros by mid-decade and billions of euros by the end of the decade. I know you guys were supposedly splitting the software revenue 50-50. Is that kind of the order of magnitude of software revenues from the Mercedes deal that you guys are thinking of, on a similar timeframe? Is that how we should be modeling that? Thank you.
Colette Kress — Executive Vice President and Chief Financial Officer
Thanks, Stacy, for the question. Let me first start with your question about H100 and A100. We began initial shipments of H100 back in Q3, and it was a great start. Many of our customers began that process many quarters ago, and Q3 was the time for us to get production-level supply to them. So Q4 was an important quarter for the great H100 ramp that we saw. H100 was the focus of many of our CSPs within Q4, as they were all wanting to get those up and running in cloud instances. And so we actually sold less A100 in Q4 than H100, which sold in a larger amount.
We intend to continue to sell both architectures going forward, but Q4 was a strong H100 quarter for us. On your additional question about Mercedes-Benz: I’m very pleased with the partnership that we have with them, and we’ve been working very diligently together getting ready to come to market. But you’re right, they talked about the software opportunity in two phases: what they can do with Drive as well as what they can also do with Connect.
They extended that out to a horizon of probably about 10 years, looking at the opportunity that they see in front of us. So that’s in line with our thoughts with a long-term partner like that and sharing that revenue.
Jensen Huang — Founder, President and Chief Executive Officer
Yeah, if I could add one thing, Stacy, that says something about the wisdom of what Mercedes is doing: this is the only large luxury brand that, across the board, from the entry level all the way to the highest end of its luxury cars, will install every single one of them with a rich sensor set and every single one of them with an AI supercomputer, so that every future car in the Mercedes fleet will contribute to an installed base that could be upgradable and forever renewed for customers going forward.
If you could just imagine what it would look like if the entire Mercedes fleet that is on the road today were completely programmable, so that you could update it over the air, it would represent tens of millions of Mercedes, and that would represent a revenue-generating opportunity. That’s the vision they have and what they’re building towards, and I think it’s going to be extraordinary: a large installed base of luxury cars that will continue to renew, for customer benefits and also for revenue-generating benefits.
Operator
Your next question comes from the line of Mark Lipacis with Jefferies. Your line is now open.
Mark Lipacis — Jefferies & Co. — Analyst
Hi, thanks for taking my question. This one is for you, Jensen. It seems like every year a new workload comes out and drives demand for your processors and your ecosystem through cycles. If I think back: facial recognition, then recommendation engines, natural language processing, Omniverse, and now generative AI engines. Can you share with us your view: is this what we should expect going forward, a brand-new workload that drives demand to the next level for your products? The reason I ask is because I found it interesting, your comments in your script, where you mentioned that your view of the demand that generative AI is going to drive for your products, and now services, seems to be a lot better than what you thought just over the last 90 days or so.
And to the extent that there are new workloads that you’re working on, or new applications that can drive next levels of demand, would you care to share with us a little bit of what you think could drive demand beyond what you’re seeing today? Thank you.
Jensen Huang — Founder, President and Chief Executive Officer
Yeah, Mark. I really appreciate the question. First of all, we have new applications that you don’t know about and new workloads that we’ve never shared, that I would like to share with you at GTC. And so that’s my hope: that you come to GTC. I think you’re going to be very surprised and quite delighted by the applications that we’re going to talk about.
Now, there’s a reason why you’re constantly hearing about new applications. The reason is, number one, NVIDIA is a multi-domain accelerated computing platform. It is not completely general purpose like a CPU, because a CPU is 95%, 98% control functions and only 2% mathematics, which makes it completely flexible.
We’re not that way. We’re an accelerated computing platform that works with the CPU, and we offload the really heavy computing work, the things that can be highly parallelized. But we’re multi-domain. We could do particle systems, we could do fluids, we can do neurons, and we can do computer graphics, we can do rays [Phonetic]. There are all kinds of different applications that we can accelerate. That’s number one.
Number two, our installed base is so large. This is the only accelerated computing platform, literally the only platform, that is architecturally compatible across every single cloud, from PCs to workstations, from gamers to cars, and to on-prem. Every single computer is architecturally compatible, which means that a developer who develops something special will seek out our platform, because they like the reach. They like the universal reach, and they like the acceleration. They like the ecosystem of programming tools and the ease of using them, and the fact that there are so many people they can reach out to for help.
There are millions of CUDA experts around the world, software all accelerated, tools all accelerated. And then, very importantly, they like the reach: the fact that they can reach so many users after they develop the software. That is the reason why we just keep attracting new applications. And then, finally, this is a very important point: remember, the rate of CPU computing advance has slowed tremendously. Whereas in the first 30 years of my career, performance increased 10x at about the same power every five years, that rate of continued advance has slowed. And this at a time when people still have really urgent applications that they would like to bring to the world, and they can’t afford to do that with power continuing to go up.
Everybody needs to be sustainable; you can’t continue to consume more and more power. By accelerating a workload, we can decrease the amount of power it uses. And so this whole multitude of reasons is really driving people to use accelerated computing, and we keep discovering new, exciting applications.
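[Editor's note: the scaling rates cited in this answer can be sanity-checked with a short, illustrative calculation; the figures below are just the arithmetic implied by the statements on the call, not additional disclosures.]

```python
# "10x in performance at about the same power every five years" over the
# first 30 years of a career compounds to a million-fold gain:
cpu_gain = 10 ** (30 / 5)          # 30 years / 5-year doubling period of the 10x step
assert cpu_gain == 1_000_000

# A further "million x" AI acceleration over the next 10 years implies an
# average yearly multiplier of roughly 4x:
yearly_multiplier = 1_000_000 ** (1 / 10)
print(f"~{yearly_multiplier:.1f}x per year")  # prints "~4.0x per year"
```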
Operator
Your next question comes from the line of Atif Malik with Citi. Your line is now open.
Atif Malik — Citi — Analyst
Hi, thank you for taking my question. Colette, I have a question on data center. You saw some weakness in build plans in the January quarter, but you’re guiding to year-over-year acceleration in April and through the year. So if you can, just rank-order for us the confidence in that acceleration: is it based on your H100 ramp, or generative AI sales coming through, or the new AI services model? And also, can you talk about what you’re seeing in the enterprise vertical?
Colette Kress — Executive Vice President and Chief Financial Officer
So, thanks for the question. When we think about our growth, yes, we’re going to grow sequentially in Q1, and we do expect year-over-year growth in Q1 as well, which will likely accelerate going forward. So what do we see as the drivers of that? Yes, we have multiple product cycles coming to market. We have H100 in market now. We are continuing with our new launches as well, which are sometimes fueled by our GPU computing together with our networking. And then we have Grace likely coming in the second half.
Additionally, generative AI has sparked interest definitely among our customers, whether those be CSPs or start-ups. We expect that to be a part of our revenue growth this year. And then, lastly, let’s not forget that, given the end of Moore’s Law, there’s an era here of focusing on AI, focusing on accelerated computing. As the economy improves, this is probably very important to the enterprises, and it can be fueled by the existence of cloud, first, for the enterprises, let’s say.
I’m going to turn it to Jensen to see if he has any additional points he wants to add.
Jensen Huang — Founder, President and Chief Executive Officer
No, I think you did great. That was good.
Operator
Your last question today comes from the line of Joseph Moore with Morgan Stanley. Your line is now open.
Joseph Moore — Morgan Stanley — Analyst
Hey, thank you. Jensen, you talked about this one-million-times improvement in your ability to train these models over the last decade. Can you give us some insight into what that looks like in the next few years, to the extent that some of your customers with these large language models are talking about 100x the complexity over that kind of time frame? I know Hopper has 6x better transformer performance, but what can you do to scale that up? And how much of that just reflects that it’s going to be a much larger hardware expense down the road?
Jensen Huang — Founder, President and Chief Executive Officer
First of all, I’ll start backwards. I believe the number of AI infrastructures is going to grow all over the world, and the reason is that the production of intelligence is going to be manufacturing. There was a time when people manufactured just physical goods. In the future, almost every company will manufacture soft goods. It just happens to be in the form of intelligence.
Data comes in, and that data center does exactly one thing and one thing only: it cranks on that data and it produces a new, updated model. When raw material comes in, a building or infrastructure cranks on it, and something refined or improved of great value comes out, that’s called a factory. And so I expect to see AI factories all over the world. Some will be hosted in clouds, some will be on-prem. There will be some that are large, some that are mega large, and some that are smaller.
And so I fully expect that to happen; that’s number one. Number two, over the course of the next 10 years, through new chips, new interconnects, new systems, new operating systems, new distributed computing algorithms and new AI algorithms, and by working with developers coming up with new models, I believe we’re going to accelerate AI by another million x. There are a lot of ways for us to do that, and that’s one of the reasons why NVIDIA is not just a chip company: the problem we’re trying to solve is just too complex.
We think across the entire stack, all the way from the chip, all the way into the data center, across the network and through the software. And in the mind of one single company, we can think across that entire stack. It’s really quite a great playground for computer scientists for that reason, because we can innovate across that entire stack.
So my expectation is that you’re going to see really gigantic breakthroughs in AI models and AI platforms in the coming decade. And simultaneously, because of the incredible growth and adoption of this, you’re going to see these factories everywhere.
Operator
This concludes our Q&A session. I will now turn the call-back over to Jensen Huang for closing remarks.
Jensen Huang — Founder, President and Chief Executive Officer
Thank you. The accumulation of breakthroughs, from transformers to large language models to generative AI, has elevated the capability and versatility of AI to a remarkable level.
A new computing platform has emerged. New companies, new applications, and new solutions to long-standing challenges are being invented at an astounding rate. Enterprises in just about every industry are activating to apply generative AI to reimagine their products and businesses. The level of activity around AI, which was already high, has accelerated significantly.
This is the moment we’ve been working towards for over a decade, and we are ready. Our Hopper AI supercomputer, with its new Transformer Engine and Quantum InfiniBand fabric, is in full production, and CSPs are racing to open their Hopper cloud services. As we work to meet the strong demand for our GPUs, we look forward to accelerating growth through the year.
Don’t miss the upcoming GTC. We have much to tell you about new chips, systems and software, new CUDA applications and customers, new ecosystem partners and a lot more on NVIDIA AI and Omniverse. This will be our best GTC yet. See you there.
Operator
This concludes today’s conference. You may now disconnect.