NVIDIA Corporation (NVDA) Q1 2024 Earnings Call Transcript

NVDA Earnings Call - Final Transcript

NVIDIA Corporation (NASDAQ:NVDA) Q1 2024 Earnings Call dated May 24, 2023

Corporate Participants:

Simona Jankowski — Vice President, Investor Relations

Colette Kress — Executive Vice President, Chief Financial Officer

Jensen Huang — Founder and Chief Executive Officer

Analysts:

Toshiya Hari — Goldman Sachs — Analyst

CJ Muse — Evercore ISI — Analyst

Vivek Arya — BofA Securities — Analyst

Aaron Rakers — Wells Fargo — Analyst

Timothy Arcuri — UBS — Analyst

Stacy Rasgon — Bernstein — Analyst

Joseph Moore — Morgan Stanley — Analyst

Harlan Sur — JP Morgan — Analyst

Matt Ramsay — Cowen and Company — Analyst

Presentation:

Operator

Good afternoon, my name is David, and I’ll be your conference operator today. At this time, I’d like to welcome everyone to NVIDIA’s First Quarter Earnings Call. Today’s conference is being recorded. [Operator Instructions]. Thank you.

Simona Jankowski, you may begin your conference.

Simona Jankowski — Vice President, Investor Relations

Thank you. Good afternoon, everyone, and welcome to NVIDIA’s conference call for the first quarter of fiscal 2024. With me today from NVIDIA are Jensen Huang, President and Chief Executive Officer, and Colette Kress, Executive Vice President and Chief Financial Officer.

I’d like to remind you that our call is being webcast live on NVIDIA’s Investor Relations website. The webcast will be available for replay until the conference call to discuss our financial results for the second quarter of fiscal 2024. The content of today’s call is NVIDIA’s property; it can’t be reproduced or transcribed without our prior written consent.

During this call we may make forward-looking statements based on current expectations. These are subject to a number of significant risks and uncertainties, and our actual results may differ materially. For a discussion of factors that could affect our future financial results and business, please refer to the disclosure in today’s earnings release, our most recent Forms 10-K and 10-Q and the reports that we may file on Form 8-K with the Securities and Exchange Commission.

All our statements are made as of today, May 24th, 2023, based on information currently available to us. Except as required by law, we assume no obligation to update any such statements. During this call we will discuss non-GAAP financial measures. You can find a reconciliation of these non-GAAP financial measures to GAAP financial measures in our CFO commentary, which is posted on our website. And with that, let me turn the call over to Colette.

Colette Kress — Executive Vice President, Chief Financial Officer

Thanks, Simona. Q1 revenue was $7.19 billion, up 19% sequentially and down 13% year-on-year. Strong sequential growth was driven by record data center revenue, with our gaming and professional visualization platforms emerging from channel inventory corrections. Starting with data center: record revenue of $4.28 billion was up 18% sequentially and up 14% year-on-year, on strong growth of our accelerated computing platform worldwide. Generative AI is driving exponential growth in compute requirements and a fast transition to NVIDIA accelerated computing, which is the most versatile, most energy-efficient and lowest-TCO approach to train and deploy AI.

Generative AI drove significant upside in demand for our products, creating opportunities and broad-based global growth across our markets. Let me give you some color across our three major customer categories: cloud service providers, or CSPs; consumer Internet companies; and enterprises. First, CSPs around the world are racing to deploy our flagship Hopper and Ampere architecture GPUs to meet the surge in interest from both enterprise and consumer AI applications for training and inference. Multiple CSPs announced the availability of H100 on their platforms, including private previews at Microsoft Azure, Google Cloud and Oracle Cloud Infrastructure, upcoming offerings at AWS, and general availability at emerging GPU-specialized cloud providers like CoreWeave and Lambda.

In addition to enterprise AI adoption, these CSPs are seeing strong demand for our H100 from Generative AI pioneers.

Second, consumer Internet companies are also at the forefront of adopting Generative AI and deep learning-based recommendation systems, driving strong growth. For example, Meta has now deployed its H100-powered Grand Teton AI supercomputer for its AI production and research teams.

Third, enterprise demand for AI and accelerated computing is strong. We are seeing momentum in verticals such as automotive, financial services, healthcare and telecom, where AI and accelerated computing are quickly becoming integral to customers’ innovation roadmaps and competitive positioning. For example, Bloomberg announced it has a 50-billion parameter model, BloombergGPT, to help with financial natural language processing tasks such as sentiment analysis, named entity recognition, news classification and question answering.

Auto insurance company CCC Intelligent Solutions is using AI for estimating repairs, and AT&T is working with us on AI to improve fleet dispatches so their field technicians can better serve customers. Among other enterprise customers using NVIDIA AI are Deloitte, for logistics and customer service, and Amgen, for drug discovery and protein engineering.

This quarter we started shipping DGX H100, our Hopper-generation AI system, which customers can deploy on-prem. And with the launch of DGX Cloud through our partnerships with Microsoft Azure, Google Cloud and Oracle Cloud Infrastructure, we deliver the promise of NVIDIA DGX to customers from the cloud. Whether customers deploy DGX on-prem or via DGX Cloud, they get access to NVIDIA AI software, including NVIDIA Base Command and AI frameworks and pretrained models. We provide them with the blueprint for building and operating AI, spanning our expertise across systems, algorithms, data processing and training methods. We also announced NVIDIA AI Foundations, which are model foundry services available on DGX Cloud that enable businesses to build, refine and operate custom large language models and generative AI models trained with their own proprietary data. Created for unique domain-specific tasks, they include NVIDIA NeMo for large language models, NVIDIA Picasso for images, video and 3D, and NVIDIA BioNeMo for life sciences.

Each service has its elements: pretrained models, frameworks for data processing and curation, proprietary knowledge bases, systems for fine-tuning, aligning and guardrailing, optimized inference engines, and support from NVIDIA experts to help enterprises fine-tune models for their custom use cases. ServiceNow, a leading enterprise services platform, is an early adopter of DGX Cloud and NeMo. They are developing custom large language models trained on data specifically for the ServiceNow platform. Our collaboration will let ServiceNow create new enterprise-grade Generative AI offerings for the thousands of enterprises worldwide running on the ServiceNow platform, including for IT departments, customer service teams, employees and developers.

Generative AI is also driving a step-function increase in inference workloads. Because of their size and complexity, these workloads require acceleration. The latest MLPerf industry benchmarks released in April showed NVIDIA’s inference platforms deliver performance that is orders of magnitude ahead of the industry, with unmatched versatility across diverse workloads. To help customers deploy Generative AI applications at scale, at GTC we announced four major new inference platforms that leverage the NVIDIA AI software stack. These include the L4 Tensor Core GPU for AI video, L40 for Omniverse and graphics rendering, H100 NVL for large language models, and the Grace Hopper Superchip for LLMs as well as recommendation systems and vector databases.

Google Cloud is the first CSP to adopt our L4 inference platform with the launch of its G2 virtual machines for generative AI inference and other workloads, such as Google Cloud Dataproc, Google AlphaFold, and Google Cloud Immersive Stream, which renders 3D and AR experiences. In addition, Google is integrating our Triton Inference Server with Google Kubernetes Engine and its cloud-based Vertex AI platform.

In networking, we saw strong demand from both CSPs and enterprise customers for generative AI and accelerated computing, which require high-performance networking like NVIDIA’s Mellanox networking platforms. Demand relating to general-purpose CPU infrastructure remained soft. As generative AI applications grow in size and complexity, high-performance networks become essential for delivering accelerated computing at data center scale to meet the enormous demand for both training and inferencing. Our 400-gigabit Quantum-2 InfiniBand platform is the gold standard for AI-dedicated infrastructure, with broad adoption across major cloud and consumer Internet platforms such as Microsoft Azure.

With the combination of in-network computing technology and the industry’s only end-to-end, data center-scale optimized software stack, customers routinely enjoy a 20% increase in throughput for their sizable infrastructure investment. For multi-tenant clouds transitioning to support generative AI, our high-speed Ethernet platform with BlueField-3 DPUs and Spectrum-4 Ethernet switching offers the highest available Ethernet network performance. BlueField-3 is in production and has been adopted by multiple hyperscale and CSP customers, including Microsoft Azure, Oracle Cloud, [Indecipherable], Baidu and others. We look forward to sharing more about our 400-gig Spectrum-4 accelerated AI networking platform next week at the COMPUTEX conference in Taiwan.

Lastly, our Grace data center CPU is sampling with customers. At this week’s International Supercomputing Conference in Germany, the University of Bristol announced a new supercomputer based on the NVIDIA Grace CPU Superchip, which is six times more energy-efficient than its previous supercomputer. This adds to the growing momentum for Grace, with both CPU-only and CPU+GPU opportunities across AI, cloud and supercomputing applications. The coming wave of BlueField-3, Grace and Grace Hopper Superchips will enable a new generation of super energy-efficient accelerated data centers.

Now let’s move to gaming. Gaming revenue of $2.24 billion was up 22% sequentially and down 38% year-on-year. Strong sequential growth was driven by sales of the 40 Series GeForce RTX GPUs for both notebooks and desktops. Overall end demand was solid and consistent with seasonality, demonstrating resilience against a challenging consumer spending backdrop. The GeForce RTX 40 Series laptops are off to a great start, featuring four NVIDIA inventions: RTX path tracing, DLSS 3 AI rendering, Reflex ultra-low latency rendering and Max-Q energy-efficient technologies. They deliver tremendous gains in industrial design, performance and battery life for gamers and creators.

Like our desktop offerings, 40 Series laptops support the NVIDIA Studio platform of software technologies, including acceleration for creative, data science and AI workflows, and Omniverse, giving content creators unmatched tools and capabilities. In desktop, we ramped the RTX 4070, which joined the previously launched RTX 4090, 4080 and 4070 Ti GPUs. The RTX 4070 is nearly three times faster than the RTX 2070 and offers our large installed base a spectacular upgrade. Last week, we launched the 4060 family, the RTX 4060 and 4060 Ti, bringing our newest architecture to the world’s core gamers starting at just $299. These GPUs, for the first time, provide twice the performance of the latest gaming consoles at mainstream price points. The 4060 Ti is available starting today, while the 4060 will be available in July. Generative AI will be transformative to gaming and content creation, from development to runtime. At the Microsoft Build developer conference earlier this week, we showcased how Windows PCs and workstations with NVIDIA RTX GPUs will be AI-powered [Indecipherable].

NVIDIA and Microsoft have collaborated on end-to-end software engineering, spanning from the Windows operating system to the NVIDIA graphics drivers and NeMo LLM framework, to help make Windows on NVIDIA RTX Tensor Core GPUs a supercharged platform for generative AI. Last quarter we announced a partnership with Microsoft to bring Xbox PC games to GeForce NOW. The first game from this partnership, Gears 5, is now available, with more set to be released in the coming months. There are now over 1,600 games on GeForce NOW, the richest content available on any cloud gaming service.

Moving to Pro Visualization. Revenue of $295 million was up 31% sequentially and down 53% year-on-year. Sequential growth was driven by stronger workstation demand across both mobile and desktop form factors, with strength in key verticals such as public sector, healthcare and automotive. We believe the channel inventory correction is behind us. The ramp of our Ada Lovelace GPU architecture in workstations kicked off a major product cycle. At GTC, we announced six new RTX GPUs for laptops and desktop workstations, with further rollout planned in the coming quarters. Generative AI is a major new workload for NVIDIA-powered workstations. Our collaboration with Microsoft transforms Windows into the ideal platform for creators and designers harnessing Generative AI to elevate their creativity and productivity.

At GTC we announced NVIDIA Omniverse Cloud, a fully managed NVIDIA service running in Microsoft Azure that includes the full suite of Omniverse applications and NVIDIA OVX infrastructure. Using this full-stack cloud environment, customers can design, develop, deploy and manage industrial metaverse applications. NVIDIA Omniverse Cloud will be available starting in the second half of this year. Microsoft and NVIDIA will also connect Office 365 applications with Omniverse. Omniverse Cloud is being used by companies to digitalize their workflows, from design and engineering to smart factories and 3D content generation. The automotive industry has been a leading early adopter of Omniverse, including companies such as BMW Group, Geely Lotus, General Motors and Jaguar Land Rover.

Moving to automotive. Revenue was $296 million, up 1% sequentially and up 14% from a year ago. Our strong year-on-year growth was driven by the ramp of NVIDIA DRIVE Orin across a number of new energy vehicles. As we announced in March, our automotive design-win pipeline over the next six years now stands at $14 billion, up from $11 billion a year ago, giving us visibility into continued growth over the coming years. Sequentially, growth moderated as some NEV customers in China are adjusting their production schedules to reflect slower-than-expected demand growth. We expect this dynamic to linger for most of the calendar year. During the quarter, we expanded our partnership with BYD, the world’s leading manufacturer of NEVs. Our new design win will extend BYD’s use of DRIVE Orin to its next-generation, high-volume Dynasty and Ocean series of vehicles, set to start production in calendar 2024.

Moving to the rest of the P&L. GAAP gross margins were 64.6% and non-GAAP gross margins were 66.8%. Gross margins have now largely recovered to prior peak levels, as we have absorbed higher costs and offset them by innovating and delivering higher-valued products, as well as products incorporating more and more software. Sequentially, GAAP operating expenses were down 3% and non-GAAP operating expenses were down 1%. We held OpEx at roughly the same level over the past four quarters while working through the inventory corrections in gaming and professional visualization. We now expect to increase investments in the business while also delivering operating leverage. We returned $99 million to shareholders in the form of cash dividends. At the end of Q1, we had approximately $7 billion remaining under our share repurchase authorization through December 2023.

Let me turn to the outlook for the second quarter of fiscal ’24. Total revenue is expected to be $11 billion, plus or minus 2%. We expect this sequential growth to largely be driven by data center, reflecting a steep increase in demand related to Generative AI and large language models. This demand has extended our data center visibility out a few quarters, and we have procured substantially higher supply for the second half of the year. GAAP and non-GAAP gross margins are expected to be 68.6% and 70%, respectively, plus or minus 50 basis points. GAAP and non-GAAP operating expenses are expected to be approximately $2.71 billion and $1.9 billion, respectively. GAAP and non-GAAP other income and expenses are expected to be an income of approximately $90 million, excluding gains and losses from non-affiliated investments. GAAP and non-GAAP tax rates are expected to be 14%, plus or minus 1%, excluding any discrete items. Capital expenditures are expected to be approximately $300 million to $350 million.

Further financial details are included in the CFO commentary and other information available on our IR website. In closing, let me highlight some upcoming events. Jensen will give the COMPUTEX keynote address in person in Taipei this coming Monday, May 29th local time, which will be Sunday evening in the US. In addition, we will be attending the BofA Global Technology Conference in San Francisco on June 6th, the Rosenblatt Virtual Technology Summit on the Age of AI on June 7th, and the New Street Future of Transportation Virtual Conference on June 12th. Our earnings call to discuss the results of our second quarter of fiscal ’24 is scheduled for Wednesday, August 23rd.

Well, that covers our opening remarks. We’re now going to open the call for questions. Operator, would you please poll for questions?

Questions and Answers:

Operator

Thank you. [Operator Instructions]. We’ll take our first question from Toshiya Hari with Goldman Sachs. Your line is open.

Toshiya Hari — Goldman Sachs — Analyst

Hi, good afternoon. Thank you so much for taking the question, and congrats on the strong results and incredible outlook. Just one question on data center. Colette, you mentioned the vast majority of the sequential increase in revenue this quarter will come from data center. I was curious what the construct is there, if you can speak to what the key drivers are from April to July. And perhaps more importantly, you talked about visibility into the second half of the year; I’m guessing it’s more of a supply problem at this point. What kind of sequential growth beyond the July quarter can your supply chain support at this point? Thank you.

Colette Kress — Executive Vice President, Chief Financial Officer

Okay, so a lot of different questions there. Let me see if I can start, and I’m sure Jensen will have some follow-up comments. When we talk about the sequential growth that we’re expecting between Q1 and Q2, generative AI and large language models are driving this surge in demand, and it’s broad-based across our consumer Internet companies, our CSPs, our enterprises and our AI start-ups. It is also interest in both of our architectures: our latest Hopper architecture as well as our Ampere architecture. This is not surprising, as we generally sell both of our architectures at the same time. This is also a key area where deep recommenders are driving growth, and we also expect to see growth both in our computing as well as in our networking business. So those are some of the key things that we have baked in when we think about the guidance that we’ve provided for Q2.

We also surfaced in our opening remarks that we are working on supply not just for this quarter; we have also procured a substantial amount of supply for the second half. We have significant supply chain flow to serve the significant customer demand that we see, and this is demand that we see across a wide range of different customers. They are building platforms for some of the largest enterprises, but also setting up at the CSPs and the large consumer Internet companies. So we have visibility right now on data center demand that probably extends out a few quarters, and this led us to quickly procure that substantial supply for the second half.

I’m going to pause there and see if Jensen wants to add a little bit more.

Jensen Huang — Founder and Chief Executive Officer

I thought it was great, Colette. Thank you.

Operator

Next we’ll go to CJ Muse with Evercore ISI. Your line is open.

CJ Muse — Evercore ISI — Analyst

Yeah, good afternoon, and thank you for taking the question. I guess with data center essentially doubling quarter-on-quarter, two natural questions that relate to one another come to mind. Number one: where are we in terms of driving acceleration into servers to support AI? And as part of that, as you deal with longer cycle times with TSMC and your other partners, how are you thinking about managing commitments there versus where you want to manage your lead times in the coming years, to best match supply and demand? Thanks so much.

Jensen Huang — Founder and Chief Executive Officer

Yes, CJ, thanks for the question. I’ll start backwards. You’d remember we were in full production with both Ampere and Hopper when the ChatGPT moment came, and it helped everybody crystallize how to transition from the technology of large language models to a product and service based on a chatbot. The integration of guardrails and alignment systems with reinforcement learning from human feedback, knowledge vector databases for proprietary knowledge, their connection to search: all of that came together in a really wonderful way. It’s the reason why I call it the iPhone moment. All the technology came together and helped everybody realize what an amazing product it can be and what capabilities it can have. And so we were already in full production. NVIDIA’s supply chain flow and our supply chain are very significant, as you know. We build supercomputers in volume, and these are giant systems that we build in volume. It includes, of course, the GPUs, but beyond our GPUs, the system boards and 35,000 other components, the networking and fiber optics, the incredible transceivers and the NICs, the SmartNICs, the switches; all of that has to come together in order for us to stand up a data center.

And so we were already in full production, and when the moment came, we had to really significantly increase our procurement, substantially, for the second half, as Colette said. Now let me talk about the bigger picture and why the entire world’s data centers are moving towards accelerated computing. It’s been known for some time, and you’ve heard me talk about it, that accelerated computing is a full-stack challenge. But if you can successfully do it across a large number of application domains (it has taken us 15 years), sufficiently that almost all of a data center’s major applications can be accelerated, you can reduce the amount of energy consumed and the cost of the data center substantially, by an order of magnitude. It costs a lot of money to do it, because you have to do all the software and you have to build all the systems, and so on and so forth, but we’ve done that over 15 years. And what happened is, when Generative AI came along, it triggered a killer app for this computing platform that’s been in preparation for some time. And so now we see ourselves in two simultaneous transitions. The world’s $1 trillion of data centers is populated nearly entirely by CPUs today. And $1 trillion is $250 billion a year, and it’s growing, of course; but over the last four years, call it a trillion dollars’ worth of infrastructure installed. It’s all completely based on CPUs and dumb NICs; it’s basically unaccelerated. In the future, it’s fairly clear now, with Generative AI becoming the primary workload of most of the world’s data centers in generating information, and given that accelerated computing is so energy-efficient, that the budget of the data center will shift very dramatically towards accelerated computing, and you’re seeing that now. We’re going through that moment right now as we speak, while the world’s data center capex budget is limited. At the same time, we’re seeing incredible orders to retool the world’s data centers.
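[Editor’s note: a quick check of the figures Huang cites, taking them as stated. Roughly $250 billion of data center spend per year over the last four years is consistent with the $1 trillion installed base:

\[
\$250\ \text{billion/year} \times 4\ \text{years} \approx \$1\ \text{trillion}.
\]

The shift he describes is that annual budget progressively redirecting toward accelerated computing.]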

So I think you’re seeing the beginning of, call it, a 10-year transition to basically recycle or reclaim the world’s data centers and build them out as accelerated computing. You’ll have a pretty dramatic shift in data center spend from traditional computing to accelerated computing, with SmartNICs, smart switches and, of course, GPUs, and the workload is going to be predominantly Generative AI.

Operator

We’ll move to our next question. Vivek Arya with BofA Securities, your line is open.

Vivek Arya — BofA Securities — Analyst

Thanks for the question. Colette, I just wanted to clarify, does visibility mean data center sales can continue to grow sequentially in Q3 and Q4, or do they sustain at Q2 levels? And then Jensen, my question is, given this very strong demand environment, what does that do to the competitive landscape? Does it invite more competition in terms of custom ASICs, or in terms of other GPU solutions or other kinds of solutions? How do you see the competitive landscape change over the next two to three years?

Colette Kress — Executive Vice President, Chief Financial Officer

Yeah, Vivek, thanks for the question. Let me see if I can add a little bit more color. We believe that the supply that we will have for the second half of the year will be substantially larger than H1. So we are expecting not only the demand that we just saw in this last quarter and the demand that we have in Q2 in our forecast, but also planning on seeing something in the second half of the year. We just have to be careful here: we are not here to guide on the second half. But yes, we do plan a substantial increase in the second half compared to the first half.

Jensen Huang — Founder and Chief Executive Officer

Regarding competition: Vivek, we have competition from every direction. Start-ups, really well-funded and innovative start-ups, countless of them, all over the world. We have competition from existing semiconductor companies. We have competition from CSPs with internal projects, and many of you know about most of these. So we’re mindful of competition all the time, and we get competition all the time. NVIDIA’s value proposition at the core is that we are the lowest-cost solution, the lowest-TCO solution, and the reason for that comes down to two things I talk about often. First, accelerated computing is a full-stack challenge. You have to engineer all of the software, all the libraries and all the algorithms, integrate them into and optimize the frameworks, and optimize for the architecture of not just one chip but the architecture of an entire data center, all the way into the frameworks, all the way into the models. The amount of engineering in distributed computing and fundamental computer science work is really quite extraordinary; it is the hardest computing as we know it. So, number one, it’s a full-stack challenge, and you have to optimize it across the whole thing, across a mind-blowing number of stacks. We have 400 acceleration libraries; as you know, the amount of libraries and frameworks that we accelerate is pretty mind-blowing. The second part is that Generative AI is a large-scale problem, a data center-scale problem. It’s another way of saying that the computer is the data center, or the data center is the computer. It’s not the chip, it’s the data center, and it’s never happened like this before. In this particular environment, your networking operating system, your distributed computing engines, your understanding of the architecture of the networking gear, the switches and the computing systems, the computing fabric: that entire system is your computer, and that’s what you’re trying to operate.

And so in order to get the best performance, you have to understand the full stack at data center scale, and that’s what accelerated computing is. The second thing is utilization, which is about the range of applications you can accelerate and the versatility of your architecture keeping that utilization high. If you can do one thing, and only one thing, incredibly fast, then your data center is largely underutilized and it’s hard to scale that out. Our universal GPU, and the fact that we accelerate so many stacks, makes our utilization incredibly high. So number one is throughput, and that’s a software-intensive problem and a data center architecture problem; the second is the utilization and versatility problem; and the third is just data center expertise. We’ve built five data centers of our own, we’ve helped companies all over the world build data centers, and we integrate our architecture into all the world’s clouds. From the moment of delivery of the product to standing up the deployment, the time to operations of a data center, if you’re not proficient at it, could take months. Standing up a supercomputer: some of the largest supercomputers in the world were installed about a year and a half ago and are only now coming online. So it’s not unheard of to see delivery to operations take about a year. Our delivery to operations is measured in weeks. We’ve taken data centers and supercomputers and turned them into products, and the expertise of the team in doing that is incredible. So our value proposition is that, in the final analysis, all of this technology translates into infrastructure with the highest throughput and the lowest possible cost. Our market is of course very competitive and very large, but the challenge is really, really great.

Operator

Next we go to Aaron Rakers with Wells Fargo. Your line is open.

Aaron Rakers — Wells Fargo — Analyst

Yeah, thank you for taking the question, and congrats on the quarter. As we think about unpacking the various growth drivers of the data center business going forward, I’m curious, Colette, how we should think about the monetization effect of software, considering that the expansion of your cloud service agreements continues to grow. Where do you think we’re at in terms of that approach, in terms of the AI Enterprise software suite and other drivers of software-only revenue going forward?

Colette Kress — Executive Vice President, Chief Financial Officer

Thanks for the question. Software is really important to our accelerated platforms. Not only do we have a substantial amount of software that we’re including with our newest architecture and essentially all of the products that we have, we now have many different models to help customers start their work in Generative AI and accelerated computing. So anything that we have here, from DGX Cloud, on providing those services, helping them build models, or, as you’ve discussed, the importance of NVIDIA AI Enterprise, essentially the operating system for AI. So all of these things should continue to grow as we go forward, both the architecture and infrastructure as well as the availability of the software [Indecipherable]. I’ll turn it over to Jensen to see if he wants to add.

Jensen Huang — Founder and Chief Executive Officer

Yeah, we can see in real time the growth of Generative AI in CSPs, both for training the models, refining the models, as well as deploying the models. As Colette said earlier, inference is now a major driver of accelerated computing, because Generative AI is used so capably in so many applications already. There are two segments that require a new stack of software, and the two segments are enterprise and industrials. Enterprise requires a new software stack because many enterprises need to have all the capabilities that we’ve talked about: whether it’s large language models, the ability to adapt them for your proprietary use case and your proprietary data, or aligning them to your own principles and your own operating domains. You want to have the ability to do that in a high-performance computing sandbox, and we call that DGX Cloud, and to create a model there. Then you want to deploy your chatbot, your AI, in any cloud, because you have services and agreements with multiple cloud vendors, and depending on the applications, you might deploy it on various clouds.

For the enterprise, we have NVIDIA AI Foundations for helping you create custom models, and we have NVIDIA AI Enterprise. NVIDIA AI Enterprise is the only GPU-accelerated stack in the world that is enterprise-safe and enterprise-supported. There is constant patching that you have to do: there are 4,000 different packages that build up NVIDIA AI Enterprise, and it represents the end-to-end operating engine of the entire AI workflow, the only one of its kind. From data ingestion to data processing: obviously, in order to train an AI model, you have a lot of data you have to process, package up, curate and align, and there’s a whole bunch of things you have to do to the data to prepare it for training. That amount of data processing could consume some 40%, 50%, 60% of your computing time, so data processing is a very big deal. The second aspect of it is training and refining the model, and the third is deploying the model for inferencing.

NVIDIA AI Enterprise continuously supports, patches and security-patches all of those 4,000 packages of software. For an enterprise that wants to deploy its engines, just as it would deploy Red Hat Linux, this is incredibly complicated software; in order to deploy it in the cloud as well as on-premise, it has to be secure and it has to be supported. So NVIDIA AI Enterprise is the second point. The third is Omniverse. Just as people are starting to realize that you need to align an AI to ethics, the same is true for robotics: you need to align the AI for physics. Aligning an AI for ethics involves a technology called reinforcement learning from human feedback; in the case of industrial applications and robotics, it’s reinforcement learning from Omniverse feedback. Omniverse is a vital engine for software-defined and robotic applications and industries, so Omniverse also needs to be a cloud service platform. Our three software stacks, AI Foundations, AI Enterprise and Omniverse, run in all of the world’s clouds. We have DGX Cloud partnerships with Azure on both AI as well as Omniverse; with GCP and Oracle we have great partnerships in DGX Cloud for AI; and AI Enterprise is integrated into all three of them. In order for us to extend the reach of AI beyond the cloud and into the world’s enterprises and industries, you need new software stacks to make that happen, and by putting them in the cloud, integrated into the world’s CSP clouds, it’s a great way for us to partner with the sales and marketing teams and the leadership teams of all the cloud vendors.

Operator

Next we’ll go to Timothy Arcuri with UBS. Your line is open.

Timothy Arcuri — UBS — Analyst

Thanks a lot. I had a question and then a clarification as well. The question first: Jensen, on the InfiniBand versus Ethernet argument, can you speak to that debate and maybe how you see it playing out? I know you need the low latency of InfiniBand for AI, but can you talk about the attach rate of your InfiniBand solutions to what you’re shipping on the core compute side, and maybe whether that’s similarly crowding out Ethernet like you are on the compute side? And then the clarification, Colette: there wasn’t a share buyback despite you still having about $7 billion on the share repurchase authorization. Was that just timing? Thanks.

Jensen Huang — Founder and Chief Executive Officer

Colette, how about you go first and take that question?

Colette Kress — Executive Vice President, Chief Financial Officer

That is correct. We have $7 billion available in our current authorization for repurchases. We did not repurchase anything in this last quarter, but we do repurchase opportunistically and we’ll consider that as we go forward as well. Thank you.

Jensen Huang — Founder and Chief Executive Officer

InfiniBand and Ethernet target different applications in a data center, and they both have their place. InfiniBand had a record quarter; we’re going to have a giant record year, and Quantum InfiniBand has an exceptional roadmap. It’s going to be really incredible. The two networks are very different. InfiniBand is designed for an AI factory, if you will. If that data center is running a few applications for a few people, for a specific use case, and doing it continuously, and that infrastructure costs, pick a number, $500 million, the difference between InfiniBand and Ethernet could be 15% to 20% in overall throughput. If you spend $500 million on infrastructure and the difference is 10% to 20%, that’s $100 million; InfiniBand is basically free. That’s the reason people use it: InfiniBand is effectively free. The difference in data center throughput is just too great to ignore when you’re using it for one application. However, if your data center is a cloud data center and it’s multi-tenant, a bunch of little jobs shared by millions of people, then Ethernet is really the answer. There’s a new segment in the middle, where the cloud is becoming a Generative AI cloud. It’s not an AI factory per se, and it’s still a multi-tenant cloud, but it wants to run Generative AI workloads. This new segment is a wonderful opportunity, and I referred to it at the last GTC. At COMPUTEX, we’re going to announce a major product line for this segment, which is Ethernet-focused, for Generative AI application type clouds. But InfiniBand is doing fantastically, and we’re doing record numbers quarter-on-quarter, year-on-year.
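[Editor’s note: a back-of-the-envelope illustration of the arithmetic above, using Huang’s stated figures:

\[
\$500\ \text{million} \times (15\%\ \text{to}\ 20\%) \approx \$75\ \text{million to } \$100\ \text{million}.
\]

On a $500 million AI factory, a 15% to 20% throughput advantage is worth on the order of $100 million of effective capacity, which on his telling covers the incremental cost of the InfiniBand fabric, hence “effectively free.”]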

Operator

Next we’ll go to Stacy Rasgon with Bernstein Research. Your line is open.

Stacy Rasgon — Bernstein — Analyst

Hi guys, thanks for taking my question. I had a question on inference versus training for Generative AI. You’re talking about inference being a very large opportunity. I guess there are two sub-parts to that. Is that because inference basically scales with usage, whereas training is more of a one-and-done? And can you give us some sense, even just qualitatively, of whether you think inference is bigger than training or vice versa? If it’s bigger, how much bigger? Is it, like, 10 times? Anything you can give us on those two workloads within Generative AI would be helpful.

Jensen Huang — Founder and Chief Executive Officer

Yeah, I’ll work backwards. You’re never done with training. Every time you deploy, you’re collecting new data, and when you collect new data, you train with the new data. So you’re never done training. You’re never done producing and processing a vector database that augments the large language model. You’re never done vectorizing all of the collected structured and unstructured data that you have. So whether you’re building a recommender system, a large language model or a vector database, these are probably the three major applications, the three core engines, if you will, of the future of computing. There’s a lot of other stuff, but obviously these are three very important ones. They are always running. You’re going to see more and more companies realize they have a factory for intelligence, an intelligence factory, and in that particular case it’s largely dedicated to training and processing data, vectorizing data, learning representations of the data, and so on and so forth.

The inference part of it is APIs: either open APIs that can be connected to all kinds of applications, or APIs that are integrated into workflows. There will be hundreds of APIs in a company, some of them built in-house, and many of them could come from companies like ServiceNow and Adobe that we’re partnering with on AI Foundations. They’ll create a whole bunch of Generative AI APIs that companies can connect into their workflows or use as an application. And of course, there will be a whole bunch of Internet service companies.

So I think you’re seeing, for the very first time, simultaneously a very significant growth in the segment of AI factories, as well as a segment that really didn’t exist before but is now growing exponentially, practically by the week, for AI inference with APIs. The simple way to think about it in the end is that the world has $1 trillion of data centers installed, and it used to be 100% CPUs. In the future, we know (we’ve heard it in enough places, and I think this year’s ISC keynote was actually about the end of Moore’s Law) that you can’t reasonably scale out data centers with general-purpose computing, that accelerated computing is the path forward, and now it has a killer app, and it’s Generative AI. So the easiest way to think about it is that, against your $1 trillion infrastructure, every quarter’s capex budget will lean very heavily into Generative AI and accelerated computing infrastructure, everywhere from the number of GPUs in the capex budget to the accelerated switches and accelerated networking chips that connect them all. The easiest way to think about it is that over the next four, five, ten years, most of that $1 trillion, adjusted for all the growth in data centers, will be largely Generative AI. And that’s training as well as inference.

Operator

Next we’ll go to Joseph Moore with Morgan Stanley. Your line is open.

Joseph Moore — Morgan Stanley — Analyst

Great, thank you. I wanted to follow up on that, in terms of the focus on inference. It’s pretty clear that this is a really big opportunity around large language models, but the cloud customers are also talking about trying to reduce cost per query by very significant amounts. Can you talk about the ramifications of that for you guys? Is that where some of the specialty inference products that you launched at GTC come in? And just, how are you going to help your customers get the cost per query down?

Jensen Huang — Founder and Chief Executive Officer

Yeah, that’s a great question. You start by building a large language model, the very large version, and you can distill it into medium, small and tiny-sized models. The tiny-sized ones you can put in your phone and your PC, and so on and so forth. It may seem surprising, but they all can do the same thing. Obviously, the zero-shot ability, the generalizability, of the largest language model is much more versatile: it can do a lot more amazing things, and the large one can teach the smaller ones how to be good AIs. You use the large one to generate prompts to align the smaller ones, and so on and so forth. So you start by building the very large ones, and then you also have to train a whole bunch of smaller ones. That’s exactly the reason why we have so many different sizes of our inference platforms. You saw that I announced L4, L40, H100 NVL, which also has H100; then we have H100 HGX, and then we have H100 multi-node with NVLink. So you could serve model sizes of any kind that you like. The other thing that’s important is that these are models, but they’re connected ultimately to applications, and the applications could have image in, video out; video in, text out; image in, proteins out; text in, 3D out; and, in the future, video in, 3D graphics out. The input and the output require a lot of pre- and post-processing, and the pre- and post-processing can’t be ignored. This is where most of the specialized chip arguments fall apart: the model itself is only, call it, 25% of the overall processing of inference. The rest of it is about pre-processing, post-processing, security, decoding, all kinds of things like that.
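[Editor’s note: the teach-the-smaller-model scheme Huang sketches is, in textbook form, knowledge distillation; the formula below is a generic illustration of that technique, not a description of NVIDIA’s specific pipeline. The student model is typically trained against a blend of ground-truth labels and the teacher’s softened output distribution:

\[
\mathcal{L} \;=\; \alpha\,\mathrm{CE}\!\left(y,\ p_{\mathrm{student}}\right) \;+\; (1-\alpha)\,T^{2}\,\mathrm{KL}\!\left(p_{\mathrm{teacher}}^{(T)} \,\middle\|\, p_{\mathrm{student}}^{(T)}\right),
\]

where \(T\) is a softmax temperature and \(\alpha\) balances the two terms. Using the large model to generate prompts or synthetic data to align smaller ones, as described here, is a data-side variant of the same idea.]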

So I think the multi-modality aspect of inference and the diversity of inference mean it’s going to be done in the cloud and on-prem, and it’s going to be done in multi-cloud. That’s the reason we have AI Enterprise in all the clouds. It’s going to be done on-prem, which is the reason we have a great partnership with Dell that we just announced the other day, called Project Helix. It’s going to be integrated into third-party services, which is the reason we have great partnerships with ServiceNow and Adobe, because they’re going to be creating a whole bunch of Generative AI capabilities. The diversity and the reach of Generative AI is so broad, you need to have some very fundamental capabilities like what I just described in order to really address the whole space of it.

Operator

Next we’ll go to Harlan Sur with JP Morgan. Your line is open.

Harlan Sur — JP Morgan — Analyst

Hi, good afternoon, and congratulations on the strong results and execution. I really appreciate some of the focus today on your networking products. It’s really an integral part of maximizing the full performance of your compute platforms. I think the data center networking business is driving about $1 billion of revenue per quarter, plus or minus, which is 2.5 times growth from three years ago when you acquired Mellanox, so very strong growth. But given the very high attach rate of your InfiniBand and Ethernet solutions to your accelerated compute platforms, is the networking run-rate stepping up in line with your compute shipments? And what is the team doing to further unlock more networking bandwidth going forward, just to keep pace with the significant increase in compute complexity, datasets, and requirements for lower latency, better traffic predictability and so on?

Jensen Huang — Founder and Chief Executive Officer

Yeah, Harlan, I really appreciate that. Nearly everybody who thinks about AI thinks about that chip, that accelerator chip, and in fact that misses the whole point nearly completely. I’ve mentioned before that accelerated computing is about the stack, about the software. And networking: remember, very early on we announced a networking software stack called DOCA, and we have an acceleration library called Magnum IO. These two pieces of software are some of the crown jewels of our company. Nobody ever talks about them because they’re hard to understand, but they make it possible for us to connect tens of thousands of GPUs. How do you connect tens of thousands of GPUs if the operating system of the data center, which is the infrastructure, is not insanely great? That’s the reason we’re so obsessed about networking in the company. And one of the great things we have is Mellanox. As you know quite well, they were the unambiguous leader in high-performance networking; that is why our two companies are together. You also see that our network expands starting from NVLink, which is a computing fabric with really super low latency, and it communicates using memory references, not network packets. Then we take NVLink and connect multiple GPUs, and I’ve described going beyond the GPU; I’ll talk a lot more about that at COMPUTEX in a few days. Then that gets connected to InfiniBand, which includes the NIC and the SmartNIC, BlueField-3, which we’re in full production with, and the switches, all of the fiber optics that are optimized end-to-end. These things are running at incredible line rates. And then beyond that, if you want to connect this AI factory into your computing fabric, we have a brand-new type of Ethernet that we’ll be announcing at COMPUTEX. So this whole area of the computing fabric, extending and connecting all of these GPUs and computing units together, all the way through the networking, through the switches, through the software stack, is insanely complicated. I’m delighted you understand it, but we don’t break it out, particularly because we think of the whole thing as a computing platform, as it should be. We sell it to all of the world’s data centers as components so that they can integrate it into whatever style or architecture they would like, and we can still run our software stack. That’s the reason we break it up: it’s way more complicated to do it that way, but it makes it possible for NVIDIA’s computing architecture to be integrated into anybody’s data center in the world, from clouds of all different kinds to on-prem of all different kinds, all the way out to the edge, to 5G. This way of doing it is really, really complicated, but it gives us incredible reach.

Operator

And our last question will come from Matt Ramsay with TD Cowen. Your line is open.

Matt Ramsay — Cowen and Company — Analyst

Thank you very much. Congratulations, Jensen, and to the whole team. One of the things I wanted to dig into a little bit is the DGX Cloud offering. You guys have been working on this for some time behind the scenes, where you sell in the hardware to your hyperscale partners and then lease it back for your own business, and the rest of us found out about it publicly a few months ago. As we look forward over the next number of quarters, with, as Colette discussed, high visibility in the data center business, maybe you could talk a little bit about the mix you’re seeing of hyperscale customers buying for their own first-party internal workloads versus for their own customers, versus how much of that big upside in data center going forward is systems that you’re selling in with the potential to support your DGX Cloud offerings, and what you’ve learned since you launched it about the potential of that business? Thanks.

Jensen Huang — Founder and Chief Executive Officer

Yeah, thanks, Matt. Without being too specific about numbers, the ideal scenario, the ideal mix, is something like 10% NVIDIA DGX Cloud and 90% the CSPs’ clouds. Our DGX Cloud is the pure NVIDIA stack. It is architected the way we like and achieves the best possible performance. It gives us the ability to partner very deeply with the CSPs to create the highest-performing infrastructure, number one. Number two, it allows us to partner with the CSPs to create markets, like, for example, our partnership with Azure to bring Omniverse Cloud to the world’s industries. The world’s never had a system like that: a computing stack with all the Generative AI stuff and all the 3D stuff and the physics stuff, incredibly large databases and really high-speed, low-latency networks. That kind of virtual world, an industrial virtual world, has never existed before. So we partnered with Microsoft to create Omniverse Cloud inside Azure cloud. It allows us, number two, to create new applications together and develop new markets together, and we go to market as one team. We benefit by getting our customers onto our computing platform, and they benefit by having us in their cloud, number one. And number two, the amount of data and services and security services and all of the amazing things that Azure and GCP and OCI have, customers can instantly have access to through Omniverse Cloud. So it’s a huge win-win. And for the customers, the way NVIDIA’s cloud works for these early applications, they can do it anywhere. One standard stack runs in all the clouds, and if they would like to take their software and run it on the CSP’s cloud themselves and manage it themselves, we’re delighted by that, because NVIDIA AI Enterprise, NVIDIA AI Foundations and, longer-term (this is going to take a little longer) NVIDIA Omniverse will run in the CSPs’ clouds. So our goal really is to drive architecture, to partner deeply in creating new markets and the new applications that we’re doing, and to provide our customers with the flexibility to run NVIDIA everywhere, including on-prem. Those were the primary reasons for it, and it’s worked out incredibly well. Our partnership with the three CSPs where we currently have DGX Cloud, and their sales forces, marketing teams and leadership teams, is really quite spectacular. It works great.

Operator

Thank you. I’ll now turn it back over to Jensen Huang for closing remarks.

Jensen Huang — Founder and Chief Executive Officer

The computer industry is going through two simultaneous transitions: accelerated computing and Generative AI. CPU scaling has slowed, yet computing demand is strong, and now Generative AI has supercharged it. Accelerated computing, a full-stack and data center-scale approach that NVIDIA pioneered, is the best path forward. There’s $1 trillion installed in the global data center infrastructure, based on the general-purpose computing method of the last era. Companies are now racing to deploy accelerated computing for the Generative AI era.

Over the next decade, most of the world’s data centers will be accelerated. We are significantly increasing our supply to meet the surging demand. Large language models can learn information encoded in many forms. Guided by large language models, Generative AI models can generate amazing content, and with models to fine-tune, guardrail, align to guiding principles and ground to facts, Generative AI is emerging from labs and is on its way to industrial applications. As we scale with cloud and Internet service providers, we are also building platforms for the world’s largest enterprises. Whether within one of our CSP partners or on-prem with Dell Helix, whether on a leading enterprise platform like ServiceNow and Adobe or bespoke with NVIDIA AI Foundations, we can help enterprises leverage their domain expertise and data to harness Generative AI securely and safely.

We are ramping a wave of products in the coming quarters, including H100, our Grace and Grace Hopper Superchips, and our BlueField-3 and Spectrum-4 networking platforms. They are all in production. They will help deliver data center-scale computing that is also energy-efficient and sustainable. Join us next week at COMPUTEX, and we’ll show you what’s next. Thank you.

Operator

[Operator Closing Remarks]

Disclaimer

This transcript is produced by AlphaStreet, Inc. While we strive to produce the best transcripts, it may contain misspellings and other inaccuracies. This transcript is provided as is without express or implied warranties of any kind. As with all our articles, AlphaStreet, Inc. does not assume any responsibility for your use of this content, and we strongly encourage you to do your own research, including listening to the call yourself and reading the company’s SEC filings. Neither the information nor any opinion expressed in this transcript constitutes a solicitation of the purchase or sale of securities or commodities. Any opinion expressed in the transcript does not necessarily reflect the views of AlphaStreet, Inc.

© COPYRIGHT 2021, AlphaStreet, Inc. All rights reserved. Any reproduction, redistribution or retransmission is expressly prohibited.
