
NVIDIA Corp (NVDA) Q1 2022 Earnings Call Transcript

NVDA Earnings Call - Final Transcript

NVIDIA Corp (NASDAQ: NVDA) Q1 2022 earnings call dated May 26, 2021

Corporate Participants:

Simona Jankowski — Investor Relations

Colette Kress — Executive Vice President and Chief Financial Officer

Jensen Huang — Founder, President and Chief Executive Officer

Analysts:

Timothy Arcuri — UBS — Analyst

C.J. Muse — Evercore ISI — Analyst

Aaron Rakers — Wells Fargo — Analyst

Vivek Arya — Bank of America Securities — Analyst

John Pitzer — Credit Suisse — Analyst

Stacy Rasgon — Bernstein — Analyst

Presentation:

Operator

Good afternoon. My name is Sydney and I will be your conference operator today. At this time, I would like to welcome everyone to NVIDIA’s Financial Results Conference Call. [Operator Instructions]

Simona Jankowski, you may begin your conference.

Simona Jankowski — Investor Relations

Thank you. Good afternoon, everyone, and welcome to NVIDIA’s conference call for the first quarter of fiscal 2022. With me on the call today from NVIDIA are Jensen Huang, President and Chief Executive Officer; and Colette Kress, Executive Vice President and Chief Financial Officer.

I’d like to remind you that our call is being webcast live on NVIDIA’s Investor Relations website. The webcast will be available for replay until the conference call to discuss our financial results for the second quarter of fiscal 2022. The content of today’s call is NVIDIA’s property. It can’t be reproduced or transcribed without our prior written consent.

During this call, we may make forward-looking statements based on current expectations. These are subject to a number of significant risks and uncertainties, and our actual results may differ materially. For a discussion of factors that could affect our future financial results and business, please refer to the disclosure in today’s earnings release, our most recent forms 10-K and 10-Q, and the reports that we may file on Form 8-K with the Securities and Exchange Commission.

All our statements are made as of today, May 26, 2021, based on information currently available to us. Except as required by law, we assume no obligation to update any such statements. During this call, we will discuss non-GAAP financial measures. You can find a reconciliation of these non-GAAP financial measures to GAAP financial measures in our CFO commentary, which is posted on our website.

With that, let me turn the call over to Colette.

Colette Kress — Executive Vice President and Chief Financial Officer

Thanks, Simona.

Q1 was exceptionally strong with revenue of $5.66 billion and year-on-year growth accelerating to 84%. We set a record in total revenue in Gaming, Data Center and Professional Visualization, driven by our best ever product lineups and structural tailwinds across our businesses.

Starting with Gaming, revenue of $2.8 billion was up 11% sequentially and up 106% from a year earlier. This is the third consecutive quarter of accelerating year-on-year growth, beginning with the fall launch of our GeForce RTX 30 Series GPUs. Based on the Ampere GPU architecture, the 30 Series has been our most successful launch ever, driving incredible demand and setting records for both desktop and laptop GPU sales. Channel inventories are still lean and we expect to remain supply constrained into the second half of the year.

With our Ampere GPU architecture now ramping across the stack in both desktops and laptops, we expect the RTX upgrade cycle to kick into high gear, as the vast majority of our GPU installed base needs to upgrade. Laptops continued to drive strong growth this quarter as we started ramping the Ampere GPU architecture across our lineup. Earlier this month, all major PC OEMs launched GeForce RTX 30 Series laptops based on the 3080, 3070 and 3060 as part of their spring refresh. In addition, mainstream versions based on the 3050 and 3050 Ti will be available this summer, just in time for back-to-school, starting at price points as low as $799.

This is the largest ever wave of GeForce gaming laptops, over 140 in total, as OEMs address the rising demand from gamers, creators and students for NVIDIA-powered laptops. The RTX 30 Series delivers our biggest generational ray tracing [Phonetic] performance leap ever. It also features our second-generation ray tracing technology and frame rate-boosting, AI-powered DLSS. RTX is a reset for graphics, with over 60 accelerated games. This quarter, we added many more, including Call of Duty: Modern Warfare, Crysis Remastered and Outriders. We also announced that DLSS is now available in Unreal Engine 4 and soon in the Unity game engine, enabling game developers to accelerate frame rates with minimal effort.

The RTX 30 Series also offers NVIDIA Reflex, a new technology that reduces system latency. Reflex is emerging as a must-have feature for eSports gamers who play competitive titles like Call of Duty: Warzone, Fortnite, Valorant and Apex Legends. We estimate that about 75% of GeForce gamers play eSport games and 99% of eSports pros compete on GeForce.

We believe gaming also benefited from crypto mining demand, although it’s hard to determine to what extent. We’ve taken actions to optimize GeForce GPUs for gamers, while separately addressing mining demand with cryptocurrency mining processors, or CMPs. Last week, we announced that newly manufactured GeForce RTX 3080, RTX 3070 and RTX 3060 Ti graphics cards will have their Ethereum mining capabilities reduced by half and carry a low hash rate, or LHR, identifier. Along with the updated RTX 3060, this should allow our partners to get more GeForce cards into the hands of gamers at better prices. To help address mining demand, CMP products launched this quarter, optimized for mining performance and efficiency. Because they don’t meet the specifications required of a GeForce GPU, they don’t impact the supply of GeForce GPUs to gamers. CMP revenue was $155 million in Q1, reported as part of the OEM and other category. Our Q2 outlook assumes CMP sales of $400 million.

Our GeForce NOW cloud gaming platform passed 10 million registered members this quarter. GFN offers nearly 1,000 PC games from over 300 publishers, more than any other cloud gaming service, including 80 of the most popular free-to-play games. GFN expands the reach of GeForce to billions of under-powered Windows PCs, Macs, Chromebooks, Android devices, iPhones and iPads. GFN is offered in over 70 countries, with our latest expansions including Australia, Singapore and South America.

Moving to Pro Vis. Q1 revenue was $372 million, up 21% both sequentially and year-on-year. Strong notebook growth was driven by new, sleek and powerful RTX-powered mobile workstations with Max-Q technology, as enterprises continue to support remote workforce initiatives. Desktop workstations rebounded as enterprises resumed spending that had been deferred during the lockdown, with continued growth likely as offices open. Key verticals driving Q1 demand included manufacturing, healthcare, automotive, and media and entertainment.

At GTC we announced the upcoming general availability of NVIDIA Omniverse Enterprise, the world’s first technology platform that enables global 3D design teams to collaborate in real time in a shared space, working across multiple software suites. This incredible technology builds on NVIDIA’s entire body of work and is supported by a large, rapidly-growing ecosystem. Early adopters include sophisticated design teams at some of the world’s leading companies such as BMW Group, Foster and Partners and WPP. Over 400 companies have been evaluating Omniverse and nearly 17,000 users have downloaded the open beta. Omniverse is offered as a software subscription on a per-user and a per-server basis. As the world becomes more digital, virtual and collaborative, we see a significant revenue opportunity for Omniverse. We also announced powerful new Ampere architecture GPUs for next-generation desktop and laptop workstations. The new RTX-powered workstations will be available from all major OEMs.

Moving to automotive, Q1 revenue was $154 million, up 6% sequentially and down 1% year-on-year. Growth in AI cockpit revenue was partially offset by the expected decline in legacy infotainment revenue. We extended our technology leadership with the announcement of the next generation NVIDIA DRIVE Atlan SOC. Atlan will deliver an unrivaled 1,000 trillion operations per second of performance and integrate data center class NVIDIA BlueField networking and security technologies to enhance vehicle performance and safety, making it a true data center on wheels.

Atlan, which targets automakers’ 2025 models, will follow the NVIDIA DRIVE Orin SOC, which delivers 254 TOPS and has been selected by leading vehicle makers for production timelines starting next year. The NVIDIA DRIVE platform has achieved global adoption across the transportation industry. Our automotive design win pipeline now exceeds $8 billion through fiscal 2027. Most recently, Volvo Cars announced that it will use NVIDIA DRIVE Orin, building on our momentum with some of the largest automakers, including Mercedes-Benz, SAIC and Hyundai Motor Group.

In robotaxis, we added GM Cruise to the growing number of companies adopting the NVIDIA DRIVE platform, which include Amazon Zoox and DiDi. We’ve also had great traction with new energy vehicle makers. Our latest wins include Faraday Future, R Auto, IM Motors, and VinFast, which join previously-announced wins with SAIC, Nio, Xpeng and Li Auto.

In trucking, Navistar has partnered with TuSimple in selecting NVIDIA DRIVE for autonomous driving, joining previously-announced Volvo Autonomous Solutions and Plus [Phonetic]. NVIDIA is helping to revolutionize the transportation industry. Our full-stack, software-defined AV and AI cockpit platform spans silicon, systems, software and AI data center infrastructure, enabling over-the-air upgrades to enhance safety and the joy of driving throughout the vehicle’s lifetime. Starting with our lead partner, Mercedes-Benz, NVIDIA DRIVE can transform the automotive industry with amazing technologies delivered through new software and services business models.

Moving to Data Center. Revenue topped $2 billion for the first time, growing 8% sequentially and up 79% from the year-ago quarter, which did not include Mellanox. Hyperscale customers led our growth this quarter as they built infrastructure to commercialize AI in their services. In addition, cloud providers have adopted the A100 to support growing demand for AI from enterprises, start-ups and research organizations. Customers have deployed NVIDIA’s A100 and DGX platforms to train deep neural networks with rising computational intensity, led by two of the fastest growing areas of AI: natural language understanding and deep recommenders.

In March, Google Cloud Platform announced general availability of the A100, with early customers including Square for its Cash App and Alphabet’s DeepMind. The A100 is deployed across all major hyperscale and cloud service providers globally and we see strengthening demand in the coming quarters. Every industry is becoming a technology industry and accelerating investments in AI infrastructure, both through the cloud and on-premise. Our vertical industries grew sequentially and year-on-year, led by consumer internet companies. For example, NAVER, a leading internet technology company in Korea and Japan, is training giant AI language models at scale on DGX SuperPOD to pioneer new services across ecommerce, search, entertainment, and payment applications.

We continue to gain traction in inference with hyperscale and vertical industry customers across a broadening portfolio of GPUs. We had record shipments of GPUs used for inference. Inference growth is driving not just the T4, which was up strongly in the quarter, but also the universal A100 Tensor Core GPU as well as the new Ampere architecture-based A10 and A30 GPUs, all excellent at training as well as inferencing.

Customers are increasingly migrating from CPUs to GPUs for AI inference for two chief reasons. First, GPUs can better keep up with the exponential growth in the size and complexity of deep neural networks and respond with the required low latency. In April’s MLPerf AI inference benchmark, NVIDIA achieved the top results across every category, spanning computer vision, medical imaging, recommender systems, speech recognition, and natural language processing. And second, NVIDIA’s full-stack inference platform, including Triton Inference Server software, simplifies the complexity of deploying AI applications by supporting models from all major frameworks and optimizing for different query types, including batch, real-time and streaming. Triton is supported by several cloud service partners, including Amazon, Google, Microsoft and Tencent. Examples of how customers use NVIDIA’s inference platform include Microsoft for grammar checking in Office, the United States Postal Service for real-time package analytics, T-Mobile for customer service, Pinterest for image search, and GE Healthcare for heart disease detection.
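
To make the deployment flow described above concrete, here is a minimal sketch of a single real-time inference request sent to a Triton-style HTTP endpoint (Triton exposes the v2 inference protocol over HTTP). The host, port, model name and tensor names below are hypothetical placeholders; a production client would more likely use NVIDIA’s tritonclient library rather than raw HTTP.

```python
# Minimal sketch: one real-time inference request against a Triton-style
# v2 HTTP endpoint. URL, model name and tensor names are hypothetical.
import requests

TRITON_URL = "http://localhost:8000"   # assumed Triton HTTP port
MODEL_NAME = "recommender_example"     # hypothetical model name

payload = {
    "inputs": [
        {
            "name": "INPUT__0",        # hypothetical input tensor name
            "shape": [1, 4],
            "datatype": "FP32",
            "data": [0.1, 0.2, 0.3, 0.4],
        }
    ]
}

# POST to the v2 inference route for the chosen model
resp = requests.post(
    f"{TRITON_URL}/v2/models/{MODEL_NAME}/infer", json=payload, timeout=10.0
)
resp.raise_for_status()
print(resp.json()["outputs"])          # one entry per output tensor
```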

We also had strong results with Mellanox networking products. Like our compute business, strong growth was driven by hyperscale customers across both Ethernet and InfiniBand. We achieved key design wins and proof-of-concept trials for the NVIDIA BlueField-2 DPU with cloud service providers and consumer internet companies. We also unveiled BlueField-3, the first DPU built for AI and accelerated computing, with support from VMware, NetApp, Splunk, Cloudflare, and others.

BlueField-3 is the industry’s first 400-gig DPU and delivers the equivalent data center services of up to 300 CPU cores. It transforms traditional server infrastructure into a zero-trust environment in which every user is authenticated, by offloading and isolating data center services from business applications. With BlueField-3, our DPU roadmap will deliver an unrivaled 100x performance increase over a three-year period. As we look back at the first full year since closing the Mellanox acquisition, we are extremely pleased with how the business has performed. It has not only exceeded our financial projections, but it has been instrumental in key new platforms like the DGX SuperPOD and the BlueField DPU, enabling our data center scale computing strategy.

In April, we held our largest ever GPU Technology Conference with more than 200,000 registrants from 195 countries. Jensen’s keynote had over 14 million views. At GTC, we announced our first data center CPU, NVIDIA Grace, targeted at processing massive next-generation AI models with trillions of parameters. The Arm-based processor will enable 10x the performance and energy efficiency of today’s fastest servers. With Grace, NVIDIA has a three-chip strategy with GPU, DPU and now CPU. The Swiss National Supercomputing Center and the US Department of Energy’s Los Alamos National Laboratory are the first to announce plans to build Grace-powered supercomputers.

Grace will be available in early 2023.

GTC is first and foremost for developers. We announced the availability of NVIDIA-developed and optimized pre-trained models on the NVIDIA GPU Cloud registry. Developers can choose a pre-trained model and adapt it to fit their specific needs using NVIDIA TAO, our transfer learning software. TAO fine-tunes the model with the customer’s own small data set, customizing it without the cost, time and massive data sets required to train a neural network from scratch. Once a model is optimized and ready for deployment, users can integrate it with an NVIDIA application framework that fits their use case.
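
The transfer-learning workflow described here, starting from a pre-trained model and fine-tuning it on a small customer data set, can be illustrated with a generic PyTorch sketch. This is an analogy only, not the TAO Toolkit API (TAO is driven through NVIDIA’s own tooling); the class count and data loader are hypothetical.

```python
# Generic transfer-learning sketch (not the TAO API): freeze a pre-trained
# backbone and fine-tune only a small task-specific head on a small data set.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 5                                 # hypothetical customer task

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)  # pre-trained
for param in model.parameters():
    param.requires_grad = False                 # keep pre-trained features fixed
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)  # new head to fine-tune

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def fine_tune(loader, epochs=3):
    """Fine-tune only the new head on the customer's small labeled data set."""
    model.train()
    for _ in range(epochs):
        for images, labels in loader:           # loader yields (images, labels)
            optimizer.zero_grad()
            loss = loss_fn(model(images), labels)
            loss.backward()
            optimizer.step()
```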

For example, the NVIDIA Jarvis framework for interactive conversational AI is now generally available and used by customers such as T-Mobile and Snap, and the NVIDIA Merlin framework for deep recommenders is in open beta with customers such as Snap and Tencent. With the chosen application framework, users can launch NVIDIA Fleet Command software to deploy and manage the AI application across a variety of NVIDIA GPU-powered devices.

For enterprise customers, we unveiled a new enterprise-grade software offering available as a perpetual license or subscription. NVIDIA AI Enterprise is a comprehensive suite of AI software that speeds development and deployment of AI workloads and simplifies management of enterprise AI infrastructure. Through our partnership with VMware, hundreds of thousands of vSphere customers will be able to purchase NVIDIA AI Enterprise with the same familiar pricing model that IT managers use to procure VMware infrastructure software.

We also made several announcements at GTC about accelerating the delivery of both NVIDIA AI and accelerated computing to enterprises and edge users among the world’s largest industries. Leading server OEMs launched NVIDIA-Certified Systems, which are industry-standard servers based on the NVIDIA EGX platform. They run NVIDIA AI Enterprise software and are supported by the NVIDIA A30 and A10 GPUs. Initial customers include Lockheed Martin and Mass General Brigham.

In addition, we announced the NVIDIA AI on 5G platform, supported on NVIDIA EGX servers, to enable high-performance 5G RAN and AI applications. The AI on 5G platform leverages the NVIDIA Aerial software and the NVIDIA BlueField-2 A100 converged card, which combines our GPUs and DPUs. We are teaming with Fujitsu, Google Cloud, Mavenir, Radisys and Wind River in developing solutions based on our AI on 5G platform to speed the creation of smart cities and factories, advanced hospitals and intelligent stores.

Another highlight at GTC was the announcement of a broad range of initiatives to strengthen the Arm ecosystem across cloud, data centers, HPC, enterprise and edge, and PCs. In the cloud, we are bringing together AWS Graviton2 processors and NVIDIA GPUs to provide a range of benefits, including lower cost, support for richer game streaming experiences and greater performance for Arm-based workloads. In HPC, we are bringing together the Ampere Altra CPU with NVIDIA GPUs, DPUs and the NVIDIA HPC Software Development Kit.

Initial supercomputing centers deploying it include Oak Ridge and Los Alamos National Labs. In the enterprise and edge, we’re bringing together Marvell Arm-based OCTEON processors and the NVIDIA GPUs to accelerate video analytics and cybersecurity solutions. And in PCs, we are bringing together MediaTek’s Arm-based processors with NVIDIA’s RTX GPUs to enable realistic ray-trace graphics and cutting edge AI in a new class of Arm-based laptops.

On our Arm acquisition, we are making steady progress in working with the regulators across key regions. We remain on track to close the transaction within our original timeframe of early 2022. Arm’s IP is widely used, but the company needs a partner that can help it achieve new heights. NVIDIA is uniquely positioned to enhance Arm’s capabilities, and we are committed to investing in Arm’s ecosystem, enhancing R&D, adding IP and turbocharging its development to grow into new markets in the data center, IoT and embedded devices, areas where it only has a light footprint or, in some cases, none at all.

Moving to the rest of the P&L, GAAP gross margin for the first quarter was down 100 basis points from a year earlier and up 100 basis points sequentially. Non-GAAP gross margin was up 40 basis points from a year earlier and up 70 basis points sequentially. The sequential non-GAAP increase was largely driven by a more favorable mix within data centers and the addition of CMP products. Q1 GAAP EPS was $3.03, up 106% from a year earlier. Non-GAAP EPS was $3.66, up 103% from a year ago. Q1 cash flow from operations was $1.9 billion.

Let me turn to the outlook for the second quarter of fiscal 2022. We expect broad-based sequential and year-on-year revenue growth across all of our market platforms.

Our outlook includes $400 million in CMP. Aside from CMP, the sequential revenue increase in our Q2 outlook is driven largely by data center and gaming. In data center, we expect sequential growth in both compute and networking. In gaming, with the move to low hash rate GeForce GPUs and an increase in the amount of CMP products, we are making a significant effort to serve miners with CMPs and provide more GeForce cards to gamers. If there is additional CMP demand, we have supply flexibility to support it. We believe these actions, combined with strong gaming demand, will drive an increase in our core gaming business for Q2.

Now to look at our outlook for Q2, revenue is expected to be $6.3 billion, plus or minus 2%. GAAP and non-GAAP gross margins are expected to be 64.6% and 66.5%, respectively, plus or minus 50 basis points. GAAP and non-GAAP operating expenses are expected to be approximately $1.76 billion and $1.26 billion, respectively. GAAP and non-GAAP other income and expenses are both expected to be an expense of approximately $50 million. GAAP and non-GAAP tax rates are both expected to be 10%, plus or minus 1%, excluding discrete items. Capital expenditures are expected to be approximately $300 million to $325 million. Further financial details are included in the CFO commentary and other information available on our IR website.
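
For readers working through the guidance, a quick back-of-the-envelope at the guided midpoints is sketched below. It is illustrative only: it ignores the plus-or-minus ranges, discrete tax items and share count, and simply strings together the figures given above.

```python
# Back-of-the-envelope on the Q2 FY2022 outlook midpoints quoted above
# (illustrative only; ignores guided ranges, discrete items and share count).
revenue       = 6.30e9   # guided revenue, +/- 2%
non_gaap_gm   = 0.665    # guided non-GAAP gross margin
non_gaap_opex = 1.26e9   # guided non-GAAP operating expenses
other_expense = 0.05e9   # guided other income/expense (an expense)
tax_rate      = 0.10     # guided non-GAAP tax rate

gross_profit     = revenue * non_gaap_gm
operating_income = gross_profit - non_gaap_opex
net_income       = (operating_income - other_expense) * (1 - tax_rate)

print(f"Implied non-GAAP gross profit:     ${gross_profit / 1e9:.2f}B")
print(f"Implied non-GAAP operating income: ${operating_income / 1e9:.2f}B")
print(f"Implied non-GAAP net income:       ${net_income / 1e9:.2f}B")
```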

In closing, let me highlight that Jeff Fisher and Manuvir Das will keynote Computex on the evening of May 31, U.S. time, as well as several upcoming events for the financial community. We’ll be virtually attending the Evercore TMT Conference on June 7, the BofA 2021 Global Technology Conference on June 9 and the NASDAQ Virtual Investor Conference on June 16. Our earnings call to discuss our second quarter results is scheduled for Wednesday, August 18.

With that, we will now open the call for questions. Operator, would you please poll for questions?

Questions and Answers:

Operator

[Operator Instructions] And your first question comes from Timothy Arcuri with UBS.

Timothy Arcuri — UBS — Analyst

Thanks a lot. Colette, I was wondering if you can double click a little more on the guidance. I know, of the 600 to 650 [Phonetic] in growth, you said 250 [Phonetic] is coming from CMP and both Gaming and Data Center will be up. I — can we assume that they’re up about equally, so you’re getting about 200 roughly from each of those?

And I guess, second part of that is, within Data Center, I’m wondering, can you speak to the networking piece? It sounds like maybe it was up a bit more modestly than it’s been up the past few quarters. I’m just wondering what the outlook is there. Thanks.

Colette Kress — Executive Vice President and Chief Financial Officer

Yeah. Thanks so much for the question on our guidance. So I first want to start off with: we see demand really across all of our markets, all of our different market platforms, and we do plan to grow sequentially. You are correct that we are expecting an increase in our CMP, and outside of our CMP growth, we expect the lion’s share of our growth to come from our Data Center and Gaming. In our Data Center business, right now, our product lineup couldn’t be better. We have a strong overall portfolio both for training and inferencing and we’re seeing strong demand across our hyperscalers and vertical industries.

We’ve made a deliberate effort on the Gaming side to supply to our gamers the cards that they would like, given the strong demand that we see. So that will also support the sequential growth that we’re seeing. So you’re correct that we do see growth sequentially coming from Data Center and Gaming, both contributing quite well to our growth.

Timothy Arcuri — UBS — Analyst

Thanks a lot, Colette.

Colette Kress — Executive Vice President and Chief Financial Officer

I didn’t answer your second question, my apologies, on Mellanox. Additionally, Mellanox is an important part of our data center. It is quite integrated with our overall products. We did continue to see growth this last quarter, and we are also expecting them to grow sequentially as we move into Q2. They are a smaller part of our overall Data Center business, but again, we do expect them to grow.

Operator

And your next question comes from C.J. Muse with Evercore ISI.

C.J. Muse — Evercore ISI — Analyst

Yeah, good afternoon, thank you for taking the question. In your prepared remarks, I think I heard you talk about a vision for acceleration in data center as we go through the year. And as you think about the purchase obligations that you reported, up 45% year-on-year, how much of that is related to long lead time data center and how should we interpret that in terms of what kind of ramp we could see in the second half, particularly as you think about perhaps adding more growth from enterprise on top of what was hyperscale-driven growth in the April quarter? Thank you.

Colette Kress — Executive Vice President and Chief Financial Officer

So let me take the first part of your question regarding our purchasing, our purchasing of inventory and what we’re seeing in just both our purchase commitments and our inventory. The market has definitely changed to where long lead times are required to build out our Data Center products. So we’re on a steady stream to both commit longer term so that we can make sure that we can serve our customers with the great lineup of products that we have. So yes, a good part of those purchase commitments is really about the long lead times of the components to create the full systems.

I will turn the second part of the question over to Jensen.

Jensen Huang — Founder, President and Chief Executive Officer

What was the second part of the question, Colette?

Colette Kress — Executive Vice President and Chief Financial Officer

The second part of the question was what do we see in the second half as it relates to the lineup for enterprise. And we articulated in our prepared remarks that we see an acceleration. Thank you.

Jensen Huang — Founder, President and Chief Executive Officer

Yeah. Yeah, we’re seeing strength across the board in Data Center and we’re seeing strengthening demand. C.J., our data center platform, as you know, accelerates a range of applications, from scientific computing, both physical and life sciences, data analytics and classical machine learning, cloud computing and cloud graphics, which is becoming more important because of remote work, and, very importantly, AI, both for training as well as inferencing, from classical machine learning models like XGBoost all the way to deep learning-based models like conversational AI, natural language understanding, recommender systems and so on. And so we have a large suite of applications, and our NVIDIA AI and NVIDIA HPC stacks accelerate these applications in data centers. They run on systems that range from HGX for the hyperscalers to DGX for on-prem to EGX for enterprise and edge, all the way out to AGX for autonomous systems.

And this quarter, at GTC, we announced one of our largest initiatives. It’s taken us several years; you’ve seen us working on it out in the open over the course of the last several years. It’s called EGX, our enterprise AI platform. We’re democratizing AI; we’ll bring it out of the cloud, we’ll bring it to enterprises and we’ll bring it out to the edge. And the reason for that is because the vast majority of the world’s automation that has to be done involves data that has data sovereignty issues or data rate issues and can’t move to the cloud easily. And so we have to move the computing to the premises and, oftentimes, all the way out to the edge. The platform has to be secure, it has to be confidential, it has to be remotely manageable and, of course, it has to be high-performance, and it has to be cloud-native, built like a cloud, the modern way of building cloud data centers.

And so this stack has to be modern on the one hand, and it has to be integrated into classical enterprise systems on the other hand, which is the reason why we’ve worked so closely with VMware and accelerated VMware’s data center operating system, their software-defined data center stack, on BlueField. Meanwhile, we ported NVIDIA AI and NVIDIA HPC onto VMware, so that they could run distributed, large-scale accelerated computing for the very first time. That partnership was announced at VMworld and again at GTC, and we’re in the process of going to market with all of our enterprise partners, their OEMs, their value-added resellers, their solution integrators, all over the world.

And so, this is a really large endeavor and the early indications of it are really exciting, and the reason for that is because, as you know, our data center business is already more than 50% vertical industry enterprises. Creating this easy-to-adopt and easy-to-integrate stack is going to allow them to move a lot faster. And so this is the next major wave of AI. This is a very exciting part of our initiative and it’s something that we’ve been working on for quite a long time, and so I’m delighted with the launch this quarter at GTC.

The rest of the data center is doing great too. As Colette mentioned, hyperscale demand is strengthening. We’re seeing that for computing and networking. You know that the world’s cloud data centers are moving to deep learning because every small percentage that they get out of predictive inference drives billions and billions of dollars of economics for them. And so the movement towards deep learning shifts the data center workload away from CPUs because accelerators are so important. And so in hyperscale, we’re seeing great traction and great demand.

And then lastly, supercomputing. Supercomputing centers all over the world are building out, and we’re really in a great position there to fuse, for the very first time, simulation-based approaches with data-driven approaches, which is what we call artificial intelligence. And so across the board, our data center is gaining momentum. We just see great strength right now, and it’s growing strength, and we’re really set up for years of growth in data center. This is the largest segment of computing, as you know, and this segment of computing is going to continue to grow for some time to come.

Operator

And your next question comes from Aaron Rakers with Wells Fargo.

Aaron Rakers — Wells Fargo — Analyst

Yeah, thanks for taking the questions. Congratulations on the results. I’m going to try to slip in two of them here. First of all, Colette, I think in the past you’ve talked about how much of your gaming installed base is still on the pre-ray tracing platforms, really in the context of the upgrade cycle that’s still in front of us. That’s question one.

And then, on the heels of the last question, I’m just curious, things like VMware’s Project Monterey, as we think about the BlueField-2 product and BlueField-3, how should we think about those starting to become, or when should they become, really material incremental revenue growth contributors for the Company? Thank you.

Colette Kress — Executive Vice President and Chief Financial Officer

So, yeah, we have definitely discussed the great opportunity that we have in front of us of folks moving to our ray-traced GPUs, and we’re in the early stages of that. We had a strong cycle already, but still we’re probably at approximately 15%, moving up a little bit from that at this time. So it’s a great opportunity for us to continue to upgrade a good part of that installed base, not only with our desktop GPUs, but the RTX laptops are also a great driver of growth and of upgrading folks to RTX.

Jensen Huang — Founder, President and Chief Executive Officer

Colette, do you want me to take the second one?

Colette Kress — Executive Vice President and Chief Financial Officer

Yes, please.

Jensen Huang — Founder, President and Chief Executive Officer

Yeah. Aaron, great question on BlueField. First of all, the modern data center has to be re-architected for several reasons. There are several fundamental reasons that make it very, very clear that the architecture has to change. The first insight is it’s cloud-native, which means that a data center is shared by everybody. You talk to a tenant [Phonetic], you don’t know who is coming and going, and it’s exposed to everybody on the Internet.

Number two, you have to assume that it’s a zero-trust environment because you don’t know who’s using it. It used to be that we had perimeter security, but those days are gone, because it’s cloud-native, it’s remote access, it’s multi-tenant, it’s public cloud, and the infrastructure is used for internal and external applications. So number two, it has to be zero trust.

The third reason is something that started a long time ago, which is software-defined in every way, because you don’t want a whole bunch of bespoke custom gear inside a data center. You want to operate the data center with software; you want it to be software-defined. The software-defined data center movement enabled this one pane of glass, a few IT managers orchestrating millions and millions of nodes of computers from one place. And that software runs what used to be storage, networking, security, virtualization, and all of those things have become a lot larger and a lot more intensive and are consuming a lot of the data center. In fact, the estimate depends on how much security you want to put on it: if you assume that it’s a zero-trust data center, probably half of the CPU cores inside the data center are not running applications, and that’s kind of strange, because you created the data center to run services and applications, which are the only things that make money.

The other half of the computing is completely soaked up running the software-defined data center, just to provide for those applications. And you could imagine even accepting that, if you like, as the cost of doing business. However, it commingles the infrastructure and security plane with the application plane and exposes the data center to attackers. And so you fundamentally want to change the architecture as a result of that: to offload that software-defined virtualization, the infrastructure operating system, if you will, and the security services, and to accelerate them. Because Moore’s Law has ended, moving software that was running on one set of CPUs, which are really, really good already, to another set of CPUs is not going to make it more effective. Separating it alone doesn’t make it more effective.

And so you want to offload that, take that application and software, and accelerate it using accelerators, a form of accelerated computing. That’s fundamentally what BlueField is all about. We created the processor that allows us to do this: BlueField-2 replaces approximately 30 CPU cores, BlueField-3 replaces approximately 300 CPU cores, to give you a sense of it, and BlueField-4 we’re in the process of building already. And so we have a really aggressive pipeline to do this.

Now, how big is this market? The way to think about that is every single networking chip in the world will be a smart networking chip. It will be a programmable, accelerated infrastructure processor, and that’s what the DPU is. It’s a data center on a chip. And I believe every single server node will have it. It will replace today’s NICs with something like BlueField, and it will offload about half of the software processing that’s consuming data centers today. But most importantly, it will enable this future world where every single packet and every single application is being monitored in real time, all the time, for intrusion. And so how big is that application, how big is that market? Just 25 million servers a year, that’s the size of the market, and we know the servers are growing. And so that gives you a feeling for it.

And in the future, servers are going to move out to the edge, and all of those edge devices will have something like BlueField. And then how are we doing? We’re doing POCs now with just about every Internet company. We’re doing really exciting work there. We’ve included it in high-performance computing, so that it’s possible for supercomputers in the future to be cloud-native, to be zero-trust, to be secured, and still be a supercomputer. And we expect next year to have meaningful, if not significant, revenue contribution from BlueField, and this is going to be a really large growth market for us. You can tell I’m excited about this, and I put a lot of my energy into it. The Company is working really hard on it, and this is a form of accelerated computing that’s going to really make a difference.

Operator

And your next question comes from Vivek Arya with Bank of America Securities.

Vivek Arya — Bank of America Securities — Analyst

Thanks for taking my question. Jensen, is NVIDIA able to ring-fence this crypto impact in your CMP product? So even if, let’s say, crypto goes away for whatever reason, the decline is a lot more predictable and manageable than what we saw in the 2018-’19 cycle. And then part B of that is, how do you think about your core PC gamer demand? Because when we see these kinds of 106% year-on-year growth rates, it brings questions of sustainability. So give us your perspective on these two topics: how does one ring-fence the crypto effect, and what do you think about the sustainability of your core PC gamer demand? Thank you.

Jensen Huang — Founder, President and Chief Executive Officer

Sure. Thanks a lot. First of all, it’s hard to estimate exactly how much and where crypto mining is being done. However, we can only assume that the vast majority of it is contributed by professional miners, especially when the amount of mining increases tremendously like it has. And so we created the CMP and GeForce for mining, but you can’t use CMP for gaming. CMP yields better, and producing those doesn’t take away from the supply of GeForce. And so it protects our GeForce supply for the gamers.

And the question that you have is, what happens on the tail end of this? There are several things that we hope, and we learned a lot from the last time, though you never learn enough about this dynamic. What we hope is that the CMPs will satisfy the miners and will stay in the mines, in the professional mines. And we’re trying to produce a fair amount of them, and we have secured a lot of demand for the CMPs and we will fulfill it.

And what makes it different this time is several things. One, we’re in the beginning of our RTX cycle, whereas Pascal was the last GTX, and that was exactly at the tail end of the GTX cycle. We’re at the very beginning of the RTX 30 cycle. And because we reinvented computer graphics, we reset the computer industry. And after three years, the entire graphics industry has followed. Every game developer needs to do ray tracing; every content developer and every content tool has moved to ray tracing. And once you move to ray tracing, these applications are so much better, and they simply run too slow on GTX. And so we’re seeing a reset of the installed base, if you will.

And at a time when the gaming market is the largest ever, we’ve got this incredible installed base of GeForce users. We’ve reinvented computer graphics, we’ve reset the installed base and we’ve created an opportunity that’s really exciting, at a time when the gaming market, the gaming industry, is really large. And with eSports, gaming is infused into sports; it’s infused into art; it’s infused into social. And so gaming has such a large cultural impact now; it’s the largest form of entertainment. And I think that the experience we’re going through is going to last a while. And so, one, I hope that the CMP will steer our GeForce supply to gamers. We see strong demand, and I expect to see strong demand for quite some time because of the dynamics that I described. And hopefully, with the combination of those two, we’ll see strong growth in our core gaming business through the year.

Operator

And your next question comes from John Pitzer with Credit Suisse.

John Pitzer — Credit Suisse — Analyst

Yeah, good afternoon, guys. Thanks for letting me ask a question. Jensen, I had two hopefully quick questions. First, I harken back to the mantra you guys put out a couple of analyst days ago: the more you spend, the more you save. And you’ve always been very successful, as you’ve brought down the cost of doing something, to really drive penetration growth. And so I’m curious, with the NVIDIA enterprise AI software stack, is there a sense that you can give us of how much that brings down the cost of deploying AI inside the enterprise? And do you think, whether COVID lockdown-related or cost-related, there’s pent-up demand that this unlocks?

And then my second question is just around government subsidies. There’s a lot of talk out of Washington about subsidizing the chip industry, and a lot of that goes towards building fabs domestically. But when I look at AI, I can’t think of anything more important to maintain leadership in relative to national security. How do we think about NVIDIA and the impact that these government subsidies might have on you, your customers or your business trends?

Jensen Huang — Founder, President and Chief Executive Officer

The more you buy, the more you shall save; there’s no question about that. And the reason for that is because we’re in the business of accelerated computing. We don’t accelerate every application. However, for the applications we do accelerate, the acceleration is so dramatic that, even though we sell a component, the TCO of the entire system, with all the service and all the people and the infrastructure and the energy cost, is reduced by X factors, sometimes 10 times, sometimes 15 times, sometimes 5 times.

And so we set our mind on accelerating certain classes of applications. Recently we worked on cuQuantum, so that we could help the quantum computing industry accelerate their simulators, so that they could discover new algorithms and invent future computers. Even though it won’t happen until 2030, for the next 15 to 20 years we’re going to have some really, really great work that we can do using NVIDIA GPUs to do quantum simulations.

We recently did a lot of work in natural language understanding and computational biology so that we could decode biology, understand it, infer from it, predictively improve upon it and design new proteins. That work is so vital. And that’s what accelerated computing is all about.

Our enterprise software, and I really appreciate the question, used to be just about vGPU, which virtualizes a GPU inside the VMware environment or the Red Hat environment and makes it possible for multiple users to use one GPU, which is the nature of enterprise virtualization. But now, with NVIDIA AI, NVIDIA Omniverse and NVIDIA Fleet Command, whether you’re doing collaboration, virtual simulations for robotics and digital twins, designing your factory, or doing data analytics to learn the predictive features that can create an AI model you deploy out at the edge using Fleet Command, we have an extensive suite of software that is consistent with today’s enterprise service agreements. It’s consistent with today’s enterprise business models, and it allows us to support customers directly and provide them with the service promises that they expect, because they’re trying to build mission-critical applications on top.

And more importantly, by productizing our software, we provide the ability for our large network of partners, OEM partners, value-added resellers, system integrators and solution providers, this large network of hundreds of thousands of IT sales professionals that we are connected to, to take a product to market. And so the distribution channel, the sales channel of VMware, the sales channel of Cloudera, the sales channels of all of our partners in EDA and design, Autodesk, Dassault and so forth, all of these sales channels and all of these partners are now taking our stacks to market. And we have fully integrated systems that are open to the OEMs, so that they can create systems that run the stack, all certified, all tested, all benchmarked and, of course, very importantly, all supported.

And so this is a new way of taking our products to market. Our cloud business is going to continue to grow, and that part of AI is going to continue to grow; that business is direct, we sell components directly to them and we support them directly. But there are 10 of those customers in the world. For enterprises, there are thousands, in industries far and wide. And so we now have a great software stack that allows us to take it to the world’s market, so that everybody can buy more and save more.

Operator

And your final question comes from Stacy Rasgon with Bernstein.

Stacy Rasgon — Bernstein — Analyst

Hi, guys, thanks for taking my questions. This one’s for Colette. So, Colette, last quarter you had suggested that Q1 would be the trough for gaming, as well as for the rest of the Company, but gaming in particular, and that it would grow sequentially through the year. Given the strength we’re seeing in the first half, do you still believe that is the case? I heard you guys, I think, dance around that point a little bit in response to one of the other questions, but could you clarify that? Is it still your belief that the core gaming business can grow sequentially through the rest of the year? And I guess the same question for data center, especially since it sounds like hyperscale is now coming back after a few quarters of digestion, on top of all of the other tailwinds you talked about. Is there any reason to think that data center itself shouldn’t also grow sequentially through the rest of the year?

Colette Kress — Executive Vice President and Chief Financial Officer

Yeah, Stacy, thanks for the question. So I first want to start with, when we talked about our Q1 results, we were really discussing a lot about what we expected between Q4 and Q1. Given what we knew was still high demand for gaming, we believed we would continue to grow between Q4 and Q1, which often we don’t, and we absolutely had the strength and overall demand to grow. What that then led to was, again, continued growth from Q1 to Q2, as we are working hard to provide more supply for the strong demand that we see.

We have talked about the additional supply we have coming. We expect to continue to grow as we move into the second half of the year for gaming as well. Now, we only guide one quarter at a time, but our plan is to take the supply, serve the overall gamers, and work on building up the channel, as we know the channel is quite lean. And so, yes, we do still expect growth in the second half of the year, particularly when we see the lineup of games coming for the holidays and back-to-school, all very important cycles for us. And there’s a great opportunity to upgrade this installed base to RTX.

Now, in terms of data center, we’ll work in terms of our guidance here. We have growth from Q1 to Q2 planned in our overall guidance, and we do see, as things continue to open up, a time to accelerate in the second half of the year for data center. We have, again, a great lineup of products here; it couldn’t be a better lineup, now that we’ve also added the InfiniBand products and the host of applications that are using our software. So this could be an opportunity as well to see that continued growth. We’ll work in terms of serving the supply that we need for both of these markets. But yes, we can definitely see growth in the second half of the year.

Operator

There are no further questions at this time. CEO, Jensen Huang, I’ll turn the call back over to you.

Jensen Huang — Founder, President and Chief Executive Officer

Well, thank you. Thank you for joining us today. NVIDIA’s computing platform is accelerating. Launched at GTC, we are now ramping new platforms and initiatives. There are several that I mentioned. First, enabled by the fusion of NVIDIA RTX, NVIDIA AI and NVIDIA [Indecipherable], we built Omniverse, a platform for virtual worlds to enable tens of millions of artists and designers to create together in their own metaverses.

Second, we laid the foundation to be a three-chip data center scale computing company with GPUs, DPUs and CPUs. Third, AI is the most powerful technology force of our time. We partner with cloud and consumer Internet companies to scale out and commercialize AI-powered services. And we’re democratizing AI for every enterprise and every industry. With NVIDIA EGX certified systems, the NVIDIA AI Enterprise suite, pre-trained models for conversational AI, language understanding and recommender systems, and our broad partnerships across the IT industry, we are removing the barriers for every enterprise to access state-of-the-art AI.

Fourth, the work of NVIDIA Clara in using AI to revolutionize genomics and biology is deeply impactful for the health care industry, and I look forward to telling you a lot more about this in the future. And fifth, the electric, self-driving and software-defined car is coming. With NVIDIA DRIVE, we are partnering with the global transportation industry to reinvent the car architecture, reinvent mobility, reinvent driving and reinvent the business model of the industry. Transportation is going to be one of the world’s largest technology industries.

From gaming, metaverses, cloud computing, AI, robotics, self-driving cars, genomics and computational biology, NVIDIA is doing important work and innovating in the fastest-growing markets today. As you can see, on top of our computing platforms that span PC, HPC, cloud and enterprise to the autonomous edge, we’ve also transformed our business model beyond chips. NVIDIA vGPU, NVIDIA AI Enterprise, NVIDIA Fleet Command and NVIDIA Omniverse add enterprise software licenses and subscriptions to our business model, and NVIDIA GeForce NOW and NVIDIA DRIVE, with Mercedes-Benz as the lead partner, are end-to-end services on top of that.

I want to thank all of the NVIDIA employees and partners for the amazing work you’re doing. We look forward to updating you on our progress next quarter. Thank you.

Operator

[Operator Closing Remarks]

Disclaimer

This transcript is produced by AlphaStreet, Inc. While we strive to produce the best transcripts, it may contain misspellings and other inaccuracies. This transcript is provided as is without express or implied warranties of any kind. As with all our articles, AlphaStreet, Inc. does not assume any responsibility for your use of this content, and we strongly encourage you to do your own research, including listening to the call yourself and reading the company’s SEC filings. Neither the information nor any opinion expressed in this transcript constitutes a solicitation of the purchase or sale of securities or commodities. Any opinion expressed in the transcript does not necessarily reflect the views of AlphaStreet, Inc.

© COPYRIGHT 2021, AlphaStreet, Inc. All rights reserved. Any reproduction, redistribution or retransmission is expressly prohibited.
