
NVIDIA Corporation (NVDA) Q1 2023 Earnings Call Transcript

NVIDIA Corporation (NASDAQ: NVDA) Q1 2023 earnings call dated May 25, 2022

Corporate Participants:

Simona Jankowski — Investor Relations

Colette Kress — EVP and Chief Financial Officer

Jensen Huang — Founder, President and CEO

Analysts:

C.J. Muse — Evercore ISI — Analyst

Matt Ramsay — Cowen and Company — Analyst

Stacy Rasgon — Bernstein Research — Analyst

Mark Lipacis — Jefferies — Analyst

Vivek Arya — BofA Securities — Analyst

Tim Arcuri — UBS — Analyst

Ambrish Srivastava — BMO — Analyst

Harlan Sur — JP Morgan — Analyst

Chris Caso — Raymond James — Analyst

Aaron Rakers — Wells Fargo — Analyst

Presentation:

Operator

Good afternoon. My name is David and I’ll be your conference operator today. At this time, I’d like to welcome everyone to NVIDIA’s First Quarter Earnings Call. Today’s conference is being recorded. All lines have been placed on mute to prevent any background noise. After the speakers’ remarks, there will be a question-and-answer session. [Operator Instructions] Thank you.

Simona Jankowski, you may begin your conference.

Simona Jankowski — Investor Relations

Thank you. Good afternoon everyone and welcome to NVIDIA’s conference call for the first quarter of fiscal 2023. With me today from NVIDIA are Jensen Huang, President and Chief Executive Officer; Colette Kress, Executive Vice President and Chief Financial Officer.

I’d like to remind you that our call is being webcast live on NVIDIA’s Investor Relations website. The webcast will be available for replay until the conference call to discuss our financial results for the second quarter of fiscal 2023. The content of today’s call is NVIDIA’s property. It can’t be reproduced or transcribed without our prior written consent.

During this call we may make forward-looking statements based on current expectations. These are subject to a number of significant risks and uncertainties and our actual results may differ materially. For a discussion of factors that could affect our future financial results and business, please refer to the disclosure in today’s earnings release, our most recent Forms 10-K and 10-Q and the reports that we may file on Form 8-K with the Securities and Exchange Commission. All our statements are made as of today, May 25, 2022 based on information currently available to us. Except as required by law, we assume no obligation to update any such statements.

During this call we will discuss non-GAAP financial measures. You can find a reconciliation of these non-GAAP financial measures to GAAP financial measures in our CFO commentary, which is posted on our website.

With that, let me turn the call over to Colette.

Colette Kress — EVP and Chief Financial Officer

Thanks, Simona. We delivered a strong quarter, driven by record revenue in both Data Center and Gaming with strong fundamentals and execution against a challenging macro backdrop. Total revenue of $8.3 billion was a record, up 8% sequentially and up 46% year-on-year. Data Center has become our largest market platform and we see continued strong momentum going forward.

Starting with Gaming. Revenue of $3.6 billion rose 6% sequentially and 31% year-on-year, powered by the GeForce RTX 30 Series product cycle. Since launching in the fall of 2020, the RTX 30 Series has been our best gaming product cycle ever. The gaming industry has grown tremendously, with 100 million new PC gamers added in the past two years according to Newzoo. NVIDIA RTX has set a new standard for the industry, with demand from both first-time GPU buyers as well as those upgrading their PCs to experience the 250-plus RTX-optimized games and apps, double from last year.

We estimate that almost a third of the GeForce gaming GPU installed base is now on RTX. RTX has brought tremendous energy into the gaming world and has helped drive a sustained expansion in our higher-end platforms and installed base, with significant runway still ahead. Overall end demand remained solid, though mixed by region, and demand in the Americas remained strong. However, we started seeing softness in parts of Europe related to the war in Ukraine and in parts of China due to the COVID lockdowns.

As we expect some ongoing impact, and as we prepare for a new architectural transition later in the year, we are projecting Gaming revenue to decline sequentially in Q2. Channel inventory has nearly normalized and we expect it to remain around these levels in Q2. The extent to which cryptocurrency mining contributed to Gaming demand is difficult for us to quantify with any reasonable degree of precision. The reduced pace of increase in the Ethereum network hash rate likely reflects lower mining activity on GPUs. We expect a diminishing contribution going forward.

Laptop gaming revenue posted strong sequential and year-on-year growth, driven by the ramp of the NVIDIA RTX 30 Series lineup. With this year’s spring refresh, and ahead of the upcoming back-to-school season, there are now over 180 laptop models featuring RTX 30 Series GPUs and our energy-efficient, thin-and-light Max-Q technologies, up from 140 at this time last year. Driving this growth are not just gamers, but also the fast-growing category of content creators, for whom we offer dedicated NVIDIA Studio drivers.

We’ve also developed applications and tools to empower artists, from Omniverse for advanced 3D and collaboration, to Broadcast for live streaming, to Canvas for painting landscapes with AI. The creator economy is estimated at $100 billion and powered by 80 million individual creators and broadcasters. We continued to build out our GeForce NOW cloud gaming service. Gamers can now access RTX 3080-class streaming, our new top-tier offering, with a subscription plan of $19.99 a month. We added over 100 games to the GeForce NOW library, bringing the total to over 1,300 games. And last week, we launched Fortnite on GeForce NOW with touch controls for mobile devices, streaming through the Safari web browser on iOS and the GeForce NOW Android app.

Moving to Pro Visualization. Q1 revenue of $622 million was down 3% sequentially and up 67% from a year ago. Demand remained strong as enterprises continued to build out their employees’ remote office infrastructure to support hybrid work. Sequential growth in mobile workstation GPUs was offset by lower desktop revenue. Strong year-on-year growth was supported by the NVIDIA RTX Ampere architecture product cycle. Top use cases include digital content creation at customers such as Sony Pictures Animation and medical imaging at customers such as Medtronic.

In just its second quarter of general availability, our Omniverse Enterprise software is being adopted by some of the world’s largest companies. Amazon is using Omniverse to create digital twins to better optimize warehouse design and flow and to train more intelligent robots; Kroger is using Omniverse to optimize store efficiency with digital twin store simulation; and PepsiCo is using Omniverse digital twins to improve the efficiency and environmental sustainability of its supply chain. Omniverse is also expanding our GPU sales pipeline, driving higher-end and multi-GPU configurations. The Omniverse ecosystem continues to rapidly expand, with third-party developers in the robotics, industrial automation, 3D design and rendering ecosystems developing connections to Omniverse.

Moving to Automotive. Q1 revenue of $138 million increased 10% sequentially and declined 10% from the year-ago quarter. Our DRIVE Orin SoC is now in production and kicks off a major product cycle, with auto customers ramping in Q2 and beyond. Orin has great traction in the marketplace, with over 35 customer wins from automakers, truck makers and robotaxi companies. In Q1, BYD, China’s largest EV maker, and Lucid, an award-winning EV pioneer, were the latest to announce that they are building their next-generation fleets on DRIVE Orin. Our automotive design win pipeline now exceeds $11 billion over the next six years, up from $8 billion just a year ago.

Moving to Data Center. Record revenue of $3.8 billion grew 15% sequentially and accelerated to 83% growth year-on-year. Revenue from hyperscale and cloud computing customers more than doubled year-on-year, driven by strong demand for both external and internal workloads. Customers remain supply constrained in their infrastructure needs and continue to add capacity as they try to keep pace with demand. Revenue from vertical industries grew a strong double-digit percentage from last year. Top verticals driving growth this quarter include consumer Internet companies, financial services and telecom. Overall Data Center growth was driven primarily by strong adoption of our A100 GPU for both training and inference, with large-volume deployments by hyperscale customers and broadening adoption across the vertical industries. Top workloads include recommender systems, conversational AI, large language models and cloud graphics.

Networking revenue accelerated on strong broad-based demand for our next-generation 25, 50 and 100 gig Ethernet adapters. Customers are choosing NVIDIA’s networking products for their leading performance and robust software functionality. In addition, Networking revenue is benefiting from growing demand for DGX SuperPOD and cross-selling opportunities. Customers are increasingly combining our compute and networking products to build what are essentially modern AI factories, with data as the raw material input and intelligence as the output. Our networking products are still supply constrained, though we expect continued improvement throughout the rest of the year.

One of the biggest workloads driving adoption of NVIDIA AI is natural language processing, which has been revolutionized by transformer-based models. Recent industry breakthroughs traced to transformers include large language models like GPT-3, NVIDIA Megatron BERT for drug discovery and DeepMind AlphaFold for protein structure prediction. Transformers allow self-supervised learning without the need for human-labeled data. They enable unprecedented levels of accuracy for tasks such as text generation, translation, summarization and answering questions.
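[Editor’s note: the self-supervision described here can be sketched in a few lines: mask some tokens of raw text and train the model to predict them, so the text itself supplies the labels. The snippet below is an illustrative sketch only; the tokenizer, mask rate and `[MASK]` token name are placeholders, not NVIDIA’s or any specific model’s implementation.]

```python
import random

def make_masked_lm_example(tokens, mask_rate=0.15, mask_token="[MASK]", seed=0):
    """Build one self-supervised training pair: a corrupted input sequence
    plus the positions/values the model must reconstruct. The raw text
    itself supplies the labels, so no human annotation is needed."""
    rng = random.Random(seed)
    inputs, targets = [], {}
    for i, tok in enumerate(tokens):
        if rng.random() < mask_rate:
            inputs.append(mask_token)
            targets[i] = tok          # the model learns to predict this token
        else:
            inputs.append(tok)
    return inputs, targets

tokens = "transformers learn language structure from raw unlabeled text".split()
inputs, targets = make_masked_lm_example(tokens, mask_rate=0.3)
# Every masked position in `inputs` has its original token recorded in `targets`.
assert all(inputs[i] == "[MASK]" for i in targets)
assert all(tokens[i] == tok for i, tok in targets.items())
```

Because the labels come from the data itself, this objective scales to arbitrarily large unlabeled corpora, which is what makes the giant models discussed next feasible.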

To do that, transformers use enormous training data sets and very large neural networks, well into the hundreds of billions of parameters. To run these giant models while keeping inference times low, customers like Microsoft are increasingly deploying NVIDIA AI, including our NVIDIA Ampere architecture-based GPUs and full software stack. In addition, we are seeing a rising wave of customer innovation using large language models, which is driving increased demand for NVIDIA AI and GPU instances in the cloud.
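[Editor’s note: to put “hundreds of billions of parameters” in hardware terms, a rough back-of-envelope follows. The numbers are the editor’s illustrative arithmetic, not figures from the call.]

```python
import math

# Memory footprint of just the weights of a GPT-3-scale model.
params = 175e9              # ~175 billion parameters (GPT-3 scale)
bytes_per_param = 2         # FP16 storage: 2 bytes per parameter
weights_gb = params * bytes_per_param / 1e9

a100_memory_gb = 80         # the 80 GB A100 variant
min_gpus = math.ceil(weights_gb / a100_memory_gb)

print(f"{weights_gb:.0f} GB just for the weights")                  # 350 GB
print(f"at least {min_gpus} A100-80GB GPUs for the weights alone")  # 5
```

Activations, optimizer state and KV caches add substantially more, which is why serving these models means multi-GPU systems connected by fast interconnects.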

At GTC, we announced our next-generation Data Center GPU, the H100, based on the new Hopper architecture. Packed with 80 billion transistors, H100 is the world’s largest, most powerful accelerator, offering an order-of-magnitude leap in performance over the A100. We believe H100 is hitting the market at the perfect time. H100 is ideal for advancing large language models and deep recommender systems, the two largest-scale AI workloads today. We are working with leading server makers and hyperscale customers to qualify and ramp H100, as well as the new DGX H100 AI computing system, which will ramp in volume late in the calendar year.

Building on the H100 product cycle, we are on track to launch our first-ever Data Center CPU, Grace, in the first half of 2023. Grace is the ideal CPU for AI factories. This week at COMPUTEX, we announced that dozens of server models based on Grace will be brought to market by the first wave of system builders, including ASUS, Foxconn, GIGABYTE, QCT, Supermicro and Wiwynn. These servers will be powered by the NVIDIA Grace CPU Superchip, which features two CPUs, and the Grace Hopper Superchip, which pairs an NVIDIA Hopper GPU with an NVIDIA Grace CPU in an integrated module.

We’ve introduced new reference designs based on Grace for the massive new workloads of next-generation Data Centers: CGX for cloud graphics and gaming, OVX for digital twins and Omniverse, and HGX for HPC and AI. These server designs are all optimized for NVIDIA’s rich accelerated computing software stacks and can be qualified as part of our NVIDIA-Certified Systems lineup. The enabler for the Grace Hopper and Grace Superchips is our ultra-energy-efficient, low-latency, high-speed memory-coherent interconnect called NVLink, which scales from die to die, chip to chip and system to system. With NVLink, we can configure Grace and Hopper to address a broad range of workloads. Future NVIDIA chips, the CPUs, GPUs, DPUs, NICs and SoCs, will integrate NVLink just like Grace and Hopper, based on our world-class SerDes technology. We are making NVLink open to customers and partners to implement custom chips that connect to NVIDIA’s platforms.

In Networking, we’re kicking off a major product cycle with the introduction of Spectrum-4, the world’s first 400-gigabit per second end-to-end Ethernet networking platform, including the Spectrum-4 switch, ConnectX-7 SmartNIC, BlueField-3 DPU and the DOCA software. Built for AI, NVIDIA Spectrum-4 arrives as data centers are growing exponentially and demanding extreme performance, advanced security and powerful features to enable high performance advanced virtualization and simulation at scale. Across our businesses, we are launching multiple new GPU, CPU, DPU and SoC products over the coming quarters with a ramp in supply to support the customer demand.

Moving to the rest of the P&L. GAAP gross margin for the first quarter was 65.5% and non-GAAP gross margin was 67.1%, up 90 basis points from a year ago and up 10 basis points sequentially. We have been able to offset rising costs and supply chain pressures. We expect to maintain gross margins at current levels in Q2. Going forward, as new products ramp and software becomes a larger percent of revenue, we have opportunities to increase gross margins longer term. GAAP operating margin was 22.5%, impacted by a $1.35 billion acquisition termination charge related to the Arm transaction. Non-GAAP operating margin was 47.7%. We are closely managing our operating expenses to balance the current macro environment with our growth opportunities. We have been very successful in hiring so far this year and are now slowing hiring to integrate these new employees. This also enables us to focus our budget on taking care of our existing employees as inflation persists.
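[Editor’s note: as a rough check on the GAAP-to-non-GAAP gap, the $1.35 billion Arm termination charge alone accounts for most of it. The back-of-envelope below uses the round $8.3 billion revenue figure; NVIDIA’s CFO commentary has the exact reconciliation.]

```python
# Approximate the operating margin impact of the one-time Arm charge.
revenue = 8.3e9
gaap_op_margin = 0.225
arm_charge = 1.35e9            # Arm acquisition termination charge

gaap_op_income = revenue * gaap_op_margin
ex_charge_margin = (gaap_op_income + arm_charge) / revenue
print(f"operating margin excluding the Arm charge: {ex_charge_margin:.1%}")
# The charge alone lifts the margin from 22.5% to roughly 38.8%; stock-based
# compensation and other standard non-GAAP exclusions bridge the rest of the
# way to the 47.7% non-GAAP figure.
```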

We are still on track to grow our non-GAAP operating expenses in the high-20s percentage range this year. We expect sequential increases to level off after Q2, as the first half of the year includes a significant amount of expenses related to the bring-up of multiple new products, which should not recur in the second half. During Q1, we repurchased $2 billion of our stock. Our Board of Directors increased and extended our share repurchase program to repurchase additional common stock up to a total of $15 billion through December 2023.

Let me now turn to the outlook for the second quarter of fiscal 2023. Our outlook assumes an estimated impact of approximately $500 million relating to Russia and the COVID lockdowns in China. We estimate the impact of lower sell-through in Russia and China to affect our Q2 Gaming sell-in by $400 million. Furthermore, we estimate the absence of sales to Russia to have a $100 million impact on Q2 Data Center. We expect strong sequential growth in Data Center and Automotive to be more than offset by the sequential decline in Gaming.

Revenue is expected to be $8.1 billion, plus or minus 2%. GAAP and non-GAAP gross margins are expected to be 65.1% and 67.1%, respectively, plus or minus 50 basis points. GAAP operating expenses are expected to be $2.46 billion. Non-GAAP operating expenses are expected to be $1.75 billion. GAAP and non-GAAP other income and expenses are expected to be an expense of approximately $40 million, excluding gains and losses on non-affiliated investments. GAAP and non-GAAP tax rates are expected to be 12.5%, plus or minus 1%, excluding discrete items. And capital expenditures are expected to be approximately $400 million to $450 million.
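[Editor’s note: taken at the midpoints, this guidance implies roughly the following non-GAAP gross profit and operating margin. The arithmetic is the editor’s, not figures stated on the call.]

```python
# Midpoint arithmetic implied by the Q2 guidance above.
revenue = 8.1e9            # guided revenue midpoint
non_gaap_gm = 0.671        # guided non-GAAP gross margin midpoint
non_gaap_opex = 1.75e9     # guided non-GAAP operating expenses

gross_profit = revenue * non_gaap_gm
operating_income = gross_profit - non_gaap_opex
operating_margin = operating_income / revenue

print(f"implied non-GAAP gross profit: ${gross_profit / 1e9:.2f}B")   # ~$5.44B
print(f"implied non-GAAP operating margin: {operating_margin:.1%}")   # ~45.5%
```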

Further financial details are included in the CFO commentary and other information available on our IR website. In closing, let me highlight the upcoming events for the financial community. We will be attending the BofA Securities Technology Conference in person on June 7, where Jensen will participate in a keynote fireside chat. Our earnings call to discuss the results of our second quarter of fiscal 2023 is scheduled for Wednesday, August 24.

We will now open the call for questions. Operator, could you please poll for questions? Thank you.

Questions and Answers:

Operator

Thank you. [Operator Instructions] We’ll take our first question from C.J. Muse with Evercore ISI. Your line is open.

C.J. Muse — Evercore ISI — Analyst

Yeah, good afternoon. Thank you for taking the question. I guess I would love to get an update on how you’re thinking about the gaming cycle from here. The business has essentially doubled over the last two years, and now we’ve got some crosswinds, with crypto falling off, the channel potentially clearing ahead of a new product cycle, and the macro challenges you talked about. But at the same time, only a third of the installed base has RTX, and we’re moving out from under supply constraints. So, I would love to hear your thoughts from here, once we get beyond the challenges around the COVID lockdowns in the July quarter, on how you’re thinking about gaming trends.

Jensen Huang — Founder, President and CEO

Yeah, C.J., thanks for the question. You captured a lot of the dynamics well in your question. The underlying dynamics of the gaming industry are really solid. Net of the situation with the COVID lockdowns in China and the war in Russia, the rest of the market is fairly robust, and we expect the gaming dynamics to be intact. Several things are driving the gaming industry: in the last two years alone, 100 million new gamers came into the PC industry.

The format has expanded tremendously, as have the ways that people are using their PCs: to connect with friends, to be an influencer with a platform of their own, to broadcast. So many people are now using their home PCs as their second workstation, if you will, a second studio, because they’re also working from home. It is our primary way of communicating these days. The need for GeForce PCs has never been greater. And so I think the fundamental dynamics are really good. As we look into the second half of the year, it’s hard to predict exactly when COVID and the war in Russia are going to be behind us. But nonetheless, the governing dynamics of the gaming industry are great.

Operator

Next we’ll go to Matt Ramsay with Cowen. Your line is open.

Matt Ramsay — Cowen and Company — Analyst

Thank you very much. Good afternoon. Jensen, I wanted to ask a question on the Data Center business. In this upcoming cycle with H100, there are some IO upgrades happening in servers that I think are going to be a fairly strong driver for you, in addition to what’s going on with Hopper and the huge performance leaps that are there. I wanted to ask a longer-term question, though, around your move to NVLink with Grace and Hopper and what’s going on with your whole portfolio. Do you envision the business continuing to be sort of card-driven, attached to third-party servers, or do you think revenue shifts, dramatically or in a small way over time, to be more vertically integrated, with all of the chips together on NVLink? And how is the industry responding to that potential move? Thanks.

Jensen Huang — Founder, President and CEO

Yeah. I appreciate the question. Let’s see. The first point that you made is a very big point. The next generation of servers being teed up right now are all Gen5. The IO performance is substantially higher than what was available before, so you’re going to see a pretty large refresh as a result of that: brand-new networking cards from our company and others. Gen5, of course, drives a new platform refresh, and so we’re perfectly timed to ramp into the Gen5 generation with Hopper.

There are a lot of different system configurations you want to make. If you take a step back and look at the types of systems that are necessary for data processing, scientific computing, machine learning and training, and inference, done in the cloud for hyperscalers, done on-prem for enterprise computing, done at the edge, each one of these workloads and deployment locations, and the way that you manage them, would dictate a different system architecture. So there isn’t one size that fits all, which is one of the reasons why it’s so terrific that we support PCI Express, and that we innovated a coherent chip-to-chip interconnect before anybody else did. This is NVLink, seven years ago. We’re in our fourth generation of NVLink, which allows us to connect two chips next to each other: two dies, two chips, two modules, two SXM modules, up to two systems, up to multiple systems.

And so our coherent chip-to-chip link, NVLink, has made it possible for us to mix and match chips, dies, packages and systems in all of these different types of configurations. And I think that over time you’re going to see even more types of configurations. The reason for that has to do with a couple of very important new types of Data Centers that are emerging, and you’re starting to see that now with fairly large installations of infrastructure with NVIDIA HPC and NVIDIA AI. These are really AI factories, where you’re processing the data, refining the data and turning that data into intelligence. These AI factories are essentially running one major workload, and they’re running 24/7. Deep recommender systems are a good example of that. In the future, you’re going to see large language models essentially becoming platforms themselves that would be running 24/7, hosting a whole bunch of applications.

And then on the other hand, you’re seeing data centers at the edge that are going to be robotics or autonomous data centers, running 24/7. They are going to be running in factories, in retail stores, in warehouses, logistics warehouses, all over the world. So, these two new types of data centers are just emerging, and they also have different architectures. The net of it all is our ability to support every single workload, because we have a universal accelerator: we run every single workload from data processing to data analytics to high-performance computing to training to inference; we support Arm and x86; we support PCI Express, multi-system NVLink, multi-chip NVLink, multi-die NVLink. That capability makes it possible for us to really serve all of these different segments.

With respect to vertical integration, a better way of saying that may be that system integration is going to come in all kinds of different ways. We’re going to do semi-custom chips as we’ve done with many companies in the past, including Nintendo. We’ll do semi-custom chiplets, as we do with NVLink. NVLink is open to our partners, and they can bring it to any fab and connect it coherently into our chip. We can do multi-module packages; we can do multi-package systems. So, there are a lot of different ways to do system integration.

Operator

Next we’ll go to Stacy Rasgon with Bernstein Research. Your line is now open.

Stacy Rasgon — Bernstein Research — Analyst

Hi guys, thanks for taking my question. I wanted to follow up on the sequential. So, Colette, I know you said the $500 million was a $400 million hit to Gaming and a $100 million hit to Data Center. I’m assuming that doesn’t mean that Gaming is down $400 million. I mean, do you see Gaming actually down more than the actual Russia and lockdown hit? And I guess, just how do I think about the relative sequentials of the businesses in light of those constraints that you guys are facing?

Colette Kress — EVP and Chief Financial Officer

Sure. Let me start first with what that means for Gaming, what that means for Gaming in Q2. We do expect Gaming to decline in Q2. We still believe our end demand remains very strong. Ampere has just been a great architecture, and there are many areas where we continue to see strength and growth in both our sell-through and what we will likely see added into the channel as well. But in total, Q2 Gaming will decline from Q1, probably in the teens, as we try to work through some of these lockdowns in China, which are holding us up. So, overall, demand for Gaming is still strong, and we still expect end demand to grow year-over-year in Q2.

Operator

Next we’ll go to Mark Lipacis with Jefferies. Your line is open.

Mark Lipacis — Jefferies — Analyst

Hi, thanks for taking my question. If you listen to the networking OEMs this earnings season, it seems there was a lot of talk about increased spending by enterprises on their data centers, and sometimes you hear them talking about how this has been driven by AI. You talked about your year-over-year growth in your cloud versus enterprise spending. I wonder if you could talk about what you are seeing sequentially. Are you seeing a sequential inflection in enterprise? And can you talk about the attach rate of software for enterprises versus data centers, and which software you are seeing the most interest in? I know you talked about it. Is it Omniverse, is it natural language processing, is there one big driver, or is there a bunch of drivers for the various different software packages you have? Thank you.

Jensen Huang — Founder, President and CEO

Yeah, thanks, Mark. We had a record Data Center business this last quarter. We expect to have another record quarter this quarter, and we’re fairly enthusiastic about the second half. AI and data-driven machine learning techniques, which were developed for writing software and extracting insight from the vast amount of data that companies have, are incredibly strategic to all the companies that we know, because in the final analysis, AI is about the automation of intelligence, and most companies are about domain-specific intelligence. They want to produce intelligence.

And there are several techniques now that have been created to make it possible for most companies to apply their data to extract insight and to automate a lot of the predictive things that they have to do, and to do it quickly. And so I think the trend that you hear other people experiencing, about machine learning, data analytics, data-driven insights, artificial intelligence, however it’s described, is all exactly the same thing, and it’s sweeping just about every industry and every company.

Our Networking business is also highly supply constrained. Demand is really high, and it requires a lot of components aside from just our chips: components and transceivers and connectors and cables. It’s a really complicated system, the network, and there are many physical components, and so the supply chain has been problematic. We’re doing our best. Our supply has been increasing from Q4 to Q1, and we’re expecting it to increase in Q2 and to increase in Q3 and Q4. And so we’re really grateful for the support from the component industry around us, and we’ll be able to increase that.

With respect to software, well, first of all, there are all kinds of machine learning models: computer vision, speech AI, natural language understanding, all kinds of robotics applications. Probably the largest, most visible one is self-driving cars, which is essentially a robotic AI. And then recently, this incredible breakthrough of an AI model called the transformer has led to really significant advances in natural language understanding. And so there are all these different types of models. There are thousands and thousands of species of AI models used in all these different industries.

One of my favorites, I’ll just say very quickly, and then I’ll answer the question about the software. One of my favorites is using transformers to understand the language of chemistry, or using transformers and AI models to understand the language of proteins and amino acids, which is genomics. To apply AI to recognize the patterns, to understand the sequence, and essentially to understand the language of chemistry and biology, is a really important breakthrough. And all of this excitement around synthetic biology, much of it stems back to some of these inventions.

But anyhow, all of these different models need an engine to run on, and that engine is called NVIDIA AI. In the case of the hyperscalers, they can cobble together a lot of open source, and we provide a lot of our source and a lot of our engines to them to operate their AI. But enterprises need someone to package it together and be able to support it, refresh it, update it for new architectures, support old architectures in their installed base, et cetera, across all the different use cases that they have. And so that engine is called NVIDIA AI. It’s almost like a SQL engine, if you will, except this is an engine for artificial intelligence.

There is another engine that we provide, and that engine is called Omniverse. It’s designed for the next wave of AI, where artificial intelligence has to not just manipulate information, like recommender systems and conversational systems and such, but has to interact with physical systems, whether that’s interacting with physics directly, meaning robotics, or being able to automate physical systems, like heat recovery steam generators, which is really important today. And so Omniverse is designed to sit at the interface, the intersection, between simulation and artificial intelligence. This is what Omniverse is about.

Let’s see, we’re still early in the deployment of Omniverse under our commercial license. It’s been a couple of quarters now since we released Omniverse Enterprise. And I think at this point, 10% of the world’s top 100 companies are already customers, licensing customers, with substantially more who are evaluating. I think it’s been downloaded nearly 200,000 times. It is being tried in some 700 companies, and Colette highlighted some of them. You might see some of the companies that are using it, in all kinds of interesting applications, at GTC. And so I fully expect that the NVIDIA AI engine and the Omniverse engine are going to be very successful for us in the future and contribute greatly to our earnings.

Operator

Next we’ll go to Vivek Arya with BofA Securities. Your line is open.

Vivek Arya — BofA Securities — Analyst

Thanks. I just wanted to clarify, Colette, if your Q2 outlook includes any restocking benefit from the new products that you’re planning to launch this year. And then, Jensen, my question is for you. You’re still guiding Data Center to very strong, I think close to 70% or so, year-on-year growth despite all the headwinds. Are you worried at all about the headlines around the slowdown in the macro economy? Like, is there any cyclical impact on Data Center growth that we should keep in mind as we think about the second half of the year?

Colette Kress — EVP and Chief Financial Officer

Yeah, let me first answer the question that you asked regarding any new products as we look at Q2. As we discussed, most of the ramp of our new architectures will come in the back half of the year. For example, Hopper will probably be here in Q3, but starting to ramp closer to the end of the calendar year. So, you should think about most of our product launches as ramping in the second half of the year.

I’ll turn it over for Jensen for the rest.

Jensen Huang — Founder, President and CEO

Thanks. Our Data Center demand is strong and remains strong. Hyperscale and cloud computing revenue, as you mentioned, has grown significantly; it has doubled year-over-year. And we're seeing really strong adoption of A100. A100 is really quite special and unique in the world of accelerators, and this is one of the really great innovations as we extended our GPU from graphics to CUDA to Tensor Core GPUs. It's now a universal accelerator. And so you could use it for data processing, for ETL, for example, Extract, Transform and Load. You could use it for database acceleration; many SQL functions are accelerated on NVIDIA GPUs. We accelerate RAPIDS, which is the Python, data-center-scale version of Pandas, and we accelerate Spark 3.0.

And so from database queries to data processing to the extraction, transformation and loading of data before you do training and inference, and whatever image processing or other algorithmic processing you need to do, all of it can be fully accelerated on A100, and so we're seeing great success there. At the core, and closer to what is happening today, you're seeing several very important new AI models being invested in at very large scale and with great urgency. You have probably heard about deep recommender systems. This is the economic engine, the information filtering engine, of the Internet.
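[Editor's illustration] The drop-in acceleration described here, moving Pandas-style ETL from CPU to GPU, can be sketched as follows. RAPIDS cuDF mirrors much of the Pandas API, so in this minimal, hypothetical example the same filter/groupby/aggregate code can run on either backend by swapping the import:

```python
import pandas as pd  # on a GPU, RAPIDS cuDF is a near drop-in: `import cudf as pd`

# A toy ETL step: load, transform, aggregate.
orders = pd.DataFrame({
    "user":  [1, 1, 2, 2, 3],
    "spend": [10.0, 5.0, 7.0, 3.0, 4.0],
})

# Transform: filter out small orders, then aggregate spend per user.
big = orders[orders["spend"] >= 4.0]
per_user = big.groupby("user", as_index=False)["spend"].sum()
print(per_user)
```

The column names and threshold are invented for illustration; the point is that the dataframe operations themselves are the unit of acceleration, not a GPU-specific API.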

If not for the recommender system, it would be practically impossible for us to enjoy our Internet experience, our shopping experience, with trillions of things changing in the world constantly every day, and to be able to use your 3-inch phone to even engage the Internet. All of that magic is made possible by this incredible thing called a recommender system. The second thing is conversational AI. You're seeing chatbots and website customer service, even live customer service, now supported by conversational AI, as an opportunity to enhance customer service on the one hand and, on the other hand, to supplement for a labor shortage.

And then the third is this groundbreaking piece of work related to transformers that led to the natural language understanding breakthrough. Within it is this incredible thing called large language models, which embed human knowledge because they have been trained on so much data. We recently announced Megatron 530B; it was a collaboration with Microsoft and the foundation of what I think they call Turing. This language model and others like it, like OpenAI's GPT-3, are really transformative, and they take an enormous amount of computation. However, the net result is a pre-trained model that is really quite remarkable.

Now, we're working with thousands of startups and large companies that are building on the public cloud, and so it's driving a lot of demand for us in the public cloud. I think we now have 10,000 AI Inception startups that are working with us and using NVIDIA AI, whether it's on-prem or in the cloud. It saves money because the computation time is significantly reduced, the quality of service is a lot better and they can do greater things. And so that's driving AI in the cloud. All of these different factors, whether it's the industrial recognition of the importance of AI, the transformative nature of these new AI models, recommender systems, large language models and conversational AI, or the thousands of companies around the world using NVIDIA AI in the cloud and driving public cloud demand, all of these are driving our Data Center growth. And so we expect Data Center demand to remain strong.

Operator

Next we’ll go to Tim Arcuri with UBS. Your line is open.

Tim Arcuri — UBS — Analyst

Thank you very much. I had a question about this $500 million impact for July and whether it's more supply related or demand related. And that's because most others in semis are sort of citing this China stuff in particular as more of a logistics issue, so more of a supply issue. But the language, Colette, that you were using in your commentary cited lower sell-through in Gaming and the absence of sales in Russia. To me that sounds a little more demand related, which would make sense in the context of this new freeze on hiring that you have. So, I ask because if it's supply related, then you could argue that it's not perishable and really just timing, but if it's demand related, that might never come back and it could be the beginning of a falling knife. So, I wonder if you can walk through that for me. Thanks.

Colette Kress — EVP and Chief Financial Officer

Thanks, Tim, for the question. Let me try to take the China and Russia pieces here; they are two very different things. The current China lockdowns that we are seeing, interestingly, have implications for both supply and demand. We have seen challenges in terms of logistics throughout the country, things going in and out of the country; it puts a lot of pressure on logistics that were already under pressure. From a demand perspective, Gaming has also been hit. You have very large cities that are in full lockdown, focusing really on other important things for their citizens. So, it's impacting our demand.

We do believe that they will come out of COVID and the demand for our products will come back; we do believe that will occur. The supply will sort itself out; it's very difficult to determine how. Now in the case of Russia, we're not selling to Russia. That's something that we announced earlier. Russia has historically been about 2% of our overall company revenue, and a little larger percentage when you look at our Gaming business. Hope that helps.

Operator

Next we’ll go to Ambrish Srivastava with BMO. Your line is now open.

Ambrish Srivastava — BMO — Analyst

Hi, thank you very much, Colette and Jensen. I actually really appreciated that you called out demand; for most chip companies, it seems like it's healthy [Phonetic] to say demand is a problem, so it's refreshing to hear that. I had a question on the second half, and it relates to both Data Center and Gaming. The last couple of times you've talked publicly, you have made comments that your visibility into Data Center has never been better. So, I was wondering, if you just take out the Russia impact, is that still true, are all the orders that you have been getting still intact? You did say that the business would see strong momentum, and I just wanted to make sure that statement of confidence stands. And then on Gaming, Colette, do we expect the second half to be up year-over-year just based on the guide for the second quarter? It seems like it could be up sequentially, but may not return to year-over-year growth in Q3 [Phonetic]. Thank you.

Jensen Huang — Founder, President and CEO

Yeah, Ambrish, thanks for your question. On first principles, it should be the case that our visibility into Data Center is vastly better than a couple of years ago, and the reasons for that are several. One, if you recall two or three years ago, deep learning and AI were starting to accelerate in the most computer-science-intensive companies of the world, the CSPs and hyperscalers, but just about everywhere else it was still quite nascent, and there were a couple of reasons for that.

Obviously, the understanding of the technology was not as pervasive at the time. The type of industrial use cases for artificial intelligence required labeling of data, which is really quite difficult. And now with transformers, you have unsupervised learning and other techniques, like zero-shot learning, that allow us to do all kinds of interesting things without having to have human-labeled data. We even have synthetically generated data with Omniverse that helps customers do data generation without having to label data, which is either too costly or, quite frankly, oftentimes impossible.

And so now the knowledge and the technology have evolved to a place where most industries can use artificial intelligence in a fairly effective way, and in many industries it is rather transformative. So I think, number one, we went from cloud and hyperscalers to all of industry. Second, we went from training focused to inference. Most people thought that inference was going to be easy. It turns out inference is by far the harder part, and the reason for that is because there are so many different models, so many different use cases and so many quality-of-service requirements, and you want to run these inference models in as small a footprint as you can.

And so, when you scale out, the number of users that use the service is really quite high. Using acceleration on NVIDIA's platform, we can inference any model, from computer vision to speech to chemistry to biology, you name it. And we do it so quickly and so fast that the cost is very low; the more acceleration you do, the more money you save, and I think that wisdom is absolutely true. So the second dimension is training to inference. The third dimension is that we now have so many different types of system configurations that we can go from high performance computing systems all the way to cloud, to on-prem, to Edge. And the final concept is really this industrial deployment of AI that is allowing us to find growth in just about every industry.

And so, as you know, our cloud and hyperscalers are growing very quickly. However, the vertical industries, which are financial services and retail and telco and so on, have also grown very nicely. And so in all of those different dimensions, our visibility should be a lot better. And then, starting a couple of years ago, by adding the Mellanox portfolio to our company, we're able to provide a lot more solution-oriented, end-to-end platform solutions for companies that don't have the skills or the technical depth to stand up these sophisticated systems, and so our networking business is growing very nicely as well.

Operator

Next we’ll go to Harlan Sur with J.P. Morgan. Your line is open.

Harlan Sur — JP Morgan — Analyst

Hi, good afternoon. Thanks for letting me ask a question. I just wanted to ask this question a little bit more directly. It's good to see the team being able to navigate the dynamic supply chain environment, right? You booked strong sequential growth in Data Center in the April quarter, and here in the July quarter even with some demand impact from Russia, right? And so as we think about the second half of the year, cloud spending is strong, and I think it's actually accelerating. You're getting ready to ramp H100 later in the year. Mellanox, I think, is getting more supply as you move through the year. And in general, I think previously you were anticipating sequential supply and revenue growth for the business through this entire year. I understand the uncertainty around Gaming, but does the team expect continued sequential growth in Data Center through the remainder of the year?

Jensen Huang — Founder, President and CEO

Either way, the answer is yes. We see strong demand in Data Center, from hyperscale and cloud computing to vertical industries. Ampere is going to continue to scale out; it's been qualified in every single company in the world. And so, after two years, it remains the best universal accelerator on the planet, and it's going to continue to scale out in all these different domains and different markets. We are going to layer on top of that a brand new architecture, Hopper, and a brand new networking architecture: Quantum-2, CX-7, BlueField-3. And we have increasing supply. And so we're looking forward to an excellent quarter next quarter again for Data Center and going into the second half.

Operator

Next we’ll go to Chris Caso with Raymond James. Your line is open.

Chris Caso — Raymond James — Analyst

Yes, thank you. I wonder if you could speak a little bit about the purchase obligations, which seem like they were up again in the quarter. Was that a function of longer-dated obligations or a higher magnitude of obligations? And maybe you could speak to supply constraints in general. You've mentioned a couple of times on the call the continued constraints in the Networking business. What about the other parts of the business? Where are you still constrained?

Colette Kress — EVP and Chief Financial Officer

Yeah, so let me start here and I'll see if Jensen wants to add to that. Our purchase obligations, as well as our prepaids, have two major things to keep in mind. One, for the first time ever, we are prepaying to make sure that we have supply and those commitments long term. Additionally, many of our purchase obligations are for long-lead-time items that we must procure to make sure we have products coming to market.

A good percentage of our purchase commitments is for our Data Center business, which, as you can imagine, involves much larger, much more complex systems, and those are things we are procuring to make sure we can feed the demand both in the upcoming quarters and beyond. As for areas where we are still a little bit supply constrained: in Networking, our demand is quite strong. We've been improving it [Indecipherable]. But yes, we still have supply constraints in Networking.

There are others that you want to add on, Jensen?

Jensen Huang — Founder, President and CEO

No, you were perfect. It was perfect.

Operator

Our final question comes from Aaron Rakers with Wells Fargo. Your line is open.

Aaron Rakers — Wells Fargo — Analyst

Yeah, thanks for fitting me in. Most of my questions around Gaming and Data Center have been answered, so I guess I'll ask about the Auto segment. While it's still small, you clearly sound confident in that business starting to see, quote-unquote, significant sequential growth into this next quarter. I wonder if you could help us think about the trajectory of that business over the next couple of quarters. I think in the past you've said that it should start to really inflect higher as we move into the second half of the year. Just curious if you could help us think about that piece of the business.

Jensen Huang — Founder, President and CEO

Several data points. We have just started shipping Orin; this was our first quarter of shipping production Orin. Orin is a robotics processor, designed for a software-defined robotic car, a robotic pick-and-placer or a robotic logistics mover. We've been designed in at 35 car, truck and robo-taxi companies, and more if you include logistics movers, last-mile delivery systems and farming equipment. The number of design wins for Orin is really quite fantastic.

Orin is a revolutionary processor, and it's designed as, if you will, a data center on a chip. It is the first data center on a chip that is robotic: it processes sensor information, it's safe, it has the ability to be rather resilient, it has confidential computing, and it is designed to be secure, because these data centers are going to be everywhere. And so Orin is really a technological marvel in production. We very likely just experienced our lowest Auto quarter for some time. And the reason for that is because, over the next six years or so, we have an estimated $11 billion and counting of business that we've secured, and so I think it's a fairly safe thing to say now that Orin and our autonomous vehicle and robotics business is going to be our next multi-billion-dollar business; it's surely on its way there.

Robotics and autonomous systems, autonomous machines, whether they move or not, AI systems at the physical edge, are surely going to be the next major computing segment. It is surely going to be the next major Data Center segment. We've been working in this area, as you know, for a decade, and we have a fair amount of expertise here; Orin is just one example of our work. We have four pillars to our strategy for autonomous systems, starting with the data processing and AI training part of it, to train robotics AIs.

Second, to simulate robotics AIs, which is Omniverse. Third, to [Technical Issues] the memory of the robotics AI, otherwise known as mapping. And then finally, the actual robotics application and the robotics processor in the system, which is where Orin goes. Orin is just one of the four pillars of our robotics strategy in the next wave of AI, and so I am really optimistic and really enthusiastic about the next phase of the computer industry's growth. I think a lot of it is going to be at the Edge, and a lot of it is going to be about robotics.

Operator

Thank you. I will now turn it back over to Jensen Huang for any additional closing remarks.

Jensen Huang — Founder, President and CEO

Thanks, everyone. The full impact and duration of the war in Ukraine and the COVID lockdowns in China are difficult to predict. However, the impact of our technology and our market opportunities remain unchanged. The effectiveness of deep learning AI continues to astound. The transformer model, which led to the natural language understanding breakthroughs, has been advanced to learn patterns with great spatial, sequential and temporal complexity. Researchers are creating transformer models that are revolutionizing applications from robotics to drug discovery. The effectiveness of deep learning AI is driving companies across industries to adopt NVIDIA for AI computing.

We're focused on four major initiatives. First, ramping our next generation of AI infrastructure chips and platforms: Hopper GPU, BlueField DPU, NVLink, Quantum InfiniBand and Spectrum Ethernet networking, all to help customers build their AI factories and take advantage of new AI breakthroughs like transformers. Second, ramping with our system and software industry partners to launch Grace, our first CPU. Third, ramping Orin, our new robotics processor, with nearly 40 customers building everything from cars, robo-taxis, trucks, delivery robots, logistics robots and farming robots to medical instruments. And fourth, with our software platforms, adding new value to our ecosystem with NVIDIA AI and NVIDIA Omniverse and expanding into new markets with new CUDA acceleration libraries. These initiatives will greatly advance AI while continuing to extend this most impactful technology of our time to scientists in every field and companies in every industry.

We look forward to updating you on our progress next quarter. Thank you.

Operator

[Operator Closing Remarks]
