Advanced Micro Devices Inc (NASDAQ: AMD) Q4 2025 Earnings Call dated Feb. 03, 2026
Corporate Participants:
Matthew D. Ramsay — Head, Investor Relations
Lisa Su — Chair and Chief Executive Officer
Jean Hu — Executive Vice President, Chief Financial Officer and Treasurer
Analysts:
Aaron Rakers — Analyst
Timothy Arcuri — Analyst
Vivek Arya — Analyst
C.J. Muse — Analyst
Joseph Moore — Analyst
Stacy Rasgon — Analyst
Joshua Buchalter — Analyst
Ben Reitzes — Analyst
Thomas O’Malley — Analyst
Ross Seymore — Analyst
Jim Schneider — Analyst
Presentation:
operator
Greetings, and welcome to the AMD fourth quarter and full year 2025 conference call. At this time, all participants are in a listen-only mode. A question-and-answer session will follow the formal presentation. If anyone should require operator assistance during the conference, please press star zero on your telephone keypad. Please note that this conference is being recorded. I will now turn the conference over to Matt Ramsay, VP of Financial Strategy and IR. Thank you. You may begin.
Matthew D. Ramsay — Head, Investor Relations
Thank you, and welcome to AMD’s fourth quarter and 2025 full year financial results conference call. By now you should have had the opportunity to review a copy of our earnings press release and accompanying slides. If you have not had the opportunity to review these materials, they can be found on the Investor Relations page of amd.com. Today, we will refer primarily to non-GAAP financial measures on the call. The full non-GAAP to GAAP reconciliations are available in today’s press release and in the slides posted on our website. Participants in today’s conference call are Dr. Lisa Su, our Chair and CEO, and Jean Hu, our Executive Vice President, CFO and Treasurer.
This is a live call and will be replayed via webcast on our website. Before we begin, I would like to note that Mark Papermaster, Executive Vice President and CTO, will present at Morgan Stanley’s TMT conference on Tuesday, March 3. Today’s discussion contains forward-looking statements based on our current beliefs, assumptions and expectations. These statements speak only as of today and involve risks and uncertainties that could cause actual results to differ materially from our current expectations. Please refer to the cautionary statement in our press release for more information on factors that could cause actual results to differ materially. With that, I will hand the call to Lisa.
Lisa Su — Chair and Chief Executive Officer
Thank you, Matt, and good afternoon to all those listening today. 2025 was a defining year for AMD, with record revenue, net income and free cash flow driven by broad-based demand for our high-performance computing and AI products. We ended the year with significant momentum, with every part of our business performing very well. We saw demand accelerate across the data center, PC, gaming and embedded markets, launched the broadest set of leadership products in our history, gained significant server and PC processor share, and rapidly scaled our data center AI business as Instinct and ROCm adoption increased with cloud, enterprise and AI customers.
Looking at our fourth quarter, revenue grew 34% year over year to $10.3 billion, led by record EPYC, Ryzen and Instinct processor sales. Net income increased 42% to a record $2.5 billion, and free cash flow nearly doubled year over year to a record $2.1 billion. For the full year, revenue grew 34% to $34.6 billion, and we added more than $7.6 billion of data center and client segment revenue. Turning to our fourth quarter segment results, data center segment revenue increased 39% year over year to a record $5.4 billion, led by accelerating Instinct MI350 Series GPU deployments and server share gains. In server, adoption of 5th Gen EPYC Turin CPUs accelerated in the quarter, accounting for more than half of total server revenue.
4th Gen EPYC sales were also robust, as our prior generation CPUs continued to deliver superior performance and TCO compared to competitive offerings across a wide range of workloads. As a result, we had record server CPU sales to both cloud and enterprise customers in the quarter and exited the year with record share in cloud. Hyperscaler demand was very strong as North American customers expanded deployments. EPYC-powered public cloud offerings grew significantly in the quarter, with AWS, Google and others launching more than 230 new AMD instances. Hyperscalers launched more than 500 AMD-based instances in 2025, increasing the number of EPYC cloud instances more than 50% year over year to nearly 1,600.
In the enterprise, we are seeing a meaningful shift in EPYC adoption driven by our leadership performance, expanded platform availability, broad software enablement and increased go-to-market programs. The leading server providers now offer more than 3,000 solutions powered by 4th and 5th Gen EPYC CPUs that are optimized for all major enterprise workloads. As a result, the number of large businesses deploying EPYC on-prem more than doubled in 2025, and we exited the year with record server sell-through. Looking ahead, server CPU demand remains very strong. Hyperscalers are expanding their infrastructure to meet growing demand for cloud services and AI, while enterprises are modernizing their data centers to ensure they have the right compute required to enable new AI workflows.
Against this backdrop, EPYC has become the processor of choice for the modern data center, delivering leadership performance, efficiency and TCO. Our next generation Venice CPU extends our leadership across each of these metrics. Customer pull for Venice is very high, with engagements underway to support large-scale cloud deployments and broad OEM platform availability when Venice launches later this year. Turning to our data center AI business, we delivered record Instinct GPU revenue in the fourth quarter, led by the ramp of MI350 Series shipments. We also had some revenue from MI308 sales to customers in China. Instinct adoption broadened in the quarter. Today, eight of the top 10 AI companies use Instinct to power production workloads across a growing range of use cases.
With the MI350 Series, we are entering the next phase of Instinct adoption, expanding our footprint with existing partners and adding new customers. In the fourth quarter, hyperscalers expanded MI350 Series availability, leading AI companies scaled their deployments to support additional workloads, and multiple neocloud providers launched MI350 Series offerings that deliver on-demand access to Instinct infrastructure in the cloud. Turning to our AI software stack, we expanded the ROCm ecosystem in the fourth quarter, enabling customers to deploy Instinct faster and with higher performance across a broader range of workloads. Millions of large language and multimodal models run out of the box on AMD, with the leading models launching with day-zero support for Instinct GPUs.
This capability highlights our rapidly expanding open source community enablement, including new upstream integration of AMD GPUs in vLLM, one of the most widely used inference engines. To drive Instinct adoption with industry-specific use cases, we are also adding support for domain-specific models in key verticals. As one example, in healthcare we added ROCm support for the leading medical imaging framework to enable developers to train and deploy highly performant deep learning models on Instinct GPUs. For large businesses, we introduced our Enterprise AI Suite, a full-stack software platform with enterprise-grade tools, inference microservices and solution blueprints designed to simplify and accelerate production deployments at scale.
We also announced a strategic partnership with Tata Consultancy Services to co-develop industry-specific AI solutions and help customers deploy AI across their operations. Looking ahead, customer engagements for our next-gen MI400 Series and Helios platform continue expanding. In addition to our multi-generation partnership with OpenAI to deploy 6 gigawatts of Instinct GPUs, we are in active discussions with other customers on at-scale, multi-year deployments starting with Helios and MI450 later this year. With the MI400 Series, we are also expanding our portfolio to address the full range of cloud, HPC and enterprise AI workloads.
This includes MI455X and Helios for AI superclusters, MI430X for HPC and sovereign AI, and MI440X servers for enterprise customers requiring leadership training and inference performance in a compact eight-GPU solution that integrates easily into existing infrastructure. Multiple OEMs publicly announced plans to launch Helios systems in 2026, with deep engineering engagements underway to support smooth production ramps. In December, HPE announced that they will offer Helios racks with purpose-built HPE Juniper Ethernet switches and optimized software for high-bandwidth scale-up networking, and in January Lenovo announced plans to offer Helios racks. MI430X adoption also grew in the quarter, with new exascale-class supercomputers announced by GENCI in France and HLRS in Germany.
Looking further ahead, development of our next generation MI500 Series is well underway. MI500 is powered by our CDNA 6 architecture, built on advanced 2-nanometer process technology, and features high-speed HBM4E memory. We are on track to launch MI500 in 2027 and expect it to deliver another major leap in AI performance to power the next wave of large-scale multimodal models. In summary, our AI business is accelerating, with the launch of the MI400 Series and Helios representing a major inflection point for the business as we deliver leadership performance and TCO at the chip, compute tray and rack level.
Based on the strength of our EPYC and Instinct roadmaps, we are well positioned to grow data center segment revenue by more than 60% annually over the next three to five years and scale our AI business to tens of billions in annual revenue in 2027. Turning to client and gaming, segment revenue increased 37% year over year to $3.9 billion. In client, our PC processor business performed exceptionally well. Revenue increased 34% year over year to a record $3.1 billion, driven by increased demand for multiple generations of Ryzen desktop and mobile CPUs. Desktop CPU sales set a record for the fourth consecutive quarter.
Ryzen CPUs topped the bestseller lists at major global retailers and e-tailers throughout the holiday period, with strong demand across all price points in every region driving record desktop channel sell-out. In mobile, strong demand for AMD-powered notebooks drove record Ryzen PC sell-through in the quarter. That momentum extended into commercial PCs, where Ryzen adoption accelerated as we established a new long-term growth engine for our client business. Sell-through of Ryzen CPUs for commercial notebooks and desktops grew by more than 40% year over year in the fourth quarter, and we closed large wins with major telecom, financial services, aerospace, automotive, energy and technology customers.
At CES, we expanded our Ryzen portfolio with CPUs that further extend our performance leadership. Our new Ryzen AI 400 mobile processors deliver significantly faster content creation and multitasking performance than the competition. Notebooks powered by Ryzen AI 400 are already available, with the broadest lineup of AMD-based consumer and commercial AI PCs set to launch throughout the year. We also introduced our Ryzen AI Halo platform, the world’s smallest AI development system, featuring our highest-end Ryzen AI Max processor with 128 gigabytes of unified memory that can run models with up to 200 billion parameters locally. In gaming, revenue increased 50% year over year to $843 million.
Semi-custom sales increased year over year and declined sequentially as expected. For 2026, we expect semi-custom SoC annual revenue to decline by a significant double-digit percentage as we enter the seventh year of what has been a very strong console cycle. From a product standpoint, Valve is on track to begin shipping its AMD-powered Steam Machine early this year, and development of Microsoft’s next-gen Xbox featuring an AMD semi-custom SoC is progressing well to support a launch in 2027. Gaming GPU revenue also increased year over year, with higher channel sell-out driven by demand throughout the holiday sales period for our latest generation Radeon RX 9000 Series GPUs.
We also launched FSR4 Redstone in the quarter, our most advanced AI-powered upscaling technology, delivering higher image quality and smoother frame rates for gamers. Turning to our embedded segment, revenue increased 3% year over year to $950 million, led by strength with test and measurement and aerospace customers and growing adoption of our embedded x86 CPUs. Channel sell-through accelerated in the quarter as end customer demand improved across several end markets, led by test and measurement and emulation. Design win momentum remains one of the clearest indicators of long-term growth for our embedded business, and we delivered a record year.
We closed $17 billion in design wins in 2025, up nearly 20% year over year, and we have now won more than $50 billion of embedded designs since acquiring Xilinx. We also strengthened our embedded portfolio in the quarter. We began production of our Versal AI Edge Gen 2 SoCs for low-latency inference workloads and started shipping our highest-end Spartan UltraScale+ devices for cost-optimized applications. We also launched new embedded CPUs, including our EPYC 2005 series for network security and industrial edge applications, Ryzen P100 series for in-vehicle infotainment and industrial systems, and Ryzen X100 series for physical AI and autonomous platforms.
In summary, 2025 was an excellent year for AMD, marking the start of a new growth trajectory for the company. We are entering a multi-year demand supercycle for high-performance and AI computing that is creating significant growth opportunities across each of our businesses. AMD is well positioned to capture that growth with highly differentiated products, a proven execution engine, deep customer partnerships and significant operational scale. And as AI reshapes the compute landscape, we have the breadth of solutions and partnerships required for end-to-end leadership, from Helios in the cloud for at-scale training and inference to an expanded Instinct portfolio for sovereign supercomputing and enterprise AI deployments.
At the same time, demand for EPYC CPUs is surging as agentic and emerging AI workloads require high-performance CPUs to power head nodes and run parallel tasks alongside GPUs. And at the edge and in PCs, where AI adoption is just beginning, our industry-leading Ryzen and embedded processors are powering real-time, on-device AI. As a result, we expect significant top-line and bottom-line growth in 2026, led by increased adoption of EPYC and Instinct, continued client share gains and a return to growth in our embedded segment.
Looking further ahead, we see a clear path to achieve the ambitious targets we laid out at our Financial Analyst Day last November, including growing revenue at a greater than 35% CAGR over the next three to five years, significantly expanding operating margins and generating annual EPS of more than $20 in the strategic time frame, driven by growth in all of our segments and the rapid scaling of our data center AI business. Now I’ll turn the call over to Jean to provide additional color on our fourth quarter and full year results. Jean?
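As a rough back-of-envelope illustration (a reader's arithmetic, not company guidance), compounding the reported $34.6 billion of 2025 revenue at the low end of the stated greater-than-35% CAGR target gives the following implied trajectory:

```python
# Illustrative only: compound the reported 2025 revenue base at the low end of
# the ">35% CAGR over the next three to five years" target referenced above.
# The resulting figures are a reader's arithmetic, not AMD guidance.

BASE_REVENUE_2025_B = 34.6   # reported full-year 2025 revenue, in billions of dollars
TARGET_CAGR = 0.35           # low end of the stated greater-than-35% target

for years_out in range(1, 6):
    implied_revenue = BASE_REVENUE_2025_B * (1 + TARGET_CAGR) ** years_out
    print(f"{2025 + years_out}: ~${implied_revenue:.0f}B implied at a 35% CAGR")
```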
Jean Hu — Executive Vice President, Chief Financial Officer and Treasurer
Thank you, Lisa, and good afternoon, everyone. I’ll start with a review of our financial results and then provide our current outlook for the first quarter of fiscal 2026. AMD executed very well in 2025, delivering record revenue of $34.6 billion, up 34% year over year, driven by 32% growth in our data center segment and 51% growth in our client and gaming segment. Gross margin was 52%, and we delivered record earnings per share of $4.17, up 26% year over year, while continuing to invest aggressively in AI and the data center to support our long-term growth.
For the fourth quarter of 2025, revenue was a record $10.3 billion, growing 34% year over year, driven by strong growth in the data center and client and gaming segments, including approximately $390 million in revenue from MI308 sales to China, which was not included in our fourth quarter guidance. Revenue was up 11% sequentially, primarily driven by continued strong growth in data center from both the server and data center AI businesses, as well as a return to year-over-year growth in the embedded segment. Gross margin was 57%, up 290 basis points year over year. We benefited from the release of $360 million of previously written-down MI308 inventory reserves.
Excluding the inventory reserve release and the MI308 revenue from China, gross margin would have been approximately 55%, up 80 basis points year over year, driven by favorable product mix. Operating expenses were $3 billion, an increase of 42% year over year, as we continue to invest in R&D and go-to-market activities to support our AI roadmap and long-term growth opportunities, as well as higher employee performance-based incentives. Operating income was a record $2.9 billion, representing a 28% operating margin. Tax, interest and other resulted in a net expense of approximately $335 million for the fourth quarter.
Diluted earnings per share was a record $1.53, an increase of 40% year over year, reflecting strong execution and operating leverage in our business model. Now turning to our reportable segments, starting with the data center segment. Revenue was a record $5.4 billion, up 39% year over year and 24% sequentially, driven by strong demand for EPYC processors and the continued ramp of MI350 products. Data center segment operating income was $1.8 billion, or 33% of revenue, compared to $1.2 billion, or 30%, a year ago, reflecting higher revenue and the inventory reserve release, partially offset by continued investment to support our AI hardware and software roadmaps.
Client and gaming segment revenue was $3.9 billion, up 37% year over year, driven primarily by strong demand for our leadership AMD Ryzen processors. On a sequential basis, revenue was down 3% due to lower semi-custom revenue. Client business revenue was a record $3.1 billion, up 34% year over year and 13% sequentially, led by strong demand from both the channel and PC OEMs and continued market share gains. Gaming business revenue was $843 million, up 50% year over year, primarily driven by higher semi-custom revenue and strong demand for AMD Radeon GPUs. Sequentially, gaming revenue was down 35% due to lower semi-custom sales.
Client and gaming segment operating income was $725 million, or 18% of revenue, compared to $496 million, or 17%, a year ago. Embedded segment revenue was $950 million, up 3% year over year and 11% sequentially, as demand strengthened across several end markets. Embedded segment operating income was $357 million, or 38% of revenue, compared to $362 million, or 39%, a year ago. Before I review the balance sheet and cash flow, as a reminder, we closed the sale of the ZT Systems manufacturing business to Sanmina in late October. The fourth quarter financial results of the ZT Systems manufacturing business are reported separately in our financial statements as discontinued operations and are excluded from our non-GAAP financials.
Turning to the balance sheet and cash flow. During the quarter, we generated a record $2.3 billion in cash from continuing operations and a record $2.1 billion in free cash flow. Inventory increased sequentially by approximately $607 million to $7.9 billion to support strong data center demand. At the end of the quarter, cash, cash equivalents and short-term investments were $10.6 billion. For the year, we repurchased 12.4 million shares and returned $1.3 billion to shareholders. We ended the year with $9.4 billion of authorization remaining under our share repurchase program. Now turning to our first quarter 2026 outlook. We expect revenue to be approximately $9.8 billion, plus or minus $300 million, including approximately $100 million of MI308 sales to China.
At the midpoint of our guidance, revenue is expected to be up 32% year over year, driven by strong growth in our data center and client and gaming segments and modest growth in our embedded segment. Sequentially, we expect revenue to be down approximately 5%, driven by seasonal declines in our client, gaming and embedded segments, partially offset by growth in our data center segment. In addition, we expect first quarter non-GAAP gross margin to be approximately 55%, non-GAAP operating expenses to be approximately $3.05 billion, non-GAAP other net income to be approximately $35 million, the non-GAAP effective tax rate to be 13%, and the diluted share count to be approximately 1.65 billion shares.
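For those following the arithmetic, a minimal back-of-envelope check of that outlook, using only the figures stated on the call (approximate and for illustration only):

```python
# Back-of-envelope check of the Q1 2026 outlook using only figures stated on the call.
Q4_2025_REVENUE_B = 10.3        # reported Q4 2025 revenue, in billions of dollars
Q1_2026_GUIDE_MID_B = 9.8       # guidance midpoint, plus or minus $0.3 billion
YOY_GROWTH_AT_MIDPOINT = 0.32   # stated year-over-year growth at the midpoint

sequential_change = Q1_2026_GUIDE_MID_B / Q4_2025_REVENUE_B - 1
implied_q1_2025_base_b = Q1_2026_GUIDE_MID_B / (1 + YOY_GROWTH_AT_MIDPOINT)

print(f"Implied sequential change: {sequential_change:.1%}")             # ~ -4.9%, i.e. "down approximately 5%"
print(f"Implied Q1 2025 revenue base: ~${implied_q1_2025_base_b:.1f}B")  # ~ $7.4B
```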
In closing, 2025 was an outstanding year for AMD, reflecting disciplined execution across the business to deliver strong revenue growth, increased profitability and cash generation, while investing aggressively in AI and innovation to support our long-term growth strategy. Looking ahead, we are very well positioned for continued strong top-line revenue growth and earnings expansion in 2026, with a focus on driving data center AI growth, operating leverage and long-term value for shareholders. With that, I’ll turn it back to Matt for the Q&A session.
Matthew D. Ramsay — Head, Investor Relations
Thank you very much, Jean. Operator, please go ahead and open the Q&A session. Thank you.
Questions and Answers:
operator
Thank you, Matt. We will now be conducting the question-and-answer session. If you would like to ask a question, please press star one on your telephone keypad. A confirmation tone will indicate that your line is in the queue. You may press star two to remove yourself from the queue. For participants using speaker equipment, it may be necessary to pick up your handset before pressing the star keys. One moment, please, while we poll for questions. And the first question comes from the line of Aaron Rakers with Wells Fargo. Please proceed with your question.
Aaron Rakers
Yeah, thanks for taking the question, Lisa. At your analyst day back in November, you seemed to endorse the high-$20 billion AI revenue expectation that was out there on the Street for 2027. I know today you’re reaffirming the path to strong double-digit growth. So I guess my question is, can you talk a little bit about what you’ve seen as far as customer engagements and how those might have expanded? I think you’ve alluded in the past to multiple multi-gigawatt opportunities. Just any double-click on what you’ve seen for the MI455 and Helios platform from a demand-shaping perspective as we look into the back half of the year.
Lisa Su
Yeah, sure, Aaron, thanks for the question. So first of all, I think the MI450 Series development is going extremely well, so we’re very happy with the progress that we have. We’re right on track for a second half launch and the beginning of production. And as it relates to the shape of the ramp and the customer engagements, I would say the customer engagements continue to proceed very well. We have obviously a very strong relationship with OpenAI, and we’re planning that ramp starting in the second half of the year going into 2027. That is on track.
We’re also working closely with a number of other customers who are very interested in ramping MI450 quickly, just given the strength of the product. And we see that across both inference and training and that is the opportunity that we see in front of us. So we feel very good about sort of the data center growth overall for us in 2026 and then certainly going into 2027. You know, we’ve talked about tens of billions of dollars of data center AI revenue and we feel very good about that.
operator
Thank you. The next question comes from the line of Tim Arcuri with UBS. Please proceed with your question.
Timothy Arcuri
Thanks a lot. Jean, I’m wondering if you can maybe give us a little bit of detail under the hood for the March guidance. I know you basically told us that embedded is going to be up a bit year over year. Client sounds like it’s down seasonally, which I take to be maybe down 10%. So can you give us a sense maybe of the other pieces? And then also, can you give us a sense of how data center GPU is going to ramp through the year? I know it’s back-half weighted, but I think people are thinking, Lisa, somewhere in the $14 billion range this year; that’s what investors are thinking. I’m not asking you to endorse that, but if you could give us a little flavor for how the ramp will look through the year, that’d be great.
Jean Hu
Hi, Tim, thanks for your question. We’re guiding one quarter at a time, but I can give you some color about our Q1 guide. First, sequentially, we guided a decline of around 5%, but data center is actually going to be up. And when you think about this, our CPU business is seasonal; in a regular seasonal pattern, it would be down high single digits, and in our current guide we actually guide CPU revenue up sequentially very nicely. On the data center GPU side, we also feel really good; GPU revenue, including China, will also be up. So, a very nice guide for data center overall. On the client side, we do see a seasonal sequential decline, and embedded and gaming also have a seasonal decline.
Lisa Su
And maybe, Tim, if I just give you a little bit on the full year commentary. I think the important thing as we look at the full year is that we’re very bullish on the year. If you look at the key themes, we’re seeing very strong growth in the data center, and that’s across two growth vectors. We see server CPU growth actually very strong. I mean, we’ve talked about the fact that CPUs are very important as AI continues to ramp, and we’ve seen the CPU order book continue to strengthen over the last few quarters and especially over the last 60 days.
So we see that as a strong growth driver for us. As Jean said, we see server CPU growing from Q4 into Q1 in what is normally seasonally down, and that continues throughout the year. And then on the data center AI side, it’s a very important year for us. It’s really an inflection point. MI355 has done well, and we were pleased with the performance in Q4, and we continue to ramp that in the first half of the year. But as we get into the second half of the year, the MI450 is really an inflection point for us. That revenue will start in the third quarter, and it will ramp to significant volume in the fourth quarter as we go into 2027. So that gives you a little bit of what the data center ramp looks like throughout the year.
Timothy Arcuri
Thank you, Lisa.
operator
And the next question comes from the line of Vivek Arya with Bank of America. Please proceed.
Vivek Arya
Thank you. First, just a clarification on what you’re assuming for your China MI308 sales beyond Q1. And then, Lisa, specific to 2026, can your data center revenue grow at your target 60%-plus growth rate? I realize that’s a multi-year target, but do you think there are enough drivers, whether on the server CPU side or the GPU side, for you to grow at that target pace even in 2026? Thank you.
Lisa Su
Yeah, sure, Vivek. So let me talk a little bit about China first, because I think it’s important to make sure that’s clear. Look, we were pleased to have some MI308 sales in the fourth quarter. They were under a license that was approved through work with the administration, and those orders were actually from very early in 2025. And so we saw some revenue in Q4, and we’re forecasting about $100 million of revenue in Q1. We are not forecasting any additional revenue from China just because it’s a very dynamic situation. Given that, we’re still waiting; we’ve submitted licenses for the MI325, and we’re continuing to work with customers to understand their demand.
We thought it prudent not to forecast any additional revenue other than the $100 million that we called out in the Q1 guide. Now, as it relates to overall data center, as I mentioned in the answer to Tim, we’re very bullish about data center. I think about the combination of drivers that we have across our CPU franchise: the EPYC product line, both Turin and Genoa, continues to ramp well, and in the second half of the year we will be launching Venice, which we believe actually extends our leadership. And then there is the MI450 ramp, which is also very significant in the second half of 2026. We’re obviously not guiding specifically by segment, but the long-term target of, call it, greater than 60% is certainly possible in 2026.
Vivek Arya
Thank you, Lisa.
operator
Thank you. And as a reminder, if you’d like to ask a question, please press star one. We ask that you limit yourself to one question and one follow-up. Thank you. The next question comes from the line of C.J. Muse with Cantor Fitzgerald. Please proceed.
C.J. Muse
Yeah, good afternoon. Thanks for taking the question. I’m curious, on the server CPU side, given the dramatic tightness, about your ability to source incremental capacity from TSMC and elsewhere, how long it will take for that to show up as wafers out, and how we should think about the implications for the growth trajectory throughout all of calendar ’26. And as part of that, if you could speak to how we should be thinking about the inflection in pricing as well, that would be very helpful.
Lisa Su
Sure, C.J. So a couple of points about the server CPU market. First of all, we think the overall server CPU TAM is going to grow, let’s call it, strong double digits in 2026, just given, as we said, the relationship between CPU demand and the overall AI ramp. So I think that’s a positive. Relative to our ability to support that, we’ve been seeing this trend for the last couple of quarters, so we have increased our supply capability for server CPUs, and that’s one of the reasons we’re able to increase our Q1 guide as it relates to the server business.
And we see the ability to continue to grow that throughout the year. There’s no question that the demand continues to be strong, and so we’re working with our supply chain partners to increase supply as well. But from what we see today, I think the overall server situation is strong and we are increasing supply to address that.
operator
C.J., do you have a follow-up question?
C.J. Muse
I do, maybe for Jean. If you could touch on gross margins through the year, and as you balance strengthening server CPU with perhaps greater GPU acceleration in the second half, is there a framework that we should be working off of? Thanks so much.
Jean Hu
Yeah, thank you for the question. We are very pleased with our Q4 gross margin performance and the Q1 guide at 55%, which is actually 130 basis points up year over year while we continue to ramp our MI355 very significantly. Secondly, I think we are benefiting from a very favorable product mix across all our businesses. If you think about data center, we’re ramping our new generation products, Turin and the MI355, which helps the gross margin. In client, we continue to move up the stack and also gain momentum in our commercial business.
Our client business gross margin has been improving nicely. In addition, certainly we see the recovery of our embedded business, which is also margin accretive. So all those tailwinds we are seeing, we continue to see in the next few quarters. And when MI450 ramps, of course, in Q4, our gross margin will be driven largely by mix, and I think we’ll give you more color when we get there. But overall, we feel really good about our gross margin progression this year.
operator
Thank you. The next question comes from the line of Joe Moore with Morgan Stanley. Please proceed.
Joseph Moore
Great, thank you. On the MI455 ramp, will 100% of the business be racks, or will there be kind of an eight-way server business around that architecture? And then, is the revenue recognition when you ship to the rack vendor, or is there something to understand about that? Thank you.
Lisa Su
Yes, Joe. So we do have multiple variants of the MI450 Series, including an eight-way GPU form factor. But for 2026, I would say the vast majority of it is going to be rack-scale solutions. And yes, we will take revenue when we ship to the rack builder.
Joseph Moore
Okay, great. And then can you talk to any risks that you may have in terms of, once you get silicon out, turning that into racks, any potential issues as you ramp that? I know your competitor had some last year and you said you learned from that. Is there anything you’ve done with pre-building racks to ensure you won’t have those issues? Just any risk that we need to understand around that?
Lisa Su
Yeah, I mean, I think, Joe, the main thing is the development’s going really well. We’re right on track with the MI450 Series as well as the Helios rack development. We’ve done a lot of testing already, both at the rack-scale level as well as at the silicon level. So far, so good. We are getting, let’s call it, a lot of input from our customers on things to test, so that we can do a lot of testing in parallel. And our expectation is that we will be on track for our second half launch.
operator
Thank you. Our next question comes from the line of Stacy Rasgon with Bernstein Research. Please proceed.
Stacy Rasgon
Hi, guys. Thanks for taking my questions. First one, Lisa, I just wanted to ask about opex. Every quarter you guys are guiding it up, and then it’s coming in even higher, and then you’re guiding it up again. And I understand, given the growth trajectory, that you need to invest, but how should we think about the ramp of that opex and that spending number, especially as the GPU revenue starts to inflect? Do we get leverage on that, or should we be expecting the opex to be growing even more materially as the AI revenue starts to ramp?
Lisa Su
Yeah, sure, Stacy, thanks for the question. Look, I think in terms of opex, we’re at a point where we have very high conviction in the roadmap that we have. And so in 2025, as the revenue increased, we did lean in on opex, and I think it was for all the right reasons. As we get into 2026 and as we see some of the significant growth that we’re expecting, we should absolutely see leverage. And the way to think about it is, we’ve always said in our long-term model that opex should grow slower than revenue.
And we would expect that in 2026 as well, especially as we get into the second half of the year and we see the inflection in the revenue. But at this point, if you look at our free cash flow generation and the overall revenue growth, I think the investment in opex is absolutely the right thing to do.
Stacy Rasgon
Thank you. For my follow-up, I actually have two sort of one-line answers I’m looking for. First, the $100 million in China revenue in Q1, does that also drop through at a zero cost basis like we had in Q4, and is that a margin headwind? And number two, I know you don’t give us the AI number, but could you just give us the annual 2025 Instinct number now that we’re through the year? Like, how big was it?
Jean Hu
So, Stacy, let me answer your first question on the $100 million of revenue in Q1. Actually, the inventory reserve we reversed in Q4, which was $360 million, is not only associated with the Q4 China revenue but also covers the $100 million of MI308 revenue we expect to ship to China in Q1. So the Q1 gross margin guide is a very clean guide.
Lisa Su
And Stacy, for your second question, as you know, we don’t guide at the business level, but to help you with your models: if you look at the Q4 data center AI number, even if you were to back out the China number, which was, let’s call it, not a recurring number, you would still see growth from Q3 to Q4. So that should help you a little bit with your modeling.
operator
Thank you. And the next question comes from the line of Joshua Buchalter with TD Cowen. Please proceed.
Joshua Buchalter
Hey, guys, thanks for taking my question. I wanted to ask about client. The segment beat pretty handily in the fourth quarter, and I recognize you guys have been gaining share with Ryzen. But given what we’ve been seeing in the memory market, there’s a lot of concern about inflationary costs and the potential for pull-ins. Were there any changes in your order patterns during the quarter? And maybe bigger picture, how are you thinking about client growth and the health of that market into 2026?
Lisa Su
Yeah, thanks for the question, Josh. The client market has performed extremely well for us throughout 2025, with very strong growth both in terms of ASPs, as we mix up the stack, as well as unit growth. Going into 2026, we are certainly watching the development of the business. I think the PC market is an important market. Based on everything that we see today, we’re probably seeing the PC TAM down a bit, just given some of the inflationary pressures from commodity pricing, including memory.
The way we are modeling the year is, let’s call it, second half a bit sub-seasonal to the first half, just given everything that we see. Even in that environment, with the PC market down, we believe we can grow our PC business. Our focus areas are enterprise, where we made very nice progress in 2025 and expect that to continue into 2026, and just continuing to grow at the premium, higher end of the market.
Joshua Buchalter
Thank you for the color there. Then I wanted to ask about the Instinct family. We’ve seen your big GPU competitor make a deal with an SRAM-based spatial architecture provider, and OpenAI has reportedly been linked to one as well. Could you speak to the competitive implications of that? You’ve done well in inferencing, I think partly because of your leadership in HBM content, so I was wondering if you could maybe address the pull seemingly motivated by lower-latency inference and how Instinct is positioned to service this, if you’re indeed seeing it as well.
Thank you.
Lisa Su
Yeah, I think, Josh, it’s really the evolution that you might expect as the AI market matures. What we’re seeing is that as inference ramps, the tokens per dollar, or the efficiency of the inference stack, becomes more and more important. As you know, with our chiplet architecture we have a lot of ability to optimize across inference and training, and even across the different stages of inference as well. So I view it very much as: as you go into the future, you will see more workload-optimized products, and you can do that with GPUs as well as with other, more ASIC-like architectures. I think we have the full compute stack to do all of those things. And from that standpoint, we’re going to continue to lean into inference, as we view that as a significant opportunity for us, in addition to ramping our training capabilities.
operator
Thank you. And the next question comes from the line of Ben Reitzes with Melius Research. Please proceed.
Ben Reitzes
Yeah, hey, thanks, appreciate it. Hey, Lisa, I wanted to ask you about OpenAI. I’m sure a lot of the volatility out there is not lost on you. Is everything on track for the second half for starting the 6 gigawatts and the three-and-a-half-year timeline, as far as you know? And is there any other color that you’d like to give on that relationship? And then I have a follow-up. Thank you.
Lisa Su
Yeah, I mean, I think, Ben, what I would say is we’re very much working in partnership with OpenAI as well as our CSP partners to deliver on the MI450 Series and deliver on the ramp. The ramp is on schedule to start in the second half of the year. The MI450 is doing great, Helios is doing well, and we are in, let’s call it, deep co-development across all of those parties. And as we look forward, I think we are optimistic about the MI450 ramp for OpenAI. But I also want to remind everyone that we have a broad set of customers that are very excited about the MI450 Series, and so in addition to the work that we’re doing with OpenAI, there are a number of customers that we’re working to ramp in that timeframe as well.
Ben Reitzes
All right, I appreciate that. And I wanted to shift to the server CPU and just talk about x86 versus ARM. There’s some view out there that x86 has a particular edge in agents. Big picture, do you agree with that, and what are you seeing from customers? In particular, obviously your big competitor is going to be selling an ARM CPU separately now in the second half. So if there’s anything on that competitive dynamic versus ARM and what Nvidia is doing, and your views on that, that’d be great to hear.
Lisa Su
Thanks, Ben. What I would say about the CPU market is there is a great need for high-performance CPUs right now, and that goes towards agentic workloads: when you have these AI processes or AI agents spinning off a lot of work in an enterprise, that work is actually going to a lot of traditional CPU tasks, and the vast majority of those are on x86 today. I think the beauty of EPYC is that we’ve done workload optimization, so we have the best cloud processor out there, we have the best enterprise processor, and we also have some lower-cost variants for storage and other elements.
And I think all of that comes into play as we think about the entirety of the AI infrastructure that needs to be put in place. I think CPUs are going to continue to be important as a piece of the AI infrastructure ramp, and that’s one of the things that we mentioned at our analyst day back in November, really this multi-year CPU cycle. And we continue to see that. I think we’ve optimized EPYC to satisfy all of those workloads, and we’re going to continue to work with our customers to expand our EPYC footprint.
operator
And the next question comes from the line of Tom O’Malley with Barclays. Please proceed.
Thomas O’Malley
Hey, Lisa, how are you? I just wanted to ask, you mentioned memory earlier as a sticking point in terms of inflationary costs. Different customers do this in different ways, and different suppliers do this in different ways, but can you maybe talk about your procurement of memory and when that takes place, particularly on the HBM side? Is that something that gets done a year in advance, six months in advance? Different accelerator guys have talked about different timelines, so we’d be curious to hear when you do the procurement.
Lisa Su
Yeah, I mean, given the lead times for things like HBM and wafers and these parts of the supply chain, we’re working closely with our suppliers over a multi-year timeframe in terms of what we see in demand, how we ramp, and how we ensure that our development is very closely tied together. So I feel very good about our supply chain capabilities. We have been planning for this ramp. Independent of the current market conditions, we’ve been planning for a significant ramp in both our CPU and our GPU businesses over the past couple of years. And so from that standpoint, I think we’re well positioned to grow substantially in 2026, and we are also doing multi-year agreements that extend beyond that, given the tightness of the supply chain.
Thomas O’Malley
Thanks. Just as a follow-up, you have seen a variety of different things in the industry in terms of system accelerators: KV cache offload, more discrete ASIC-style compute, CPX, if you look at what your competitors are doing. When you look at your first generation of system architecture coming out, maybe spend some time on whether you see yourself following in the footsteps of some of these different types of architectural changes, or do you think that you will go in a different direction? Anything on the evolution of your system-based architecture and the adjoining products and/or silicon within. Thank you.
Lisa Su
I think, Tom, what we have is the ability, with a very flexible chiplet architecture and a flexible platform architecture, to really have different system solutions for the different requirements. I think we’re very cognizant that there will be different solutions. I’ve often said there’s no one-size-fits-all, and I’ll say that again: there’s no one-size-fits-all. But that being the case, it’s clear that the rack-scale architecture is very, very good for the highest-end applications when you’re talking about inference, distributed inference and training. But we also see an opportunity with enterprise AI to use some of these other form factors, and so we’re investing across that spectrum.
operator
And the next question comes from the line of Ross Seymour with Deutsche Bank. Please proceed.
Ross Seymore
Hi, thanks for letting me ask a couple of questions. I guess my first question is back on the gross margin side of things. As you go from the MI300 to the MI400 to the MI500 eventually, do you see any changes in gross margin through that progression? In the past you’ve talked about optimizing dollars more so than percentages, but just on the percentage side, does it go up, down, or is there volatility as you go from one to the next for any reason? Just wondering about the trajectory there.
Jean Hu
Ross, thank you for the question. At a very high level, with each generation we actually provide much more capability and more memory to help our customers, so in general the gross margin should progress each generation when you offer more capabilities to your customers. But typically, at the beginning of a generation’s ramp, it tends to be lower; when you get to scale, with yield improvement, test improvement and overall performance improvement, you will see gross margin improving within each generation. So it’s a dynamic gross margin, but in the longer term you should expect each generation to have a higher gross margin.
Ross Seymore
Thanks for that, Jean. And then one small segment of your business that seems quite volatile, and that you talked about a little further out than you usually do, is the gaming side of things. What is the magnitude of the decline you’re talking about this year? Because in 2025 you thought it was going to be flat and it ended up growing 50%, which was a nice positive surprise. But now you’re talking about this year being down and then the next-gen Xbox ramping in 2027, so I just hoped to get some color on what you see as the annual trajectory there.
Jean Hu
Yeah, and Lisa can add to this. So 2026 is actually the seventh year of the current product cycle, and typically when you’re at this stage of the cycle, revenue tends to come down. We do expect semi-custom revenue to come down by a significant double-digit percentage for 2026, as Lisa mentioned in her prepared remarks. As for the next generation...
Lisa Su
Yeah, I think we’ll certainly talk about that going forward. But as we ramp the new generation, you would expect a reversal of that.
Matthew D. Ramsay
Operator, I think we have time for one more caller, please. Thank you.
operator
Our final question comes from the line of Jim Schneider with Goldman Sachs. Please proceed.
Jim Schneider
Good afternoon. Thanks for taking my question. Relative to the ramp of your rack-level systems, would you expect any kind of supply constraints as you ramp in the second half of the year that could potentially impact or limit the revenue growth? In other words, maybe talk about whether you expect supply to mute the sequential growth from Q3 to Q4.
Lisa Su
Yeah. Jim, we are planning this at every component level. So relative to our data center AI ramp, I do not believe that we will be supply limited in terms of the ramp that we have put in place. I think we have an aggressive ramp, and I think it’s a very doable ramp. And as we think about the size and scale of AMD, clearly our priority is ensuring that the data center ramps go very well, and that’s both on the data center AI, or GPU, side as well as on the CPU side.
Jim Schneider
Thank you. Maybe as a follow-up to the earlier question on opex, could you maybe address what were some of the largest investment areas you made in 2025, and then what are the largest incremental opex investment areas for 2026? Thank you.
Jean Hu
Yeah, Jim. On the 2025 investment, the priority and the largest investment was in data center AI and our hardware roadmap; we accelerated that roadmap. We expanded our software capabilities. We also acquired ZT Systems, which added significant system-level solutions and capabilities. Those were the primary investments in 2025. We also invested heavily in go-to-market, to really expand our go-to-market capabilities to support revenue growth and also expand our commercial and enterprise business for our CPU franchise. In 2026, you should expect us to continue to invest aggressively. But as Lisa mentioned earlier, we do expect revenue to grow faster than operating expenses to drive earnings per share expansion.
Matthew D. Ramsay
Thank you, everybody, for participating on the call. Operator, I think we can go ahead and close the call now. Thank you, and good evening.
operator
Thank you, ladies and gentlemen. That does conclude the question and answer session. And that also concludes today’s teleconference. You may disconnect your lines at this time and have a great rest of the day.