Cadence Design Systems, Inc (CDNS) Q4 2025 Earnings Call Transcript

Cadence Design Systems, Inc (NASDAQ: CDNS) Q4 2025 Earnings Call dated Feb. 17, 2026

Corporate Participants:

Richard Gu, Vice President, Investor Relations

Anirudh Devgan, President & Chief Executive Officer

John Wall, Senior Vice President & Chief Financial Officer

Analysts:

Vivek Arya, Analyst

Joe Vruwink, Analyst

Joe Quatrochi, Analyst

Jim Schneider, Analyst

Gary Mobley, Analyst

Charles Shi, Analyst

Siti Panigrahi, Analyst

Lee Simpson, Analyst

Jason Celino, Analyst

Jay Vleeschhouwer, Analyst

Gianmarco Conti, Analyst

Ruben Roy, Analyst

Andrew DeGasperi, Analyst

Kelsey Chia, Analyst

Joshua Tilton, Analyst

Nay Soe Naing, Analyst

Presentation:

Operator

Ladies and gentlemen, good afternoon. My name is Abby, and I’ll be your conference operator today. At this time, I would like to welcome everyone to the Cadence Fourth Quarter and Fiscal Year 2025 Earnings Conference Call. [Operator Instructions] Thank you.

And I will now turn the call over to Richard Gu, Vice President of Investor Relations for Cadence. Please go ahead.

Richard Gu, Vice President, Investor Relations

Thank you, operator. I would like to welcome everyone to our fourth quarter of 2025 earnings conference call.

I’m joined today by Anirudh Devgan, President and Chief Executive Officer; and John Wall, Senior Vice President and Chief Financial Officer. The webcast of this call and a copy of today’s prepared remarks will be available on our website, cadence.com.

Today’s discussion will contain forward-looking statements, including our outlook on future business and operating results. Due to risks and uncertainties, actual results may differ materially from those projected or implied in today’s discussion. For information on factors that could cause actual results to differ, please refer to our SEC filings, including our most recent Forms 10-K and 10-Q, CFO commentary and today’s earnings release. All forward-looking statements during this call are based on estimates and information available to us as of today, and we disclaim any obligation to update them.

In addition, all financial measures discussed on this call are non-GAAP, unless otherwise specified. The non-GAAP measures should not be considered in isolation from or as a substitute for GAAP results. Reconciliations of GAAP to non-GAAP measures are included in today’s earnings release.

For the Q&A session today, we would ask that you observe a limit of one question only. If time permits, you can requeue with additional questions.

Now I’ll turn the call over to Anirudh.

Anirudh Devgan, President & Chief Executive Officer

Thank you, Richard. Good afternoon, everyone, and thank you for joining us today.

I’m pleased to report that Cadence delivered excellent results for the fourth quarter, closing an outstanding 2025 with 14% revenue growth and 45% operating margin for the year. We finished 2025 with a record backlog of $7.8 billion, well ahead of plan, reflecting broad-based portfolio strength and increasing contributions from our AI solutions. I would like to emphasize the essential nature of Cadence’s engineering software. As I have stated previously, our platform is best viewed as a 3-layer cake framework: accelerated compute being the base layer, principled simulation and optimization as the critical middle layer and AI as the top layer to drive intelligent exploration and generation. This holistic approach ensures that our AI solutions are not just fast, but physically accurate and grounded in scientific truth.

Building on this foundation, we are deploying Agentic AI workflows powered by intelligent agents that autonomously call our underlying tools. AI flows act as a force multiplier, enabling our customers to significantly expand design exploration and accelerate time to market, while driving increased product usage and deeper engagement across our entire platform. We see growing momentum on both AI for design and design for AI fronts. On AI for design, our Cadence AI portfolio continues to gain traction with market-shaping customers. Last week, we launched ChipStack AI Super Agent, the world’s first Agentic AI solution for automating chip design and verification. It is built upon our proven physically accurate product and provides up to 10x productivity improvement for various tasks, including design coding, generating test benches and debugging.

ChipStack has received compelling endorsements from Qualcomm, NVIDIA, Altera and Tenstorrent, among others. Our other AI products such as Cadence Cerebrus, Verisium and Allegro X AI are proliferating at scale. And our LLM-based design agents powered by JedAI data platform are delivering impressive results. On design for AI, the infrastructure AI phase is in full swing with AI architectures growing in scale and complexity. Customers are increasingly standardizing on Cadence’s full flows to address their performance, power and time-to-market challenges.

We continue to closely collaborate with market leaders on their next-generation AI designs spanning training, inference and scaling. We deepened our long-standing partnership with Broadcom through a strategic collaboration to develop pioneering Agentic AI workflows to help design Broadcom’s next-generation products. We also expanded our footprint at multiple marquee hyperscalers across our EDA, hardware, IP and system software solutions. And we are particularly excited by the emerging physical AI opportunity, and our broad-based portfolio uniquely positions us to enable autonomous driving and robotic companies to address multi-modal silicon and system challenges.

In addition, we are increasingly applying AI internally to improve efficiency across engineering, go-to-market and operations. In 2025, we also furthered our partnerships with leading foundries. We expanded our collaboration with TSMC to power next-gen AI flows on TSMC’s N2 and A16 technologies. We strengthened our engagement with Intel Foundry by officially joining the Intel Foundry Accelerator Design Services Alliance. Rapidus made a wide-ranging commitment to our core EDA software portfolio across digital, custom analog and verification solutions. And Samsung Foundry expanded its collaboration with Cadence, leveraging our AI-driven design solutions and IP solutions.

Now turning to product highlights for Q4 and 2025. Accelerating compute demand, driven by the AI infrastructure build-out and demanding next-generation data center requirements, continues to create significant opportunities for our core EDA portfolio. Our core EDA business delivered strong performance, with revenue growing 13% in 2025. Our recurring software business reaccelerated to double-digit growth in Q4, a testament to the strength and durability of our model. Our hardware business delivered another record year with over 30 new customers and substantially higher repeat demand from AI companies and hyperscalers. 7 out of the top 10 customers in 2025 were Dynamic Duo customers, underscoring the differentiated value provided by our hardware systems. With a strong backlog entering 2026, we expect this year to be yet another record year for hardware.

Our digital portfolio delivered a strong year, driven by continued proliferation of our full flow solutions as we added 25 new digital full flow logos in 2025. We expanded our footprint at a top hyperscaler, growing our AI-driven synthesis and implementation solutions, including our 3D-IC platforms. A marquee hyperscaler embraced the Cadence digital full flow for its first full customer-owned tooling AI chip tape-out. Broad proliferation of Cadence Cerebrus continues and adoption of our Cadence Cerebrus AI Studio is accelerating. Recently, Samsung US used it to tape out an SF2 design, achieving 4x productivity improvement. In custom and analog, our Spectre circuit simulator saw significant growth at leading AI and memory companies.

Our flagship Virtuoso Studio, the industry standard for custom and mixed-signal design, saw continued traction in AI-driven design migration across its vast installed base. A top multi-national electronics and EV customer reported a 30% layout efficiency gain using our AI-driven design migration. Our IP business saw strong momentum with revenue growing nearly 25% in 2025, reflecting both the strength of our expanding IP portfolio and the critical role our STAR IP solutions play in the AI, HPC and automotive verticals. We achieved both significant expansions and meaningful competitive wins at marquee customers, demonstrating the superior performance and capabilities of our IP solutions across HBM, UCIe, PCIe, DDR and SerDes titles. We are seeing particularly strong adoption of our industry-leading memory IP solutions, including our groundbreaking LPDDR6 memory IP, which is enabling customers to achieve the memory performance and efficiency required for next-generation AI workloads.

In Q4, we launched our Tensilica HiFi IQ DSP, offering up to 8x higher AI performance and more than 25% energy savings for automotive infotainment, smartphone and home entertainment markets. Our System Design and Analysis business delivered 13% revenue growth in 2025. Earlier in the year, we introduced the new Millennium M2000 AI supercomputer featuring NVIDIA Blackwell, which is ramping nicely and with growing customer interest across multiple end markets. Our 3D-IC platform has become a key enabler for the industry’s transition to multi-chip architectures, which are increasingly critical for next-generation AI infrastructure, HPC and advanced mobile applications.

Adoption of our AI-driven Allegro X platform is accelerating. Earlier, in Q3, Infineon standardized on Allegro X, and in Q4, STMicroelectronics decided to adopt our Allegro X solution to design printed circuit boards. Our Cadence Reality data center digital twin solution continued its strong momentum and was deployed at several leading hyperscalers and marquee AI companies. BETA CAE continues to unlock tremendous opportunities, particularly in the automotive segment. With our previously announced acquisition of Hexagon’s D&E business, we’ll be poised to accelerate our strategy around physical AI, including in autonomous vehicles and robotics.

In closing, I’m pleased with our strong performance in 2025, and I’m excited about the strong momentum across our business. As the AI era continues to accelerate, our AI-driven EDA, SDA and IP portfolio, powered by new AI agents and accelerated computing positions Cadence extremely well to capture these massive opportunities.

Now I will turn it over to John to provide more details on the Q4 results and our 2026 outlook.

John Wall, Senior Vice President & Chief Financial Officer

Thanks, Anirudh, and good afternoon, everyone.

I’m pleased to report that Cadence delivered an excellent finish to 2025 with broad-based momentum across all our businesses. Robust design activity and strong customer demand drove 14% revenue growth and 20% EPS growth for the year. Productivity improvement across the company helped us achieve an operating margin of 44.6% for the year. Fourth quarter bookings were exceptionally strong, and we began 2026 with a record backlog of $7.8 billion.

Here are some of the financial highlights from the fourth quarter and the year, starting with the P&L. Total revenue was $1.440 billion for the quarter and $5.297 billion for the year. GAAP operating margin was 32.2% for the quarter and 28.2% for the year. Non-GAAP operating margin was 45.8% for the quarter and 44.6% for the year. GAAP EPS was $1.42 for the quarter and $4.06 for the year. Non-GAAP EPS was $1.99 for the quarter and $7.14 for the year.

Next, turning to the balance sheet and cash flow. Our cash balance was $3.01 billion at year-end, while the principal value of debt outstanding was $2.5 billion. Operating cash flow was $553 million in the fourth quarter and $1.729 billion for the full year. DSOs were 64 days, and we used $925 million to repurchase Cadence shares during the year.

Before I provide our outlook for 2026, I’d like to share that it contains our usual assumption that export control regulations that exist today remain substantially similar for the remainder of the year. And our current 2026 outlook does not include our pending acquisition of Hexagon’s design and engineering business.

For our outlook for 2026, we expect revenue in the range of $5.9 billion to $6 billion; GAAP operating margin in the range of 31.75% to 32.75%; non-GAAP operating margin in the range of 44.75% to 45.75%; GAAP EPS in the range of $4.95 to $5.05; non-GAAP EPS in the range of $8.05 to $8.15; and operating cash flow of approximately $2 billion. We expect to use approximately 50% of our free cash flow to repurchase Cadence shares in 2026.

For Q1, we expect revenue in the range of $1.420 billion to $1.460 billion; GAAP operating margin in the range of 30% to 31%; non-GAAP operating margin in the range of 44% to 45%; GAAP EPS in the range of $1.16 to $1.22; and non-GAAP EPS in the range of $1.89 to $1.95. And as usual, we published a CFO commentary document on our Investor Relations website, which includes our outlook for additional items as well as further analysis and GAAP to non-GAAP reconciliations.

In conclusion, I am pleased that we delivered strong top line and earnings growth for 2025, and we finished the year with a record backlog and ongoing business momentum, setting ourselves up for a great 2026. As always, I’d like to thank our customers, partners and our employees for their continued support.

And with that, operator, we will now take questions.

Questions and Answers:

Operator

Thank you. [Operator Instructions] And our first question comes from the line of Vivek Arya with Bank of America Securities. Your line is open.

Vivek Arya

Thanks for taking my question. Anirudh, I’m curious, have you seen any disruption or change of thinking whatsoever at your customers in terms of them using AI to reduce or eliminate demand for EDA or IP or any other computer-aided engineering tools? Is there a scenario at all that you have discussed, or that your customers might contemplate, where they can use more of their internal tools or AI to displace what you’re doing right now? Thank you.

Anirudh Devgan

Yes. Vivek, thank you for the question. I know this is a topical question, top of mind for investors. But like I said before, for us, we always look at things as a 3-layer cake. And there are different kinds of software. There’s a lot of discussion in terms of whether AI will replace some forms of software. But our software is engineering software; you’re doing very, very complex physics-based mathematical operations. So any AI tools that we are developing or our customers are using basically, in the end, call our software to get the job done properly. So what we are seeing instead — and you can see that in our results and in our discussions with customers — is that as we move to these Agentic flows, AI uses more of our software to get the job done, not the other way around.

So even our own super agent, which is ChipStack, is doing a part of the flow that, first of all, was not automated. Even in regular AI, there is a lot of automation in coding. That’s one of the big applications. But if you move that over to chip design, if you look at our flow, there is an equivalent of coding, which is RTL code, which describes the chip or the system. But that part has been mostly manual. And then after that, our tools kick in to optimize the RTL and to simulate and verify the RTL. So what we are doing with our AI flows, the top layer, is adding extra tools that will automate the writing of RTL, but then it still calls a lot of the middle layer tools and a lot of the base tools to implement and verify that.

And I’ve said before, what we are seeing at our customers is they want to use more AI. And I think they will invest more in R&D. I think they will also hire more engineers. But as a percentage of spend, more spend will go to automation and compute, because the other thing which is unique to our end market is that the workload is exponential. If chips go from 100 billion transistors now to 1 trillion in a few years, customers need to do a lot more work, and then some of the work will be done by AI agents calling our base tools. So overall, to answer your question, we have seen absolutely no discussion with customers of reducing the usage. On the contrary, all these AI tools are increasing the usage of our tools. And of course, the AI build-out also, as customers design more and more chips, is increasing the usage of our tools.

Operator

And our next question comes from the line of Joe Vruwink with Baird. Your line is open.

Joe Vruwink

Great. Thanks. I maybe wanted to ask about how you’re approaching the outlook for 2026. It looks like recurring revenue is set to accelerate, and that’s normally well supported by backlog. Can you talk about the key contributors to the recurring improvement? And then just on the 20% or so of revenue that comes from upfront sources, you obviously had an incredible 2025 with your hardware platforms, and it sounds like you’re expecting growth there again. I think we’re in year 2 of that platform now. Can you see a repeat of what you observed back in 2023? That was a very strong year 2 for the second-gen product. How are you thinking about that product and just where it is in its life cycle?

John Wall

Yes. Thanks for the question, Joe. This is John. As usual, at this time of the year, our guidance will reflect what we believe to be a prudent and well-calibrated view of the year. We finished the year with very strong momentum on backlog, and we saw that strength right across the board across all lines of business. And as Anirudh says, our view of the AI era is that it increases workload faster than headcount grows and Cadence monetizes workload through broad portfolio proliferation across EDA, IP, hardware and SDA. And we’re seeing that flow through into all lines of business for us. Now typically, at this time of the year, our hardware is a pipeline business. We’re expecting a very strong first half for hardware. But because we only typically see 2 quarters in the pipeline, we’re quite prudent in the second half of the year in this current guide, but that’s no different to what we normally do.

Similarly, we typically try to derisk the guide for things like hardware and China at this time of the year. And if you look at how China has performed in the last 2 years, I think it was 12% of our revenue in 2024 and 13% in 2025, and we expect it to be in that kind of range, 12% to 13% of our revenue, for this year as well. But yes, we’re seeing absolutely huge strength across the board, and we’re delighted with the strength of the guide. And a key transparency metric you’ll see in the CFO commentary is that around 67% of 2026 revenue is coming from beginning backlog. That gives us strong visibility into the multi-year recurring base. So we’re very, very happy to see that recurring base get back to double-digit, low-teens growth.

Operator

And our next question comes from the line of Joe Quatrochi with Wells Fargo. Your line is open.

Joe Quatrochi

Yes. Thanks for taking the questions. Just kind of curious, maybe following up on that. On the verification and emulation hardware cycle, any sort of help on just kind of where you think you are at in that cycle? And then is there anything we should think about just in terms of memory availability from that perspective or just anything about margins given pretty significant price increases that we’ve seen across the DRAM spectrum?

Anirudh Devgan

Yes, good question. So hardware, as you know, sets a record every year, and I expect that trend to continue. And the reason being, of course, these hardware systems have become indispensable to the design of complex chips and systems. Actually, no complex chip, whether an AI chip or any other mobile or automotive chip, is designed without hardware systems, and we have the best hardware system on the market because, just to remind you, we design our own chips, made by TSMC, and we sell full racks. These things have trillions of transistors to emulate other chips. So even though it is reported upfront, as you know, because the customers will buy and use these systems for multiple years, the big customers are buying them almost every year, okay? And I don’t see that trend changing.

And like I indicated when we launched Z3, even Z2 was a very good system. So even though the system is in its second year now, it still has capacity to design systems of 1 trillion transistors, okay, which will last for several years to come. And in a few years anyway, we’ll launch our next system. So we’re always ahead of what the market will need. But in terms of demand, we don’t see any difference. If you ask me this year versus last year, the demand is only stronger, and you can see that in the backlog. And then how much this will grow, we will see. Like John said, at the beginning of the year, we are a little careful with hardware. But we’ll update you in the middle of the year depending on how things are going.

But hardware systems are performing well. We are taking share. And actually, what I feel is we are taking share in all our major product segments. So we are taking share in hardware. We are taking share in IP, which is really good to see now. This will be almost the third year of strong IP growth. You know us, right? One year doesn’t make a trend for us. But after 3 years, I can say that I feel good about our IP business. Hardware has been strong for a while. EDA, our core business, is doing phenomenally, okay? 3D-IC, we are taking share. Agentic AI, we are first to market. We already have a lot of customers using our Agentic AI flow. So not only do I feel good about the hardware business and where it is, I actually feel really good about our overall portfolio and how we are performing.

Operator

And our next question comes from the line of Jim Schneider with Goldman Sachs. Your line is open.

Jim Schneider

Good afternoon. Thanks for taking my question. I was wondering if you could talk about a little bit more about your — your AI workflows. And if it’s possible to quantify any of the benefits that your customers are getting from those workflows today, whether that be time to market, enhanced productivity per seat or so on? And maybe separately kind of address how you’re able to monetize that and how broad that is across your portfolio today? Thank you.

Anirudh Devgan

Yes, Jim. I mean, first of all, the results are quite remarkable with AI. A few years ago, there was some skepticism of how much AI can benefit. But now, and this is true in other areas, too, definitely in chip design, the results are fantastic and real. And I think there is a difference, I believe, in chip design versus other industries, because one of the issues with AI flows is that you really don’t know whether the AI result is correct or not. And this has been one issue even in vibe coding or software: okay, generate some code, but then you spend a lot of time verifying whether it is correct or not.

And in some other industries, there are no formal languages to design things. But in chip design, first of all, we have formal languages to design things, which is RTL. Plus, over the last 20, 30 years, we have built all these products whose job is to make sure that the RTL is correct, okay? So all our middle layer tools: verification, simulation, optimization. So therefore, AI can be a force multiplier and accelerant to chip design versus other areas, okay? And the results, just to highlight, like we talked about Samsung getting 4x productivity (this is a quote from the customer), or Altera talking about 7 to 10x productivity improvement. Now, on parts of the flow like RTL writing, which have been mostly manual, there can be massive improvements in productivity.

And in the back end, for example, physical design, there could be 7% to 10% PPA improvement, maybe 12%, in that range. And just so you know, when you go from one node to another, like 5-nanometer to 3-nanometer, or 3-nanometer to 2-nanometer, the gain could be like 10%, 20%. So you’re getting half the gain, or almost the same gain, as a node migration through better optimization with AI, okay? So I think the results are real. We have demand from almost all customers now to engage rapidly because they want to deploy AI in their R&D function. And you have to remember, the way our customers apply AI in their R&D function is through Cadence and Cadence tools, right? So they are all very eager to try all these things. We have engagements with all the top customers.

And on monetization, I’ve always said in the past that it takes some time for monetization to happen. It takes 2 contract cycles, and I think we are well into that now. So I think we are seeing the monetization now, which is reflected in our results and in our record backlog. And Agentic AI can give further monetization. The way we go to market with Agentic AI will be different, because this is a new tool category, something that EDA never automated. Writing of RTL or test benches was manual, right? So we will price it like a virtual engineer or agent. So that would be extra business. And our customers are willing to spend on that because it is a productivity improvement for them.

And then on top of that, just like before, it will call the base tools, and a lot more license usage will happen on our base tools. And the reason for that is, in the non-AI flow, it is a misnomer that we are seat-count limited. We are exploration limited. Even if a manual user is running our tool, they will run like 3 or 4 or 5 experiments in parallel to see what is the best PPA. But an Agentic AI flow could run 10 or 100 experiments in parallel. So our plan for monetization, which is working well, is we’ll add the Agentic AI part. We will charge for the Agentic flows as a virtual engineer, for things like RTL writing, and then, of course, for the licenses in the base layer, and see how that goes. But from a customer standpoint, I mean, there’s a lot of demand to try all these new tools.

Operator

And our next question comes from the line of Gary Mobley with Loop Capital. Your line is open.

Gary Mobley

Hi guys, let me extend my congratulations on the strong finish to the year. John, I believe there’s been an effort to move your SD&A customers onto 1-year license terms. And if I’m not mistaken, that’s been an impediment to growth. So the question is, is that the reason why SD&A revenue grew only 13% in 2025? And what’s the consideration for 2026? And then what’s the consideration for Hexagon when you roll that business in? I believe they were at a $240 million revenue run rate. Is that number limited because of this 1-year license term transition?

John Wall

Yes. Thanks for the question, Gary. Yes. And you’re right in terms of SD&A, we lapped some tough comps in SD&A in Q4 2025, partly due to the multi-year business. So we did some multi-year business in Q4 2024 through our BETA subsidiary, and we have deliberately been moving to more annual subscription arrangements for BETA in 2025, and that impacts the year-over-year numbers. In saying all that, we’re very pleased with SD&A’s strategic trajectory and its role in the chip-to-systems thesis. From a mix standpoint, SD&A was like 16% of revenue in ’25, consistent with ’24 when you look at the year, and we expect it to grow. We expect all product groups to grow, but we’re not guiding by segment.

In relation to Hexagon, I think we’ve said this before at some fireside chats that the annualized revenue for Hexagon is about $200 million. Now what that means, of course, is that it’s kind of like BETA, where BETA did a lot of January 1 deals. If that deal closed by the end of Q1, you’re probably looking at $150 million of revenue for the year. But we’re not guiding. We don’t have final numbers for anything like that now. So we haven’t included anything for Hexagon in this guide.

Operator

And our next question comes from the line of Charles Shi with Needham. Your line is open.

Charles Shi

Hi, thanks for taking my question. Anirudh, I thought the best highlight of the quarter was the announcement around the marquee hyperscaler customer adopting the Cadence digital full flow. I think you characterized it as for the first COT chip that they’re going to tape out. So it sounds like we should expect that particular hyperscaler having a COT chip coming out 2 or 3 years down the road. And I just want to ask, how many hyperscaler customers right now are doing COT? And even for that particular customer having its first chip on COT, what do you think the ramp is going to be? How will they proliferate COT to the other chips they are developing? Because every hyperscaler these days has more than one chip, that’s my understanding. I just want to get some sense from you of where you are in terms of that whole COT proliferation. And I believe this is one of the great stories about Cadence and about EDA in general, but I want to get your sense. Thank you.

Anirudh Devgan

Yes. Thanks for the question, Charles. Without getting into specifics of a particular customer, I have said this for some time now. We work with our customers confidentially. We share our road map with them. They share their road map with us. And we are in a unique position to work with all the leading companies across the globe, right? And so I have said for a while that, first of all, the trend that customers, especially these big hyperscalers, will do their own chips is even more firm now than 1 or 2 years ago.

And it’s evident now with some of the big hyperscalers, the success they’re having with their own chips, especially in the last 6 months. That has become evident because it was not clear 1 or 2 years ago; people thought companies would not design their own chips. It doesn’t mean that merchant semi will not do well. Merchant semi will do fabulously, but the big customers will design their own chips, okay? And then it is also true that over time, the big customers will do more and more things in-house, starting with ASIC, to hybrid COT, to full COT. There’s another step these days versus the old days, which is hybrid COT, because these chips have multiple chiplets in them. So the customers can do some of the chiplets themselves, some can be outsourced, and then they can do all of them themselves. So I think this trend is going to happen.

And the reason we talk about it is that it is happening, and different customers will do it at a different pace. But eventually, I think there will be multiple customers with their own chips. There will be multiple, of course, very significant standard general-purpose merchant chips. And almost all of the big customers will, over time, do more and more COT. And like you said, they do multiple chips now, at least 3 major platforms for each hyperscaler. So all this is good for us: good for more EDA consumption at the system companies, more IP being used internally, of course more hardware, more system tools, because they are system companies in nature. So we just want to make sure we are well positioned for that, but the trend of these big companies doing more themselves is only accelerating. And then as you know, this will also apply to other verticals like automotive and robotics and things like that.

Operator

And our next question comes from the line of Siti Panigrahi with Mizuho. Your line is open.

Siti Panigrahi

Yes. Thanks for taking my question. You talked about robust design activity. Can you give us some color on any kind of improvement in your traditional semi segment versus AI or automotive? That would be helpful. And Anirudh, on the physical AI side, that was a big focus at CES recently. Have you started seeing any traction in that space? When do you think that will be a significant contributor?

Anirudh Devgan

Yes. Thanks for the question, Siti. On both: design activity is accelerating, like I was saying, and that’s true for system companies and semi companies. A lot of the projections are that the semi industry might hit $1 trillion this year, which used to be the projection for 2030, so we are about 4 years ahead of that. This is very good news for the industry. And of course, we have deep partnerships with all the major semi players, and definitely the AI leaders like NVIDIA and Broadcom. In the prepared remarks, we highlighted our new collaboration with Broadcom, which is, of course, doing phenomenally well, and so is NVIDIA.

And then, of course, all the memory companies are doing phenomenally well. So overall, I think the semi companies, along with the system companies, are doing great. I see that especially in AI and memory, but in the general market as well. I’m sure you follow that the mixed-signal companies, the regular semi companies, let’s call them, also have, I think, a better outlook for ’26 than ’25. So it’s good to see broad-based strength in the semi business, which is about 55% of our business. That just creates a better environment for us to deploy our new solutions. And they all want to deploy AI, like we discussed earlier, and that’s true for both semi and system companies. So overall, I feel the environment is much healthier starting ’26 than it was a year ago.

Operator

And our next question comes from the line of Lee Simpson with Morgan Stanley. Your line is open.

Lee Simpson

Great. Thanks for squeezing me in here. I just wanted to go back to ChipStack, if I could. It seems relatively clear that you see the super agent as something that can transform specifications into Verilog RTL, or at least assist the coding thereof, and then pull in base-layer tools for debug and optimization, so you get a more deterministic outcome for customers. But you teased us a little with the idea of where the further monetization would come. It didn’t sound like it would be on a subscription basis; it would be on a sort of value-to-customer basis. So I wonder if you could expand a little on that and how that would be monetized? And in particular, whether or not this would be margin accretive. You’re at 45% now already, so could this help kick that on? Thanks.

John Wall

That’s a great question, Lee. If I might jump in here on the monetization side: we don’t see AI forcing a wholesale change from subscriptions to consumption. Our customers still want predictable access to trusted sign-off engines and certified flows, so multi-year subscriptions remain the core of our business. What AI does is change how much customers run the tools and where value is created. There’s more automation, more iteration, more compute. So we’ll attach more usage-based pricing for incremental capacity and AI-driven optimization; we have card models and token models that handle all those things. And then in a few areas on the services side, we can offer outcome-oriented packages structured around measurable improvements like cycle time and closure productivity, with clear scope and governance.

That’s how we’ve been going to market in recent times, and it’s worked out well for us. You can see how it’s already showing up in our recurring revenue. Now, we’ve been prudent in our outlook, and we’re not expecting an uptick in that, but there’s plenty of opportunity for Cadence in AI. As Anirudh said in his opening comments, there are 2 real things that differentiate Cadence. First, we’re engineering software anchored in physics and mathematically rigorous optimization. That’s not a nice-to-have; it’s a core truth that our customers require as complexity rises. And second, AI is not replacing our products, it’s amplifying demand and accelerating adoption. You see that in our results for 2025, and I think you see it in our guide for 2026. Anything to add?

Operator

And our next question comes from the line of Jason Celino with KeyBanc Capital Markets. Your line is open.

Jason Celino

Hey, great. Thank you for taking my question. Looks like IP had a phenomenal year. I know you have a slate of new exciting titles coming out, but I just wanted to ask how that translates to pipeline? Like does it take time to sell these new IP titles? And then with the guide overall, it looks mostly first half weighted. Does your visibility into the IP today look more first half or second half? Thanks.

Anirudh Devgan

IP is doing great. Like I said, we want to see multiple years of performance before we call it out, and starting last year I began calling it out because we saw multiple good years and a good outlook into ’26, which I think should come true. So our starting backlog and everything in IP is strong. And beyond our traditional business with TSMC, which is doing phenomenally, we have the opportunity to engage with the newer foundries. So overall, I think IP will be good this year, and we’ll see how it progresses. We’ll keep you updated, but it should be a strong year for IP in ’26.

Operator

And our next question comes from the line of Jay Vleeschhouwer with Griffin Securities. Your line is open.

Jay Vleeschhouwer

Thank you. Good evening, Anirudh. If we think about what’s currently occurring with the AI phenomenon in long EDA historical terms, the last time I would argue there was a major, let’s call it generational, technical and procedural change in the industry was in the early 2000s. And I’d like to ask how this time might be different, in the sense that last time the change was fairly narrowly based in terms of the number of products that grew or were newly adopted. We also saw the very interesting phenomenon where average contract durations actually shrank.

I think customers were looking to mitigate technical risk and wanted to retain some vendor flexibility or optionality, hence the shorter durations at that time. Would you say that this time around the adoption phenomenon might last longer than the few years of that earlier generation, and that there wouldn’t necessarily be an adverse effect on contract durations, perhaps even a lengthening with longer commitments from customers? And maybe talk about how, in those big respects, this phenomenon might be broader and longer lasting than what occurred many years ago, even if it has some similarities.

Anirudh Devgan

Yes. That’s a great point, Jay. We have to see how it unfolds, because each time is similar but different. But we are not seeing any change in duration, which is good. And there is always more opportunity for more and more add-ons, like we have mentioned in the past. Now it will affect all parts of the flow; in the 3-layer cake, the top 2 layers will fuse together, AI and our core engines. And I think there is opportunity to add new product categories, especially in the front end, this kind of super agent to write RTL. And it’s not just writing RTL, which is different from regular vibe coding.

So what is exciting about ChipStack is that it’s not just writing RTL, but also writing test benches and verification flows, because as you know, Jay, chip verification is as important as chip design. If you can’t verify, you can’t be first-time right, and all our customers want things to be first-time right. So I think the opportunities for AI in verification are huge, because that’s an NP-complete, exponential problem. What is also exciting to me about the new Agentic AI tools is the ability to verify much more accurately.

And then we go from there. At this point, I feel good about all 3 layers of the cake. We have been innovating, and we have been first to market in porting our software to new hardware platforms, whether parallel CPUs, GPUs or custom chips. Our base tools are performing remarkably well; we are taking share in almost all segments. And we are first to market with Agentic AI. So I feel good about the portfolio and about the engagement. Now, exactly how it will unfold is very difficult to predict, but I think it should be more long-lasting. We’ll keep you posted, but so far, so good.

John Wall

Yes. This is John. We’ve been around a long time chasing Moore’s Law, and we’ve built sales models that generally adapt to aligning price with value while preserving the durability of our recurring revenue model. What you can count on us to do is that we won’t undermine customer predictability; subscriptions will remain the anchor of our primary engagement with our customers. And we won’t take unbounded outcome risk either. Outcomes will be scoped and measurable, and we’ll price on value metrics. Customers can control things like jobs and runs and compute and throughput. So we will be very deliberate and thoughtful in how we grow, as we always are.

Operator

And our next question comes from the line of Gianmarco Conti with Deutsche Bank. Your line is open.

Gianmarco Conti

Yes. Hi, thank you for squeezing me in, and congrats on a great quarter. I have a long question. Sorry to go back to ChipStack, but could we start with some detail about how to bridge the gap between ChipStack, which we know is about RTL automation, and where it evolves versus Cerebrus, which is about implementation. I guess my question is about whether there could be some cannibalization in the future. And staying on the AI theme, given where model development is happening in AI, could we have some information about whether you’re seeing more competition, particularly from the startups that are present there, and whether that’s coming up in pitches to clients. And finally, just to pile on: are there any hard constraints when you run more agents, given that you’re going to require more compute, especially at higher design scales?

Anirudh Devgan

Yes. Sorry, there’s some noise on the line, so I think I got the gist of the question, but I may not have gotten all the points; I apologize in advance. I think your question is about the front-end agent versus Cerebrus, and also about start-ups. So first of all, Cerebrus is super critical. I think there will be several kinds of Agentic AI flows that will be needed. Now, we highlighted ChipStack because it’s new, and it’s a new category of RTL design and verification. But there are several agents that we are actively developing. We also extended Cerebrus to the full flow. So there has to be a front-end design agent, just as there is Cerebrus.

And there’s a back-end agent for physical implementation, because that takes a lot of time right now, and there’s a lot of demand for making implementation more efficient. Similar principles apply in Cerebrus AI Studio: we do more exploration, and the customer gets better results as a result. There will be a lot of activity we will highlight in the future on the back end, on physical design. So digital design and verification is one area, and physical design is another. Analog, of course, is also ripe; finally, we have new technology to see if we can automate more and more of the analog and migration flows. And then there’s packaging and system design. So we highlighted ChipStack because we’re super excited about it, but that doesn’t mean the others aren’t coming; there are 4 or 5 big agentic flows that we are developing.

On the start-ups, we always watch all the start-ups, and we have a history of acquiring them if they are good, but more in the earlier stages, like we did with ChipStack. I think that was the best AI start-up out there. And we are very confident in our own R&D. We have about 10,000 people, the best R&D team in computational software, and half of them have advanced degrees. We have 3,000 customer support engineers. We’re regularly meeting with customers; with big customers, in a given week we’ll have multiple meetings between our R&D and their R&D. So we keep track of what the customer wants, and we have massive investment in R&D. Typically, I think start-ups are successful in areas we don’t focus on, or if they want to enter new areas. But in terms of AI, we are completely focused. We will always use a start-up as an accelerant if we need to, but we will have massive investment in this space across all the major domains our customers want.

Operator

And our next question comes from the line of Ruben Roy with Stifel. Your line is open.

Ruben Roy

Yes. Thank you, Anirudh. You answered bits and pieces of what I’m about to ask, but I was hoping to put together a question on SD&A, just to understand the longer-term strategy. It seems like some companies, enterprises, industrials or otherwise, are maybe thinking about pulling some simulation workloads in-house or partnering with the AI infrastructure ecosystem. We’ve seen Synopsys and NVIDIA talk about targeting Omniverse digital twins for that type of thing. How should investors think about your strategy? Is it a neutral strategy where you’ll work with accelerated compute providers, et cetera, and their tools? Or are you trying to build an ecosystem that’s Cadence-specific? I’m just trying to understand the longer-term strategy and thinking around SD&A. Thank you.

Anirudh Devgan

Yes. Thank you for the question. In SD&A, there are 2 critical areas for us. One is 3D-IC and all the innovation that’s happening there, including package-level analysis. The other is physical AI, physical simulation for planes and cars and robots and drones, and that’s one of the big reasons we acquired BETA and then Hexagon. But we are focused on building the core engines, and the core engines will work with accelerated compute. We have done joint GPU work with NVIDIA for years, and we were the first to port all our solvers to accelerated compute platforms, because a lot of physical simulation, for cars and planes and robots, CFD and structural simulation,

and I’ve said this before, is, without getting too technical, naturally matrix multiply. And GPUs and NVIDIA are exceptional at that, because AI at its core is matrix multiply, so it’s a good fit. And then we work with Omniverse and so on. Omniverse is a great platform, but when customers actually run Omniverse, they will run our tools through it. So this is another way to go to market, in addition to going directly to customers. We are neutral on that, but Omniverse is a great platform to deploy our products, and NVIDIA has highlighted that with several of our customers. Our goal is to build the basics; we are an engineering software company. We build the basic solvers that can solve the most difficult problems, combine them with AI, combine them with compute, and deploy them to all platforms. So I feel good about our position that way.

Operator

And our next question comes from the line of Andrew DeGasperi with BNP Paribas. Your line is open.

Andrew DeGasperi

Thanks for fitting me in. I just had a question. You mentioned several times in the prepared remarks that you’re taking share across the board. I was just curious, is this a change relative to previous quarters? Is it focused in any particular area? And are you surprised by this relative to what you’ve seen in the past? Thank you.

Anirudh Devgan

Yes. I think our competitive position has improved, so we are noticing that and calling it out. Definitely in hardware, given the uniqueness of our platforms, and in IP. A lot of it you can see in the results as well; our growth is much higher than the market’s. So IP is doing well, hardware is doing well, and so are EDA and 3D-IC. We are holding our traditionally strong position in analog and gaining in digital and verification. So I feel very good about it. We are a technology-centric, R&D-centric company first, and I think all those investments are paying off, with customers adopting more of our flows.

Operator

And our next question comes from the line of Kelsey Chia with Citi. Your line is open.

Kelsey Chia

Hi Anirudh and John, congrats on the great results. I’d like to dive in a little on China. John, you mentioned that you contemplated more prudent guidance for China. China revenue grew 18% last year, outpacing the corporate average and also well above your initial guidance heading into 2025. How should we think about the sustainability of this strength? And what assumptions have you embedded in the guidance?

John Wall

Yes. So look, as we said earlier, the assumptions embedded in the guidance are that we saw 12% of revenue coming from China in 2024 and 13% in 2025, and we expect it will be in a similar range, 12% to 13%, for 2026. What we’ve seen in China is that design activity remains very, very strong, and we’re seeing strong bookings growth in the region. But visibility in the pipeline is near term, in the first half of the year. So there’s probably more prudence in the second half of this year’s guide for China than in the first half, because we have more visibility in the first half. Anything to add on design activity in China?

Anirudh Devgan

Design activity is good in China, and I think it has stabilized. We mentioned this last year as well; the second half had stabilized, and I think it continues to be strong. All the trends that are in the US are also in China: a lot of AI chips, and a lot of physical AI, which is even stronger there with cars and autonomous driving and EVs. So it’s good to see China doing well.

Operator

And our next question comes from the line of Joshua Tilton with Wolfe Research. Your line is open.

Joshua Tilton

Hey guys, thanks for sneaking me in, and I will echo my congratulations on a strong quarter. I have a high-level one. I know a lot of times we focus on what the 3-year CAGR has been. And I think on this call, Anirudh mentioned that semi companies still represent, from my understanding, about 55% of the business. So my question is, how do we think about growth over the next 3 years as the mix of semis and systems levels out, and as what feels like the mix of upfront and recurring levels out, at what I’m assuming are more sustainable levels than the shifts you’ve seen over the last few years?

Anirudh Devgan

Yes. We are super excited about the system companies doing more silicon. There have been some questions about this in the past, and like I said before, I think this is an irreversible and accelerating trend, okay? Of course, we gave several examples this time. Especially because of AI, the system companies will do a lot, and with physical AI they will do even more. Now, that 55-45 number, first of all, moves very, very slowly, because the semi companies are doing well, too.

I mean, we are growing at a record pace, but both of them are growing. On the semi companies, what NVIDIA has done, of course, is phenomenal. What is happening with Broadcom is phenomenal. And then Qualcomm, MediaTek; there are so many semi companies doing phenomenally well. So on the ratio, I think system companies will contribute more and more, but it doesn’t move as fast as you would think, which is a good thing, because the semi companies are also growing rapidly. And of course, semi companies will have an essential role in the build-out of AI, which is driving all this growth. So that’s what I would like to say.

John Wall

Yes. And Josh, I think I mentioned before that we expect the recurring revenue mix to remain around 80% in fiscal ’26, consistent with 2025. And when we say we have a prudent guide for 2026, I think there’s as much upside on the recurring revenue side of the business as there is on the upfront side. Strategically, we like the balance: recurring provides durability, while upfront reflects areas where customer demand is accelerating and we have differentiated assets. But we’re seeing strength right across the board, and I think that’s why Anirudh is talking about share gains.

Operator

And our final question comes from the line of Nay Soe Naing with Berenberg. Your line is open.

Nay Soe Naing

Thank you for taking my question. Maybe one for John. You mentioned leveraging AI internally, and I was wondering how we should think about that in our models. How should we think about your incremental margins going forward? I think your ’26 guide implies incremental margins of about 51%, which is slightly below the rate you’ve been trending at in the last few years. So I just wanted to triangulate the internal AI leverage with how you’re guiding for margin for ’26, and how we should think about margin a bit longer term in the age of AI? Thank you.

John Wall

Yes. Thanks for the question. If you look at what we achieved in 2025, we achieved an incremental margin of 59%, I think, and I think that points to the fact that there’s no near-term ceiling on operating leverage for the company. The company has performed at about 45% operating margin, so there’s a lot of upside, given the incremental margin of 59% that we achieved in 2025. Now, generally we’re more prudent with our guide at the start of the year, and we try to build from there. So I think the right compare for the 51% in the current guide is probably what we would guide for incremental margin at the start of each year. And I think it’s one of the strongest guides that we’ve ever had.

And in relation to your commentary about AI and our use of it internally, that’s absolutely right. That’s what Anirudh has been talking about for years now: design for AI and AI for design. Internally at Cadence, we learn a huge amount from our own internal group in terms of how AI is used. We’ve built a great business around emulating hardware, and a lot of our AI usage is like emulating engineering flows. We take advantage of those, and they’re helping us get more value out of the R&D investments we’re making. And we expect to do the same as our customers: when you have access to more engineering capability and can do things faster by leveraging AI, we’ll probably do more R&D, and it will be more people plus more AI, not fewer people.

Operator

And I will now turn the call back to Anirudh Devgan for closing remarks.

Anirudh Devgan

Thank you all for joining us this afternoon. It’s an exciting time for Cadence as we begin 2026 with product leadership and strong business momentum. Our continued execution of the Intelligent System Design strategy, our customer-first mindset and our high-performance culture are driving accelerated growth. Great Place to Work and Fortune magazine recognized Cadence as one of Fortune’s 100 Best Companies to Work For in 2025, ranking it number 11. On behalf of our employees and our Board of Directors, we thank our customers, partners and investors for their continued trust and confidence in Cadence.

Operator

[Operator Closing Remarks]
