Rambus Inc (RMBS) Q1 2026 Earnings Call Transcript

Note: This is a preliminary transcript and may contain inaccuracies. It will be updated with a final, fully-reviewed version soon.

Rambus Inc (NASDAQ: RMBS) Q1 2026 Earnings Call dated Apr. 27, 2026

Corporate Participants:

John Allen, Interim Chief Financial Officer

Luc Seraphin, Chief Executive Officer

Analysts:

Kevin Garrigan, Analyst

Tristan Gerra, Analyst

Aaron Rakers, Analyst

Gary Mobley, Analyst

Sebastien Cyrus Naji, Analyst

Kevin Cassidy, Analyst

Bastien Faucon-Morin, Analyst

Unidentified Participant

Presentation:

Operator

Welcome to the Rambus first quarter fiscal year 2026 earnings conference call. At this time, all participants are in a listen only mode. At the conclusion of our prepared remarks, we will conduct a question and answer session. If you would like to ask a question, you may press Star one on your touchtone phone at any time. If anyone should require assistance during the conference, please press Star zero at any time. As a reminder, this conference call is being recorded. I would now like to turn the conference over to John Allen, Interim Chief Financial Officer.

You may begin your conference.

John Allen, Interim Chief Financial Officer

Thank you, Operator, and welcome to the Rambus first quarter 2026 results conference call. I am John Allen, Interim Chief Financial Officer at Rambus, and on the call with me today is Luc Seraphin, our CEO. The press release for the results that we will be discussing today has been filed with the SEC on Form 8-K. We are webcasting this call along with the slides that we will reference during portions of today's call. A replay of this call can be accessed on our website beginning today at 5:00 p.m. Pacific Time.

Our discussion today will contain forward looking statements, including our expectations regarding projected financial results, financial prospects, market growth, demand for our solutions, and other market factors, including reflections of the geopolitical and macroeconomic environment and the effects of ASC 606 on reported revenue, among other items. These statements are subject to risks and uncertainties that may be discussed during this call and are more fully described in the documents we file with the SEC, including our 8-Ks, 10-Qs and 10-Ks.

These forward looking statements may differ materially from our actual results, and we are under no obligation to update these statements. In an effort to provide greater clarity in the financials, we are using both GAAP and non-GAAP financial presentations in both our press release and on this call. A reconciliation of these non-GAAP financials to the most directly comparable GAAP measures has been included in our press release, in our slide presentation and on our website at rambus.com on the Investor Relations page under Financial Releases.

In addition, we will continue to provide operational metrics such as licensing billings to give our investors better insight into our operational performance. The order of our call today will be as follows: Luc will start with an overview of the business, I will discuss our financial results, and then we will end with Q&A. I will now turn the call over to Luc to provide an overview of the quarter. Luc?

Luc Seraphin, Chief Executive Officer

Good afternoon everyone, and thank you for joining us. We opened 2026 with a strong first quarter, meeting our financial targets and broadening our portfolio to address the accelerating demands of AI. The quarter reflects solid momentum as we execute against our roadmap to support long term profitable growth for the company. This is an exciting time for Rambus, and we are well positioned to capitalize on the market trends in the data center and AI. For decades we have developed foundational technologies and solutions across a wide range of memory and interconnects.

That heritage positions us well as systems become more diverse, memory dependent and performance driven. To give more context, there are several market and technology trends playing out across the data center and AI that continue to work in our favor. As AI adoption accelerates and inference use cases expand, workloads are becoming more persistent and context rich, and performance is increasingly defined by how efficiently data can be stored, accessed, moved and secured to support these workloads. AI infrastructure is becoming more complex and heterogeneous, combining a mix of traditional and AI server platforms to support orchestration, data management and real time execution at scale.

At the same time, the expansion of inference, and particularly agentic AI with continuous reasoning and multi step workflows, is driving more always on activity and placing even greater demands on memory capacity, bandwidth, latency and power efficiency. Together these trends are driving new memory and connectivity architectures to support purpose built solutions across a wider range of use cases and form factors. This increases our opportunities for richer chip content and broader adoption of our industry leading IP, reinforcing our position for sustainable long term growth.

Now let me turn to our quarterly results, starting with our chip business. Our performance reflects strong execution and ongoing leadership in our core DDR5 RCD chips. We delivered product revenue of $88 million in Q1, in line with our guidance and up 15% year over year. Looking ahead, we expect to deliver double digit product revenue growth in the second quarter. We continue to see increasing customer adoption of new products and remain well positioned to support the ramp of next generation platforms as they enter the market.

We continue to execute on our strategy of delivering comprehensive, industry leading chip solutions to address growing customer and market requirements. As I mentioned in my opening remarks, we recently expanded our product portfolio with the introduction of our chipset for JEDEC standard LPDDR5X SOCAMM2 modules, building on the same signal and power integrity expertise we have applied across multiple generations of DDR. This chipset is the first offering in our roadmap of LPDDR based server module solutions and includes new voltage regulators as well as an SPD hub to support reliable, power efficient, server class operation.

As part of that roadmap, we are actively working with industry partners on the definition and development of LPDDR6 based SOCAMM2 solutions, which would offer a natural upgrade path for future generation AI platforms. As AI server architectures diversify to address varying performance, power efficiency and form factor requirements, some platforms are now leveraging LPDDR based memory. While LP memory offers attractive power characteristics, it was originally designed for mobile environments with very short signal paths and tight power margins, making reliable deployment in server systems inherently challenging.

The SOCAMM2 addresses these limitations through a compact, CPU proximate module architecture with optimized signal routing and localized power management to enable LPDDR modules to operate in server environments. The Rambus SOCAMM2 chipset enables power efficient, reliable operation at up to 9.6 gigabits per second in a compact module form factor. As LP based server modules scale to higher speeds and bandwidth in future generations, they will require increasingly sophisticated interface, power and control functionality.

This progression is similar to what we have seen in DDR based server modules and reinforces our opportunity to extend our roadmap of high value chip content across memory types in the future. As I mentioned previously, the ongoing expansion of AI is driving demand for a broader range of memory types and form factors. To meet these needs, we continue to build on our leadership solutions in DDR5, including chipsets for RDIMMs and MRDIMMs, and selectively expand our roadmap of novel solutions as they begin to play a complementary role in heterogeneous systems.

With active engagement across customers and ecosystem partners, we are helping shape next generation server modules, reinforcing the opportunity for richer chip content and sustained growth. Turning now to Silicon IP, we saw strong customer traction in the first quarter with continued design wins at Tier 1 companies and growing engagement across our portfolio. We remain focused on delivering industry leading premium IP that enables differentiated solutions for AI in the data center including accelerators and networking chips across a wide range of architectures.

There's increasing momentum for custom silicon in AI, especially among hyperscalers, as they tailor hardware to their own software stacks and deployment needs, optimizing for performance, power efficiency and total cost at scale. This is driving an accelerating pace of design and expanding demand for value added IP to support memory bandwidth, advanced connectivity and security. During the quarter we saw growing traction for our value added PCIe retimer and switch IP to support increasingly complex AI systems across scale up and scale out environments.

We also expanded our memory IP portfolio with the introduction of the industry's fastest HBM4E controller, setting a new benchmark for AI accelerator memory throughput. In addition, we launched a new network security engine designed for Ultra Ethernet to protect distributed AI clusters. All of these IP offerings are in great demand and further strengthen our position as a critical enabler of next generation compute and connectivity solutions for AI infrastructure. In summary, we executed well in the first quarter.

We delivered solid results and expanded our offerings for both chips and IP to extend our leadership in our core markets. As we look ahead, Rambus is well positioned to capitalize on the megatrends in data center and AI. Our sustained technology leadership, disciplined execution and increasing traction across our portfolio of leadership products will continue to fuel our results. With that, we expect strong growth in 2026, and I'm confident in our long term trajectory. As always, I want to thank our customers, partners and employees for their continued trust and support.

Now I'll turn the call over to John to walk through the financials. John?

John Allen, Interim Chief Financial Officer

Thank you Luke. I’d like to begin with a summary of our financial results for the first quarter. On Slide 3, we delivered first quarter revenue and earnings in line with our guidance with solid contributions from each of our diversified businesses. We also continued our strong track record of cash generation. This performance reflects the continued strength in our business model. Our strong balance sheet and disciplined capital allocation enable us to invest in growth initiatives while returning value to shareholders.

Let me now provide you a summary of our non-GAAP income statement on Slide 5. Revenue for the first quarter was $180.2 million, which was in line with our expectations. Royalty revenue was $69.6 million, while licensing billings were $70.8 million. The difference between licensing billings and royalty revenue mainly relates to timing, as we do not always recognize revenue in the same quarter as we bill our customers. Product revenue was $88 million, representing 15% year over year growth driven by continued strength in DDR5 products and ramping new product contributions.
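The billings versus revenue timing John describes above can be made concrete with a small, purely hypothetical sketch. The contract amounts and schedules below are illustrative assumptions, not Rambus figures; the point is only that ASC 606 revenue recognition and customer invoicing follow separate schedules that converge over the life of a contract:

```python
# Hypothetical fixed-fee license: the customer is invoiced evenly,
# but under ASC 606 more revenue is recognized up front when the
# license is delivered. Amounts are illustrative ($ millions).
billing_schedule = [25.0, 25.0, 25.0, 25.0]   # invoiced per quarter
revenue_schedule = [40.0, 20.0, 20.0, 20.0]   # recognized per quarter

for q, (billed, recognized) in enumerate(zip(billing_schedule, revenue_schedule), 1):
    print(f"Q{q}: billings={billed}, revenue={recognized}, gap={recognized - billed}")

# Over the full contract the two measures converge to the same total.
assert sum(billing_schedule) == sum(revenue_schedule) == 100.0
```

In any single quarter the two metrics can differ in either direction, which is why the company reports licensing billings as a supplemental operational metric alongside royalty revenue.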

Contract and other revenue was $22.6 million, consisting predominantly of silicon IP. As a reminder, only a portion of our silicon IP revenue is reflected in contract and other revenue; the remaining portion is reported in royalty revenue as well as in licensing billings. Total operating costs, including cost of goods sold, for the quarter were $104.6 million. Operating expenses of $69.9 million were up sequentially due to seasonal payroll related taxes in connection with equity vesting. Interest and other income for the quarter was $6.9 million. We used an assumed flat tax rate of 16% for non-GAAP pre-tax income.

Non-GAAP net income for the quarter was $69.3 million. Now let me turn to the balance sheet details on Slide 6. We ended the quarter with cash, cash equivalents and marketable securities totaling $786 million, up $24 million from Q4 2025, with strong operating cash flow of $83 million, partially offset by $38 million in taxes paid on equity vesting and $17 million in capital expenditures. We increased our inventory balance by $14 million during the quarter and expect to continue building inventory strategically in the second quarter.
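As a sanity check, the Q1 non-GAAP figures quoted above tie together arithmetically. A back-of-the-envelope sketch using only numbers stated on this call (the 16% flat tax rate is the assumption the CFO cites):

```python
# Reconciliation of the Q1 non-GAAP income statement as stated on the
# call (all amounts in $ millions; 16% assumed flat tax rate).
revenue = 180.2                 # royalty 69.6 + product 88.0 + contract/other 22.6
total_operating_costs = 104.6   # includes cost of goods sold
interest_and_other = 6.9
tax_rate = 0.16

operating_income = revenue - total_operating_costs      # 75.6
pretax_income = operating_income + interest_and_other   # 82.5
net_income = pretax_income * (1 - tax_rate)

print(round(net_income, 1))  # 69.3, matching the reported non-GAAP net income
```

The same stated figures also imply free cash flow of roughly operating cash ($83 million) less capital expenditures ($17 million), consistent with the $66.3 million reported later in the remarks.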

Our strong balance sheet gives us the flexibility to increase inventory to support our product revenue growth and manage through potential supply chain constraints. First quarter depreciation expense was $8.5 million. Free cash flow in the quarter was $66.3 million. Let me now review our non-GAAP outlook for the second quarter on Slide 7. As a reminder, the forward looking guidance reflects our best estimates at this time, and our actual results could differ materially from what I'm about to review.

In addition to the non-GAAP financial outlook under ASC 606, we also provide information on licensing billings, which is an operational metric that reflects amounts invoiced to our licensing customers during the period, adjusted for certain differences. We expect revenue in the second quarter to be between $192 million and $198 million. We expect product revenue to be between $95 million and $101 million, a sequential increase of 11% at the midpoint of guidance. We expect royalty revenue to be between $72 million and $78 million and licensing billings between $76 million and $82 million.

We expect Q2 non-GAAP total operating costs, which include cost of sales, to be between $110 million and $114 million. We expect Q2 capital expenditures to be approximately $14 million. Non-GAAP operating results for the second quarter are expected to be a profit of between $78 million and $88 million. For non-GAAP interest and other income and expense, we expect $7 million of interest income. We expect non-GAAP tax expenses to be between $13.6 million and $15.2 million. We expect Q2 share count to be 110 million diluted shares outstanding.
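At the midpoints, the guidance items above are internally consistent. A quick sketch using only the guided ranges and the same 16% assumed flat tax rate (midpoints are my arithmetic, not company statements):

```python
# Midpoint check of the Q2 non-GAAP guidance ranges given on the call
# (amounts in $ millions except per-share; 16% assumed flat tax rate).
revenue_mid = (192 + 198) / 2      # 195.0
costs_mid = (110 + 114) / 2        # 112.0, includes cost of sales
interest_income = 7.0
tax_rate = 0.16
diluted_shares = 110.0             # millions

operating_profit = revenue_mid - costs_mid    # 83.0, inside the guided 78-88 range
pretax = operating_profit + interest_income   # 90.0
tax = pretax * tax_rate                       # 14.4, inside the guided 13.6-15.2 range
eps = (pretax - tax) / diluted_shares

print(round(eps, 2))  # 0.69
```

The implied midpoint EPS of roughly $0.69 sits inside the 65 to 73 cent range guided in the next remark.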

Overall, we anticipate Q2 non GAAP earnings per share to range between 65 and 73 cents. Let me finish with a summary on slide 8. In closing, we delivered solid results in line with our objectives driving ongoing profitability and cash generation. Our diversified portfolio remains a core strength with each of the businesses contributing meaningfully to our performance. Our patent licensing business continues to deliver consistent, predictable performance supported by the long term agreements we have in place.

Our Silicon IP business is well positioned, driven by critical interconnect and security technologies addressing the accelerating demand for AI solutions. Our product business grew 15% year over year and is poised for sequential growth in the second quarter. We remain focused on delivering long term shareholder value, with year over year revenue growth in 2026. Before I open the call up to Q&A, I would like to thank our employees for their continued teamwork and execution. With that, I'll turn the call back to our operator to begin Q&A.

Could we have our first question?

Questions and Answers:

Operator

Thank you. Ladies and gentlemen, if you have a question, please press Star one on your touchtone phone. Your first question comes from the line of Kevin Garrigan with Jefferies. Please go ahead.

Kevin Garrigan

Yeah, hey team, thanks for taking my questions. Can you just help us think about your product revenue into the June quarter? Last quarter you discussed the low double digit revenue impact from the one time OSAT issue, and I think we may have been expecting a larger sequential increase for June, just given how strong demand has been. So can you just walk us through the drivers for the June quarter product revenue and why the recovery might be a little bit more measured?

Luc Seraphin

Thank you, Kevin. Yes, sure. So the first thing I would say is that the issue that we talked about on the prior call is behind us. Everything has been resolved, and it's a question now for us to restabilize the supply chain, which we are doing, and we see a normalization of that supply chain. So it is behind us. And the revenue for Q2 is guided at 11% over Q1, so that's the right trajectory, and we continue to expect to grow sequentially after that in an environment where our footprint continues to be very strong.

You know, I mentioned on an earlier call that it was on older generations of DDR5. The market is transitioning from Gen 2 to Gen 3, which is a good catalyst for us. I would say we met what we said we would meet on the operational strain in Q1, we are guiding to double digit growth in the second quarter, and we would continue to grow sequentially in the quarters after that. We don't see any issue with the demand, and we don't see any more issues with the quality issue that we had in Q1.

So we feel quite confident for the rest of the year as the market moves from Gen 2 to Gen 3.

Kevin Garrigan

Okay, great. And then just as a follow up on your LPDDR5X SOCAMM2 server module chipset, when would you expect to start seeing revenue from this chipset? And what kind of milestones should we watch to gauge traction?

Luc Seraphin

You know, I would see this as having a very good strategic impact at this point in time. The financial impact in the short run this year is going to be very minimal, just because the volumes are very small for this type of solution. As a reminder, it only addresses a very small portion of the workloads. The volumes are small and the content is small as well, so I wouldn't put it in the model for 2026. But it's strategically very important, because there is a trend to look at LPDDR in the server environment in the long run.

LPDDR still has issues to address the server requirements, but it also has attractive benefits. So we see this as a stepping stone for us. It builds on the fact that over the last few years we have developed our product line as chipsets. So we have the whole chipset for the SOCAMM2, and we have our own teams for power management development, and these are the two new chips that we are proposing for this solution. So we see this as a stepping stone. It allows us to engage with other AI players in the industry, and we're working on next generations as well.

But I don’t think that the financial impact is going to be significant this year just given the volumes.

Kevin Garrigan

Okay, great. I appreciate the color, Luc.

Luc Seraphin

Thank you.

Operator

Your next question comes from the line of Tristan Gerra with Baird. Please go ahead.

Tristan Gerra

Hi, good afternoon. A quarter ago you highlighted shortages and sounded a little bit, maybe not cautious, but muted on the growth opportunity, and you provided a fairly muted data center unit forecast. How are shortages for components potentially impacting your revenue this year? What are you seeing that's different now than a quarter ago? And given the outlook for DRAM to remain very tight next year, how should we look at your product revenue growth, and specifically your RCD growth excluding the new product layers that will be adding on to that, from a year over year growth standpoint?

So in other words, would you expect the same type of growth next year, year over year, versus this year? And I understand you're not guiding for next year, but I just wanted to get a bit more color on what you see in the market that potentially could put constraints on your growth, and clearly that's an issue for a lot of other companies as well.

Luc Seraphin

Yeah. Thank you, Tristan. First of all, let me say a few words about the demand. We do see demand continuing to grow for standard servers, which is good for us. With agentic AI in particular, we expect the server market to grow faster this year than last year. We model it at low double digit growth because, despite the excitement around AI, there's also a large portion of the server market that is not AI related. But we do see demand growing on the server side, which is really a good catalyst for us.

But as we said last quarter, we're watching the situation with supply, especially on the back end. Certainly since last quarter, the situation has not improved. We're working with our suppliers, but the lead times are long and there is tension on the back end. So we take this into account when we forecast our business. This is one factor. The other factor that comes into play when we forecast is the timing of the launch of new platforms in the market.

And as you know, as has been the case in the past for us, the launch of our new products depends on the launch of new platforms in the market, and that's the dependency that we have. So we don't see the situation as materially different than what we saw in Q1, but from a supply standpoint things have not improved, and when we talk to industry players we expect the supply situation to be tight going into 2027 as well.

Tristan Gerra

Okay, that's useful. And then as my follow up question, any additional color on the MRDIMM opportunity? I know you've talked in the past about some very initial shipments late this year, specifically with inferencing. Any additional color as to where it could be in terms of revenue in '27? I think you've talked in the past about your expectation that you'd probably fully realize that $600 million TAM for MRDIMMs by '28. So what should we be looking at for next year, kind of in between, and what's really driving that?

What's going to be driving the demand? Is it going to be mostly inferencing? And any additional color you may have, beyond what you've said in the past, on customer interest for this technology and where it's going to ramp.

Luc Seraphin

Thank you, Tristan. First, we continue to make progress in the launch of these products and the interaction with our customers on MRDIMMs. We're excited by the opportunity for the reasons we've always talked about: larger capacity and larger bandwidth in the same ecosystem, so the adoption is easier. The main factor affecting the ramp of our MRDIMMs is going to be the timing of the launch of the platforms from Intel and AMD in particular, where they do have this capability attached in the next generation platform.

So we continue to see the ramp starting in earnest in 2027. And the SAM at this point in time we still value at about $600 million. As I keep saying, once the products are in the market and the market gives us feedback, we're going to have a much better view of that SAM. But at this point in time, this is the right number to keep in mind.

Tristan Gerra

Great. Thanks again.

Luc Seraphin

Thanks, Tristan.

Operator

Your next question comes from the line of Aaron Rakers with Wells Fargo. Please go ahead.

Aaron Rakers

Yeah, thanks for taking the questions. I guess just building off that last question first. When you think about the $600 million incremental opportunity around MRDIMMs, I can appreciate that there's a lot of unknown variables at this point. But I'm just curious, as you rolled up that expectation, what assumption are you making in terms of attach rate on AMD Venice and Diamond Rapids at this point, and how might that evolve? I would assume that you're being rather conservative on that attach rate at this point.

Then also on that, how do you see CXL starting to play out?

Luc Seraphin

You know, at this point in time we model a low attach rate. As I said, my experience is that until a product is in the market, it's hard to make those models more significant. There are a lot of variables coming into play, as we just said. The most important one is the timing of the rollout of these platforms in the market. There's also the whole situation with DRAM pricing and the prices of modules, and how our customers' customers are going to make the decisions between the combination of modules they want to have in the current memory cycle environment.

So we model a conservative percentage for MRDIMMs at this point in time. But the ramp will start when the platforms ramp in the market, and that's when we're going to have a better view.

Aaron Rakers

Any thoughts on CXL?

Luc Seraphin

Oh, sorry, I missed the second part of your question. Sorry, Aaron. CXL, you know, we do have very good traction on our IP business. We are not planning to launch a semiconductor product at this point in time. We do have this on our shelves, if you wish, as we designed one a couple of years ago. But with agentic AI, we do see demand for standard DIMMs and MRDIMMs as being the main beneficiaries of that, and that's where we will continue to focus our attention.

Aaron Rakers

And then one final quick one. When you talk about the opportunity to grow sequentially in product revenue into the back half of the calendar year, I'm curious, if you were asked about seasonality in the second half versus the first half, is there anything that changes your views, maybe relative to the last couple of years? I think you've seen some decent growth second half versus first half. Thank you.

Luc Seraphin

Yes, thanks Aaron. That's a good observation. We actually do see the second half shaping up slightly differently than the first half, with better growth in the second half. A lot of times it has to do with the launch of new platforms. They typically hit the market, if they are on time, in the second half of the year, and that's where you have more product there. But even if you look at the first half of this year at the midpoint of our guidance for Q2, and you compare it to the first half of last year, we're still growing close to 18%.

So, you know, the first half, despite our issue in Q1, is still much higher than the first half of last year. And we believe the second half is going to show growth. We do see some seasonality and typically our second half is stronger than our first half.

Aaron Rakers

Thank you.

Luc Seraphin

Thank you, Aaron.

Operator

Your next question comes from the line of Gary Mobley with Loop Capital. Please go ahead.

Gary Mobley

Good afternoon, gentlemen. Thanks for taking my question. If I take the sum of your licensing billings and your contract and other revenue in the first half of this year, from the results and the guide, and compare that to the same period last year, it looks like you're generating some abnormally strong growth. Is that due to any sort of variance in the patent licensing, or should I take this to mean that your silicon IP business might actually be running north of $150 million annually right now?

Luc Seraphin

Thanks, Gary. We can see some quarter to quarter variations in these two categories, just from the nature of the business. I would say that underlying this, we see very good traction in our silicon IP business. Actually, AI has an impact on our silicon IP business which is also very positive, as people who develop custom solutions for AI look for new interfaces and new security solutions like the ones I mentioned in the prepared remarks. So we do have very good traction in the silicon IP business, and we continue to expect this business to grow 10 to 15% a year based on that.

Our other business, the patent licensing business, can also change from quarter to quarter. We do renew agreements on a regular basis, and sometimes these agreements are structured in different ways depending on the customers and what they want to do. So we have some strong quarters and some quarters that are not as good. But on average, this business continues to be stable at $200 to $220 million. So I would not pay too much attention to the quarterly split on these revenues.

But the fundamentals are really good. What I would add to this is that if you look at our patent licensing business, our silicon IP business, or our product business, they all benefit from what's happening in the memory subsystem area. They all benefit from AI and the move from AI training to AI inference. And that gives strength to our results. And when we have a challenge like we had last quarter on the product line, then we have these two other product lines that allow us to meet our numbers.

Gary Mobley

Okay, thank you, Luc. As my follow up, I want to ask about CPU roles in AI optimized servers. There's been a lot more noise recently indicating a higher ratio of CPUs to GPUs in AI optimized servers, driven by agentic workloads, and you sort of hinted at that. To put this into a question, I'm curious, if we move to a point in time where we might see a one to one ratio of CPU to GPU, does this alter your view on the growth rate of the SAM for your product revenue, or the size of it?

Luc Seraphin

So we are excited with where the market is evolving with agentic AI and inference. If you look at the types of architectures, software architectures and hardware architectures, that inference requires, then you clearly see that the ratio between CPUs and GPUs is changing, and it is changing in favor of CPUs. So overall that's a very good thing for us. It's just coming from the nature of what inference, or agentic AI, is. So that's a good thing for us. Is it going to be one to one?

Very difficult to say at this point in time. Everyone is trying to optimize the memory subsystems now. Everyone is trying to use HBM where it's really good, use LPDDR where it's really good, and use DDR and MRDIMMs where they're really good. And I would say that DDR and MRDIMMs will continue to be the workhorse of these inference AI solutions. But the fact that all of these systems, HBM, DDR, LPDDR, start to coexist is really good. They all try to resolve a different part of the AI workload.

And this plays to our strengths, because this is what we've been doing forever at Rambus. But I would say that the move to AI inference and the move to agentic AI will change the ratio in favor of CPUs, and that's good for us.

Gary Mobley

Thank you. Appreciate it.

Luc Seraphin

Thank you.

Operator

Your next question comes from the line of Sebastien Naji with William Blair. Please go ahead.

Sebastien Cyrus Naji

Thank you. Maybe for my first question, I wanted to ask about the new SOCAMM2 products that you announced last week. Could you maybe just comment on what Rambus's dollar content looks like for each SOCAMM2 module, across the different voltage regulators and the SPD hub? Any unit economics you can give us?

Luc Seraphin

You know, given the current competitive environment, I'll stay away from giving pricing on these things. But I would say that the content on a SOCAMM2, from the standpoint of Rambus, is three voltage regulators and an SPD hub, so the content is minimal. This is what I was saying earlier on one of the questions. I do believe that this is strategically important for us, because in the long run LPDDR may play a larger role, especially in next generation LPDDR solutions in the data center.

But from the content standpoint, it stays minimal and the volume stays minimal. I would leave it there.

Sebastien Cyrus Naji

Okay. Okay, that’s fair. And maybe just turning back to the RDIMMs, could we get an update on the progress you’re seeing with companion chips? How much revenue came from those companion chips in Q1? And then maybe just relatedly, how important is it for your silicon customers that they have all of these DIMM components bundled together coming from one provider, versus having to put these together from different providers?

John Allen

Yes. Thank you,

Luc Seraphin

John. Go ahead.

John Allen

Sure. The newer products, Sebastien, they’re contributing low double digit percent of our total product revenue during the first quarter. We would expect it to be roughly the same in the second quarter as we see some growth in the overall revenue contribution from that part of our business.

Luc Seraphin

Yeah, and what I would add to this is that this is steady growth, quarter over quarter. You know, you saw this, you know, in 2025, every quarter we had a slightly higher percentage. We continue to do that, and we expect to continue to do that for the second half of the year, and we expect maybe to exit the year at mid double digits of product revenue, you know, coming from our new chips. Now to your other question. It is becoming more and more important for customers to, you know, have the whole chipset from one supplier, especially as the performance requirements increase.

And the reason has to do with interoperability. You know, making sure that all of these chips on a module work well together at very, very high speed in very, very harsh environments is becoming more and more difficult to achieve. And that’s why our customers request, you know, us to have the whole solution and to help them go through these generational changes.

Sebastien Cyrus Naji

Makes a lot of sense. Thank you, Luc. Thank you, John.

Luc Seraphin

Thank you.

Operator

Your next question comes from the line of Kevin Cassidy with Rosenblatt Securities. Please go ahead.

Kevin Cassidy

Yeah, thanks for taking my question. During the quarter, as you’re building inventory, were there any orders that you had to leave on the table that you weren’t able to book because you didn’t have the inventory? There may be some upside surprise.

Luc Seraphin

No, you know, we’ve not been in that situation, but there are a few market dynamics that we have to anticipate. One is, as I said earlier, we do see supply tightening, especially on the back end. So we want to make sure that, you know, if that situation continues, we have enough supply for our customers. The second thing that is happening is that, you know, there are fast transitions between generations, and you remember we were talking about generation one moving to generation two. We indicated in the last call that, you know, generation three is ramping very, very fast.

So we want to make sure that on these new generations of products we also have enough inventory, because the ramps, you know, on the customer side can be quite steep, and we just don’t want to miss them.

Kevin Cassidy

Okay, I understand. Maybe even when you’re using your balance sheet to build more inventory, you know, when Intel reported, they said they were even able to ship some previously written-down inventory. You know, it seems like the demand for CPUs is so strong, and also DRAM, that, you know, maybe older generations will get a little bit of a revival. Is anything like that possible? Or it sounds like you’re saying everything’s shifting to Gen 3 very quickly.

Luc Seraphin

From a demand standpoint, certainly the bulk of the demand for DDR products is shifting to Gen 3. But what you’re describing in terms of, you know, using inventory of older products to serve demand is something that we continuously do and look at; that’s part of our inventory management processes.

Kevin Cassidy

Okay, great. Thank you.

Luc Seraphin

Thank you.

Operator

Your next question comes from the line of Mehdi Hosseini with SIG. Please go ahead.

Bastien Faucon-Morin

Hi Luc, this is Bastien filling in for Mehdi. My first question is on the LPDDR SOCAMM2 chipset. Would you mind clarifying the content of the chipset? It seems that the solution consists of one SPD hub and three voltage regulators. Do you expect to add any PMIC content there? And what does the pricing look like for the SPD hub and voltage regulators relative to the DDR DIMM? And I have a follow up.

Luc Seraphin

Sure. So yes, on the SOCAMM solution we have one SPD hub and two types of voltage regulators. Three voltage regulators in total, but two types: one 12-amp regulator and two 3-amp regulators. So that’s the content. So as I said, the content is minimal. You’re talking about PMICs. There’s no power management IC per se; that function is done by the voltage regulators in this generation of product. But that’s why we think it’s very strategic for us. The way we look at this is that when LPDDR6 is available, you know, that LP memory will offer even more speed and even more, you know, power capabilities.

Then it will require, you know, possibly more complex, you know, chips for power management. And we will work on those. And one can imagine as well that, you know, as the market evolves, in the longer run the market will probably need the equivalent of, you know, RCDs as well. And this is all exactly in line with our strategy. And that’s why I’m talking about the stepping stone. We want to make sure that we are early in these new technologies. They do not cannibalize the old technology.

They are complementary to them, and in the long run they have the potential to grow quite nicely, and they build on strengths that we have, which have to do with signal integrity and power integrity. Now in the short run, for the SOCAMM2 and LPDDR5X, you know, as I said, the volumes and the content, the dollar content, are going to be very low. But that’s a very interesting and strategic stepping stone for us in that area.

Bastien Faucon-Morin

Thanks Luc, that’s really helpful. And I guess my second question is on DDR5. How should we think about the timing of the ramp of Gen 4 and Gen 5 as they go to higher volume manufacturing?

Luc Seraphin

So Gen 4, you know, is going to start to ramp this year. But Gen 4 is kind of a niche generation, if you wish. It doesn’t have the same traction as Gen 1, Gen 2, Gen 3 or Gen 5. I think everyone is now waiting for Gen 5. We’re going to start shipping products that correspond to Gen 5 towards the end of the year. But just like for the MRDIMM, Gen 5 is completely dependent on the timing of the ramps of the next generation platforms from Intel and AMD. This is where they’re going to be adopted, and that’s why we do see initial volumes this year.

But the bulk of the volume, just like for the MRDIMM, is going to start in 2027.

Bastien Faucon-Morin

Got it. That’s very helpful. Thank you Luc.

Luc Seraphin

Thank you.

Operator

Your next question comes from the line of Mark Lipacis with Evercore ISI. Please go ahead.

Unidentified Participant

Great. Thanks for taking my question. A question on the DIMM attach rate. Is it different for CPUs used to perform orchestration in agentic AI, versus CPUs used in standard servers, versus CPUs that might be, you know, put next to the GPUs and the XPUs and the custom ASICs? Should we think about the attach rates differently for these?

Luc Seraphin

It’s a very good question, a very difficult question also, Mark. I would say that the way we look at it is, if you look at inference and agentic AI, the functions that have to be performed by these CPUs are closer to standard CPUs. I think the highest attach rate that you would find is really close to the GPU HBM platforms. That’s where you have the heaviest load, if you wish, for these CPUs. That’s how, at this point in time, I would compare it. I would say if you take a DGX box with GPUs and HBM, then the CPUs there are the CPUs that use the most memory in terms of capacity and bandwidth.

I would say that when you go to inference then it’s probably a little less. But it’s difficult for us at this point in time to model that.

Operator

Mark, your line is open.

Unidentified Participant

Hi, sorry, I guess my phone dropped and I don’t know if my question came through, but Luc, I was wondering, should we think about the DIMM attach rate differently for CPUs that would be used in orchestration for agentic AI, versus CPUs used in standard servers, versus CPUs that are used for inferencing that get put next to the GPUs and the ASICs and the XPUs? Is there different density there for the DIMMs?

Luc Seraphin

So it’s a very good question, Mark, but a very difficult question to answer. I would say, you know, the way we look at it at this point in time is that the highest memory capacity and bandwidth really resides, you know, close to the GPUs and these GPU HBM clusters, if you wish. That’s where, you know, you have the most need for very high capacity and very high bandwidth, which on average, you know, could be higher than what we found in, you know, inference and other solutions.

But you know, we have not modeled that at this point in time. It’s hard to model, but we do see in aggregate the fact that, you know, inference is being added to training as a very good thing. There is good traction for, you know, the use of standard DIMMs or MRDIMMs in general. The attach rate is difficult to model at this point in time.

Unidentified Participant

Got gotcha. Okay, that’s fair enough. And then the tightness, the tightness in the back end that you’re noticing, is this, do you know or can you explain what the cause of that is? Is that because of, you know, the idea that a lot of the back end happens in Southeast Asia and they procure a lot of energy from the Mideast, is that it or is it, is it capacity? Is it more like just the whole industry is in a great recovery time and the capacities, utilization rates are really ticking up. Do you have a sense of the cause of the tightness in the back end?

Luc Seraphin

There are a couple of reasons. One is the demand, especially in the data center, you know, has become very high, you know, recently. So there’s increased demand there. And the second reason is that a lot of semiconductor suppliers have moved their back end supply chains away from China to other countries in Asia. And that has put a strain on, you know, the total capacity of these back end suppliers. So it’s the combination of the two. We’ve not seen an effect yet, not yet, of, you know, the war.

You know, there are discussions about some basic elements like gas that are going to be affected, but we don’t see this yet. The main reason at this point in time is increased demand, especially in the data center, combined with, you know, semiconductor companies moving their supply chains outside of China.

Unidentified Participant

Okay, that’s really helpful. And the last question, if I may. As you think about your market share this year, are you of the view that you are a share gainer, or do you keep share flattish, or down? Like, what is your view on your ability to gain share? Thank you.

Luc Seraphin

Yeah, so we continued to gain share, you know, from ’24 to ’25. We exited ’25 at, you know, mid-40s share. There’s no indication that, you know, we’re not going to continue on that trajectory this year. The market is really at a high level, transitioning from Gen 2 to Gen 3. And our footprint in Gen 3 is really, really good as well. So, you know, there’s no sign of any erosion of the share. You know, if we add the other components, then we’ll grow faster than the market, because we add content as well to what we ship to the market.

So again, we’re very pleased with, you know, where we were in 2025. As you know, Mark, we tend to talk share on a yearly basis. You know, it can fluctuate from quarter to quarter, but we don’t see any sign of erosion of our share going into 2026.

Unidentified Participant

Gotcha. Very helpful. Thank you.

Luc Seraphin

Thank you, Mark.

Operator

At this time, there are no further questions. This concludes the question and answer session. I would now like to turn the conference back over to the company.

Luc Seraphin

Thank you to everyone who has joined us today for your continued interest and time. We look forward to speaking with you again soon. Have a good day.

Operator

Thank you. This now concludes today’s conference.
