
Arista Networks, Inc (ANET) Q4 2025 Earnings Call Transcript

Arista Networks, Inc (NYSE: ANET) Q4 2025 Earnings Call dated Feb. 12, 2026

Corporate Participants:

Rudolph Araujo, Head of Investor Advocacy

Jayshree Ullal, CEO & Chairperson

Chantelle Breithaupt, Senior VP & CFO

Kenneth Duda, Co-Founder, President, CTO & Director

Analysts:

Meta Marshall, Analyst

Samik Chatterjee, Analyst

David Vogt, Analyst

Aaron Rakers, Analyst

Amit Daryanani, Analyst

George Notter, Analyst

Benjamin Reitzes, Analyst

Timothy Long, Analyst

Karl Ackerman, Analyst

Simon Leopold, Analyst

James Fish, Analyst

Tal Liani, Analyst

Adrienne Colby, Analyst

Michael Ng, Analyst

Ryan Koontz, Analyst

Presentation:

Operator

Welcome to the fourth quarter 2025 Arista Networks financial results earnings conference call. During the call, all participants will be in a listen-only mode. After the presentation, we will conduct a question-and-answer session. Instructions will be provided at that time. If you need to reach an operator at any time during the conference, please press the star key followed by zero. As a reminder, this conference is being recorded and will be available for replay from the Investor Relations section of the Arista website. I will now turn the call over to Mr. Rudolph Araujo, Arista's Head of Investor Advocacy. You may begin.

Rudolph Araujo, Head of Investor Advocacy

Thank you, Regina. Good afternoon, everyone, and thank you for joining us. With me on today's call are Jayshree Ullal, Arista Networks' Chairperson and Chief Executive Officer, and Chantelle Breithaupt, Arista's Chief Financial Officer. This afternoon, Arista Networks issued a press release announcing the results for its fiscal fourth quarter ended December 31, 2025. If you want a copy of the release, you can access it online on our website. During the course of this conference call, Arista Networks management will make forward-looking statements, including those relating to our financial outlook for the first quarter of 2026 and the fiscal year, our longer-term business model and financial outlook for 2026 and beyond, our total addressable market and strategy for addressing these market opportunities including AI, customer demand trends, tariffs and trade restrictions, supply chain constraints, component costs, manufacturing output, inventory management and inflationary pressures on our business, lead times, product innovation, working capital optimization and the benefits of acquisitions, which are subject to the risks and uncertainties that we discuss in detail in our documents filed with the SEC, specifically in our most recent Form 10-Q and Form 10-K, and which could cause actual results to differ materially from those anticipated by these statements.

These forward-looking statements apply as of today, and you should not rely on them as representing our views in the future. We undertake no obligation to update these statements after this call. This analysis of our Q4 results and our guidance for Q1 2026 is based on non-GAAP measures and excludes all non-cash stock-based compensation impacts, certain acquisition-related charges and other non-recurring items. A full reconciliation of our selected GAAP to non-GAAP results is provided in our earnings release. With that, I will turn the call over to Jayshree.

Jayshree Ullal, CEO & Chairperson

Thank you, Rudy, and thank you, everyone, for joining us this afternoon for our fourth quarter and full year 2025 earnings call. Well, 2025 has been another defining year for Arista, with the momentum of generative AI in cloud and enterprise, and we achieved well beyond our goal, with 28.6% growth driving record revenue of $9 billion, coupled with a non-GAAP gross margin of 64.6% for the year and a non-GAAP operating margin of 48.2%. The Arista 2.0 momentum is clear as we surpassed 150 million cumulative ports shipped in Q4 2025. International growth was a highlight, with both Asia and Europe growing north of 40% annually.

As expected, we have exceeded our strategic goals of $800 million in campus and branch expansion as well as $1.5 billion in AI center networking. Shifting to annual customer sector revenue for 2025: cloud and AI titans contributed significantly at 48%, enterprise and financials recorded 32%, while AI and specialty providers, which now includes Apple, Oracle and their initiatives as well as emerging neo clouds, performed strongly at 20%. We had two greater-than-10% customer concentrations in 2025; Customers A and B drove 16% and 26% of our overall business. We cherish our privileged partnerships that have spanned 10 to 15 years of collaborative engineering.

With our ever-increasing AI momentum, we anticipate a diversified customer base in 2026, including one, maybe even two, additional 10% customers. In terms of annual 2025 product lines, our core cloud, AI and data center products, built upon our highly differentiated Arista EOS stack, are successfully deployed across 10-gigabit to 800-gigabit Ethernet speeds, with 1.6-terabit migration imminent. This includes our portfolio of EtherLink AI and our 7000 series platforms for best-in-class performance, power efficiency, high availability, automation and agility for both the front-end and back-end compute, storage and all of the interconnect zones. Of course we interoperate with Nvidia, the recognized worldwide market leader in GPUs, but we also realize our responsibility to broaden the open AI ecosystem, including leading companies such as AMD, Anthropic, ARM, Broadcom, OpenAI, Pure Storage and Vast Data, to name a few, that create the modern AI stack of the 21st century.

Arista is clearly emerging as the gold standard terabit network to run these intense training and inference models, processing tokens at teraflops. Arista's core sector drove approximately 65% of revenue. We are confident of our number one position in market share in high-performance switching according to most major industry analysts. We launched our Blue Box initiative, offering enriched diagnostics of our hardware platforms, dubbed NetDI, that can run across both our flagship EOS and our open NOS platforms. We saw an excellent uptick in 800-gig adoption in 2025, gaining greater than 100 cumulative customers for our EtherLink products, and we are co-designing several AI rack systems, with 1.6T switching emerging this year.

With our increased visibility, we are now doubling our AI networking revenue from 2025 to 2026, to $3.25 billion. Our network adjacencies market is comprised of routing, replacing routers, and our cognitive AI-driven AVA campus. Our investments in cognitive wired and wireless, zero-touch operation, network identity, scale and segmentation have received several accolades in the industry. Our open modern stacking with SWAG, Switched Aggregation Group, and our recent Vespa for Layer 2 and Layer 3 wired and wireless scale are compelling campus differentiators. Together with our recent VeloCloud acquisition in July 2025, we are driving that homogeneous, secure client-to-branch-to-campus solution with unified management domains.

Looking ahead, we are committed to our aggressive goal of $1.25 billion for 2026 for the cognitive campus and branch. We have also successfully deployed in many routing edge, core, spine and peering use cases. In Q4 2025, Arista launched our flagship 7800R4 spine for many routing use cases, including DCI and AI spines, with massive 460 terabits of capacity to meet the demanding needs of multi-service routing, AI workloads and switching use cases. The combined campus and routing adjacencies together contribute approximately 18% of revenue. Our third and final category is the network software and services based on subscription models, such as A-Care, CloudVision, observability, advanced security and even some branch edge services.

We added another 350 CloudVision customers this year, almost one new customer a day, and have deployed an aggregate of 3,000 customers with CloudVision over the past decade. Arista's subscription-based network services and software revenue contributed approximately 17%; please note that this does not include perpetual software licenses that are otherwise included in core or adjacent markets. Arista 2.0 momentum is clear. We find ourselves at the epicenter of mission-critical network transactions. We are becoming the preferred network innovator of choice for client-to-cloud and AI networking. With a highly differentiated software stack and a uniform CloudVision software foundation, we are proud to power the Warner Bros. Discovery network streaming for 47 markets in 21 languages in the pan-European Winter Olympics.

That is happening as I speak. We are now north of 10,000 cumulative customers, and I'm particularly impressed with our traction in the $5 million to $10 million customer category as well as the $1 million customer category in 2025. Arista's 2.0 vision resonates with our customers, who value us for leading that transformation from incongruent silos to reliable centers of data. The data can reside in campus centers, data centers, WAN centers or AI centers, regardless of their location. Networking for AI has achieved production scale with an all-Ethernet-based Arista AI center in 2025. We are a founding member of the Ethernet-based standards for scale-up with ESUN, as well as completing the Ultra Ethernet Consortium 1.0 specification for scale-out AI networking.

These AI centers seamlessly connect the back-end AI accelerators to the front end of compute, storage, WAN and classic cloud networking. Our AI accelerated networking portfolio, consisting of three families of EtherLink spine-leaf fabrics, is successfully deployed in scale-up, scale-out and scale-across networks. Network architectures must handle both training and inference of frontier models to mitigate congestion. For training, the key metric is obviously job completion time, the amount of time taken between admitting a training job to an AI accelerator cluster and the end of a training run. For inference, the key metric is slightly different.

It's time to first token, basically the amount of latency between a user submitting a query and receiving their first response. Arista has clearly developed a full AI suite of features to uniquely handle the fidelity of AI and cloud workloads in terms of diversity, duration and size of traffic flows and all the patterns associated with them. Our AI-for-networking strategy based on AVA, Autonomous Virtual Assist, curates the data for higher-level functions. Together with our publish-subscribe state foundation in EOS NetDL, or Network Data Lake, we instrument our customers' networks to deliver proactive, predictive and prescriptive features for enhanced security, observability and agentic AI operations.

Coupled with the Arista validated designs for network simulation, digital twin and validation functionality, Arista platforms are perfectly optimized and suited for network-as-a-service. Our global relevance with customers and channels is increasing. In 2025 alone, we conducted three large customer events across three continents, Asia, Europe and the United States, and many other smaller ones. Of course, we touched 4,000 to 5,000 strategic customers and partners in the enterprise. While many customers are struggling with their legacy incumbents, Arista is deeply appreciated for redefining the future of networking. Customers have long appreciated our network innovation and quality, demonstrated by our highest net promoter score of 93% and lowest security vulnerabilities in the industry. We now see the pace of acceptance and adoption accelerating in the enterprise customer base.

Our leadership team, including our newly appointed co-presidents Ken Duda and Todd Nightingale, has driven strategic and cohesive execution. Tyson Lamoreaux, our newest Senior Vice President, who joined us with deep cloud operator experience, has ignited our hypergrowth across our AI and cloud titan customers. Exiting 2025, we are now at approximately 5,200 employees, which also includes the recent VeloCloud acquisition. I am incredibly proud of the entire Arista A-team and thank all our employees for their dedication and hard work. Of course, our top-notch engineering and leadership team has always steadfastly prioritized our core Arista Way principles of innovation, culture and customer intimacy.

Well, I think you would agree that 2025 has indeed been a memorable year, and we expect 2026 to be a fantastic one as well. We are amid unprecedented networking demand, with a massive and growing TAM of $100-plus billion. And so, despite all the news on mounting supply chain allocation and the rising cost of memory and silicon fabrication, we are increasing our 2026 guidance to 25% annual growth, accelerating now to $11.25 billion. And with that happy news, I turn it over to Chantelle, our CFO.

Chantelle Breithaupt, Senior VP & CFO

Thank you, Jayshree, and congratulations to you and our employees on a terrific 2025. As you outlined, this was an outstanding year for the company, and that strength is clearly reflected in our financial results. Let me walk through the details. To start off, total revenues in Q4 were $2.49 billion, up 28.9% year over year and above the upper end of our guidance of $2.3 billion to $2.4 billion. It was great to see that all geographies achieved strong growth within the quarter. Services and subscription software contributed approximately 17.1% of revenue in the fourth quarter, down from 18.7% in Q3, which reflects the normalization following some non-recurring VeloCloud service renewals in the prior quarter.

International revenues for the quarter came in at $528.3 million or 21.2% of total revenue, up from 20.2% last quarter. This quarter over quarter increase was driven by a stronger contribution from our large global customers across our international markets. The overall gross margin in Q4 was 63.4%, slightly above the guidance of 62 to 63% and down from 64.2% in the prior year. This year over year decrease is due to the higher mix of sales to our cloud and AI titan customers in the quarter. Operating expenses for the quarter were $397.1 million or 16% of revenue, up from the last quarter at $383.3 million.

R&D spending came in at $272.6 million, or 11% of revenue, up from 10.9% last quarter. Arista continued to demonstrate its commitment and focus on networking innovation, with fiscal year 2025 R&D spend at approximately 11% of revenue. Sales and marketing expense was $98.3 million, or 4% of revenue, down from $109.5 million last quarter. FY25 closed the year with sales and marketing at 4.5%, representative of the highly efficient Arista go-to-market model. Our G&A costs came in at $26.3 million, or 1.1% of revenue, up from $22.4 million last quarter, reflecting continued investment in systems and processes.

For fiscal year 2025, G&A expense held at 1% of revenue as we scale Arista 2.0. Our operating income for the quarter was $1.2 billion, or 47.5% of revenue. This strong Q4 finish contributed to an operating income result for fiscal year 2025 of $4.3 billion, or 48.2% of revenue. Other income and expense for the quarter was a favorable $102 million, and our effective tax rate was 18.4%. This lower-than-normal quarterly tax rate reflected the release of tax reserves due to the expiration of the statute of limitations. Overall, this resulted in net income for the quarter of $1.05 billion, or 42% of revenue.

It is exciting to see Arista delivering over $1 billion in net income for the first time. Congratulations to the Arista team on this impressive achievement. Our diluted share number was 1.276 billion shares, resulting in diluted earnings per share for the quarter of $0.82, up 24.2% from the prior year. For fiscal year 2025, we are pleased to have delivered diluted earnings per share of $2.98, a 28.4% increase year over year. Now, turning to the balance sheet: cash, cash equivalents and marketable securities ended the quarter at approximately $10.74 billion. In the quarter, we repurchased $620.1 million of our common stock at an average price of $127.84 per share.

Within fiscal 2025, we repurchased $1.6 billion of our common stock at an average price of $100.63 per share. Of the $1.5 billion repurchase program approved in May 2025, $817.9 million remains available for repurchase in future quarters; the actual timing and amount of future repurchases will be dependent on market and business conditions, stock price and other factors. Now, turning to operating cash performance for the fourth quarter: we generated approximately $1.26 billion of cash from operations in the period. This result was an outcome of strong earnings performance, with an increase in deferred revenue offset by an increase in accounts receivable driven by higher shipments and end-of-quarter service renewals.

DSOs came in at 70 days, up from 59 days in Q3 driven by renewals and the timing of shipments in the quarter. Inventory turns were 1.5 times, up from 1.4 last quarter. Inventory increased marginally to $2.25 billion, reflecting diligent inventory management across raw and finished goods. Our purchase commitments at the end of the quarter were $6.8 billion, up from $4.8 billion at the end of Q3. As mentioned in prior quarters, this expected activity mostly represents purchases for chips related to new products and AI deployments. We will continue to have some variability in future quarters due to the combination of demand for our new products, component pricing such as the supply constraint on DDR4 memory and the lead times from our key suppliers.

Our total deferred revenue balance was $5.4 billion, up from $4.7 billion in the prior quarter. In Q4, the majority of the deferred revenue balance is product related. Our product deferred revenue increased approximately $469 million versus last quarter. We remain in a period of ramping our new products, winning new customers and expanding new use cases, including AI. These trends have resulted in increased customer-specific acceptance clauses and an increase in the volatility of our product deferred revenue balances. As mentioned in prior quarters, the deferred balance can move significantly on a quarterly basis, independent of underlying business drivers. Accounts payable days were 66 days, up from 55 days in Q3, reflecting the timing of inventory receipts and payments.

Capital expenditures for the quarter were $37 million. In October 2024, we began our initial construction work to build expanded facilities in Santa Clara and incurred approximately $100 million in capex during fiscal year 2025 for this project. As we moved through 2025, we gained visibility and confidence for fiscal year 2026. As Jayshree mentioned, we are now pleased to raise our 2026 fiscal year outlook to 25% revenue growth, delivering approximately $11.25 billion. We maintain our 2026 campus revenue goal of $1.25 billion and raise our AI center goal from $2.75 billion to $3.25 billion. For gross margin, we reiterate the range for the fiscal year of 62% to 64%, inclusive of mix and anticipated supply chain cost increases for memory and silicon.

In terms of spending, we expect to continue to invest in innovation, sales and scaling the business to ensure our status as a leading pure-play networking company. With our increased revenue guidance, we are now confident to raise the operating margin outlook to approximately 46% in 2026. On the cash front, we will continue to work to optimize our working capital investments, with some expected variability in inventory due to the timing of component receipts on purchase commitments. Our structural tax rate is expected at 21.5%, back to the usual historical rate, up from the lower rate of 18.4% experienced last quarter, Q4 2025.

With all of this as a backdrop, our guidance for the first quarter is as follows: revenues of approximately $2.6 billion, gross margin between 62% and 63%, and operating margin at approximately 46%. Our effective tax rate is expected to be approximately 21.5%, with approximately 1.275 billion diluted shares. In closing, at our September Analyst Day we had a theme of building momentum, and we are doing just that. In the campus, WAN, data and AI centers, we are uniquely positioned to deliver what customers need. We will continue to deliver both our world-class customer experience and innovation. I am enthusiastic about our fiscal year ahead.

Now, back to you, Rudy, for the Q and A.

Rudolph Araujo, Head of Investor Advocacy

Thank you, Chantelle. We will now move to the Q and A portion of the Arista earnings call. To allow for greater participation, I'd like to request that everyone please limit themselves to a single question. Thank you for your understanding. Regina, please take it away.

Questions and Answers:

Operator

We will now begin the Q and A portion of the Arista earnings call. To ask a question at this time, simply press star and then the number one on your telephone keypad. If you would like to withdraw your question, press star and the number one again. Please pick up your handset before asking questions to ensure optimal sound quality. Our first question will come from the line of Meta Marshall with Morgan Stanley. Please go ahead.

Meta Marshall

Great, and congratulations on the quarter. I guess in terms of the commentary you had, Jayshree, on the one or two additional 10% customers, just digging more into that, what are the puts and takes? You know, is it bottlenecks in terms of their building? What would make or break whether those become two new additional 10% customers? Thank you.

Jayshree Ullal

Thank you, Meta, for the good wishes. So obviously, if I didn't have confidence, I wouldn't dare to say that, would I? But there are always variables. Some of it may be sitting in deferred, so there's an acceptance criteria that we have to meet, and there's also timing associated with meeting the acceptance criteria. Some of it is demand that is still underway, and you know, in this age of all the supply chain allocation and inflation, we've got to be sure we can ship.

So we don’t know if it’s exactly a 10% or high single digits or low double digits, but a lot of variables will decide that final number. But certainly the demand is there.

Meta Marshall

Great, thank you. Thank you.

Operator

Our next question will come from the line of Samik Chatterjee with JPMorgan. Please go ahead.

Samik Chatterjee

Hi, thanks for taking my question. And Jayshree, congrats on the quarter and the outlook. I don't want to suggest that the 25% growth is not impressive, but since roughly 30% is what the guidance implies for 1Q, maybe help me understand what's leading to somewhat of a caution in terms of visibility for the rest of the year. Is it these one to two new customers and their ramps that you're more cautious about, or is it the availability of supply, some of the components or memory, that's giving you a bit more cautiousness about the visibility for the remainder of the year? If you could help us understand the drivers there.

Jayshree Ullal

Yeah, thank you, Samik. First, I don't think I'm being cautious. I think I went all out to give you a high dose of reality, but I understand your views on caution. Given all the capex numbers you see from customers, it's important to understand that we don't track the capex. The first thing that happens with that capex is they've got to build the data centers, get the power, and get all of the GPUs and accelerators, and the network lags a little. So demand is going to be very good. But whether the shipments exactly fall into 2026 or 2027 (Todd, you can clarify when they really fall in), there are a lot of variables there.

That's one issue. The second, as I said, is that a large amount of these are new products and new use cases highly tied to AI, where customers are still in their first innings. So again, I'm giving you the greatest visibility I can, fairly early in the year, on the reality of what we can ship, not what the demand might be. It might be multi-year demand that ships over multiple years. So let's hope it continues. But of course, you must understand that we're also facing a law of large numbers. So 25% on a base of now $9 billion, when we started last year at $8.25 billion, is a really, really early and good start.

Samik Chatterjee

Thank you.

Operator

Our next question will come from the line of David Vogt with UBS. Please go ahead.

David Vogt

Great. Thanks, guys, for taking my question. Maybe Chantelle and Jayshree, can you help quantify both the revenue impact and the potential gross margin impact embedded in your guide from the memory dynamics and the constraints? I know you mentioned it last quarter and again this quarter; obviously, the supply chain does have some constraints. When you think about what you just called the real outlook that you see, maybe can you help parameterize what you think could hold you back, if that's the way to phrase it, and just give us a sense for what upside could be in a perfect world, effectively, if you could share that.

Jayshree Ullal

I'm going to give some general commentary, and Chantelle, if you don't mind adding to it. You know, our peers in the industry have been facing this probably longer than we have, because I think the server industry probably saw it first; they're more memory intensive. Add to that that we're expecting increases from silicon fabrication, where all the chips are made, as you know, essentially with one company, Taiwan Semiconductor. So Arista has taken a very thoughtful approach, being aware of this since 2025, and frankly we absorbed a lot of the costs we were incurring in 2025. However, in 2026 the situation has worsened significantly.

We're having to smile and take it at just about any price we can get, and the prices are horrendous; they're exponentially higher, an order of magnitude. So clearly, with the situation worsening and also expected to last multiple years, while we are experiencing shortages in memory, thankfully, as you can see reflected in our purchase commitments, we are planning for this. And I know that memory is now the new gold for the AI and automotive sectors, but clearly it's not going to be easy. It's going to favor those who planned and those who can spend the money for it.

Chantelle Breithaupt

Yeah, and the only thing I'd add to your question, David, and thank you for that, is that we're comfortable in the guide, and that's why we raised the numbers that we did. We're comfortable; we have a path to there within the numbers we provided. The range of 62% to 64%, I think we are pleased to hold, despite this kind of pressure coming into it. You know, this has been our guide since September at our Analyst Day. So we're pleased to hold that guide and find ways to mitigate this journey.

Now, whether it ends up being, you know, 62.5% versus 63.5% within that range, that's where we'll continue to update you, but the range we're comfortable with.

David Vogt

Understood. Thanks guys.

Jayshree Ullal

Thank you, David.

Operator

Our next question comes from the line of Aaron Rakers with Wells Fargo. Please go ahead.

Aaron Rakers

Yeah, thanks for taking the question, and congrats as well on the quarter and the guide. I guess when we think about the $3.25 billion guide for the AI contribution this year, I'm curious, Jayshree, how much you're factoring in, if any, from the scale-up networking opportunity. Is that still more of a 2027 story? And also, can you unpack, ex the AI and ex the campus contribution, it appears that you're still guiding pretty muted, low-single-digit growth on non-AI. Just curious how you see the non-AI and campus growth.

Jayshree Ullal

Okay, yeah. Well, you know, a rising tide raises all boats, but some go higher and some go lower. But to answer your specific question on scale-up: we have consistently described that today's configurations are mostly a combination of scale-out and scale-up, largely based on 800 gig and a smaller radix. Now the ESUN specification is well underway, and Ken Duda, I think the spec will be done in a year, or this year for sure; Ken and Hugh Holbrook are actively involved in that. We need a good, solid spec; otherwise we'll be shipping proprietary products like some people in the world do today. And so we will tie our scale-up commitment greatly to the availability of new products and the new ESUN spec, which we expect at the earliest to be Q4 this year. Therefore, for the majority of the year we'll be in trials, where Andy Bechtolsheim and the team are working on a lot of active AI racks with scale-up in mind.

But the real production level will be in 2027, primarily centered around not just 800 gig but 1.6T.

Chantelle Breithaupt

And I think that.

Jayshree Ullal

Thank you, Aaron.

Operator

Our next question will come from the line of Amit Daryanani with Evercore ISI. Please go ahead.

Amit Daryanani

Yep, thanks a lot, and congrats from my end as well on some really good numbers here. Jayshree, some of these model builders, like Anthropic, that I think you folks have talked about, they're starting to build these multibillion-dollar clusters on their own now. Can you talk about your ability to participate in some of these build-outs as they happen, be that on the DCI side or maybe even beyond that? And by extension, does this give you an opportunity to ramp up with some of the larger cloud companies that these model builders are partnering with over time as well, as they build out TPU or Trainium clusters? I'd love to just understand how that kind of business scales up for you folks.

Thank you.

Jayshree Ullal

Yeah, no, Amit, that's a very thoughtful question, and I think you're absolutely right. The network infrastructure is playing a critical role with these model builders in a number of ways. If you look at us, initially we were largely working with, you know, one or two model builders and one or two accelerators, Nvidia and AMD, and OpenAI was primarily the dominant one. But today we see that there are really multiple layers in the cake, where you've got the GPU accelerators, and of course you've got power as the most difficult thing to get. But Arista needs to deal with multiple domains and model builders appropriately, whether it is Gemini or xAI or Anthropic Claude or OpenAI and many more coming, and the multi-protocol, algorithmic nature of these models is something we have to make sure we build the network correctly for.

So that's one. And to your second point, you're absolutely right. I think the biggest thing is not only the model builders, but that they're no longer in silos in one data center; you're going to see them across multiple colos, multiple locations and multiple partnerships with our cloud titan customers, in ways we've historically not worked. So I think you'll see more copilot versions of it, if you will, with a number of our cloud titans. So we expect to work with them as AI specialty providers, but we also expect to work with our cloud titans in bringing the cloud and AI together.

Amit Daryanani

Thank you.

Jayshree Ullal

Thank you, Amit.

Operator

Our next question comes from the line of George Notter with Wolfe Research. Please go ahead.

George Notter

Hi guys, thanks very much. I was just curious about the product deferred revenue and how you see that coming off the balance sheet; obviously, it's just been stacking up here quarter after quarter. So a few questions. Does it come off in big chunks that we'll see in different quarters in the future? Does it come off more gradually? Does it continue to build? What does the profile look like for that product deferred coming off the balance sheet and pulling through the P&L? And then I'm also curious how much product deferred you have in the full-year revenue guidance, the 25%.

Thanks a lot.

Chantelle Breithaupt

Yeah. Hey George, thanks for the questions. Not much has changed in the sense of how we have this conversation. What goes into deferred is new products, new customers, new use cases. The great new use case is AI. The acceptance criteria for the larger deployments is 12 to 18 months; some can be as short as six months. So there’s wide variety that goes in. Deferred has balances coming in and out every quarter. We don’t guide deferred, and we don’t get product specific. What I can tell you on your questions is that there will be times where there are larger deployments that will feel a little lumpier as we go through.

But again, it’s a net release of a balance. So it depends what comes in at that same quarter. It’s timing.

George Notter

Got it. Okay. Any sense for what’s in the full year guide then? I assume not much. Is that fair to say?

Chantelle Breithaupt

It’s super hard, George. It’s when the acceptance criteria happens. You know, if it happens December 31st, it’s a different situation than if it all happens in, you know, Q2, Q3, Q4. So that’s something we really have to work through with the customer. So sorry that we’re not able to be clairvoyant on that.

George Notter

That makes sense. Thank you.

Jayshree Ullal

Thank you.

Chantelle Breithaupt

Thank you.

operator

Our next question comes from the line of Ben Reitzes with Melius Research. Please go ahead.

Benjamin Reitzes

Hey, thanks a lot, and I guess my congrats too, guys. This execution and guide is really something. (Jayshree: So I wanted to thank you, Ben.) You’re welcome. I wanted to ask about two things. I was wondering if you could talk a little bit more about your neocloud momentum and what that is looking like in terms of materiality. And then also, if you don’t mind, touching on AMD: with the launch, we’re kind of hearing about you getting a lot of networking attached to the 450-type product, their new chips. Wondering if that is a catalyst or not as you go throughout the year.

Thanks so much.

Jayshree Ullal

Yeah. So Ben, as you can imagine, the specialty cloud providers have historically been a cacophony of many types of providers. We are definitely seeing AI as one of the clear inflections. It used to be content providers and tier 2 cloud providers, but AI is clearly driving that section, and it’s a suite of customers, some of whom have real financial strength and are looking now to invest, increase, and pivot to AI. So the rate at which they pivot to AI will greatly define how well we do there. And, you know, they’re not yet titans, but they want to be, or could be, titans, is the way to look at it.

So we’re going to invest with them, and these are healthy customers. It’s nothing like the dot-com era. We feel good about that. There is a set of neoclouds that we watch more carefully, because some of them are, you know, oil money converted into AI or crypto money converted into AI. And over there we are going to be much more careful, because some of those neoclouds are, you know, looking at Arista as the preferred partner, but we would also be looking at the health of the customer. Or they may just be one-time; we don’t know the exact nature of their business, and those will be smaller and don’t contribute in large dollars.

But they are becoming increasingly plentiful in quantity, even if they’re not yet large in dollars. So I think you’re seeing this dichotomy of two types in that category, or three types: the classic CDN and security specialty providers and tier 2 clouds, the AI specialty providers that are going to lean in and invest, and then the neoclouds in different geographies.

Benjamin Reitzes

The AMD question?

Jayshree Ullal

Oh yes, the AMD question. You know, a year ago I think I said this to you, but I’ll repeat it: a year ago it was pretty much 99% NVIDIA. Right. Today, when we look at our deployments, we see about 20%, maybe a little more, 20 to 25%, where AMD is becoming the preferred accelerator of choice. And in those scenarios Arista is clearly preferred, because they’re building best-of-breed building blocks for the NIC, for the network, for the I/O, and they want open standards as opposed to full-on vertical stacks from one vendor. So you’re right to point out AMD, and in particular it’s a joy to work with Lisa and Forrest and the whole team, and we do very well in those multi-vendor, open considerations.

operator

Our next question will come from the line of Tim Long with Barclays. Please go ahead.

Timothy Long

Thank you. Yeah, appreciate all the color, Jayshree. Maybe we could touch a little bit on scale-across. It’s obviously gotten a lot of attention, particularly on the optics layer, from some others in the industry. Obviously you guys have been in DCI, which is kind of a similar type of technology. But curious what you think as far as Arista’s participation in more of these next-gen scale-across networks. And is this something that would be good for, like, a blue box type of product, or would that more be in the scale-up? So if you could give a little color there, that would be great.

Jayshree Ullal

Right. Okay. So most of our participation today we thought would be scale-out, but what we are finding, due to the distributed nature of where and how they can get the power and the bisectional bandwidth growth, is that essentially the throughput of scale-out or scale-across is all about how much data you can move. Right? As the workloads become more and more complex, you have to make them more and more distributed, because you just can’t fit them in one data center, both from a power and a bandwidth throughput capacity standpoint. Also, these GPUs are trying to minimize the collective degradation.

So as you scale up or out, the communication patterns become very much a bottleneck. And one way to solve it is to extend this across data centers, both through fiber and, as you rightly pointed out, very high injection bandwidth DCI routing. And then there’s the sustained real-world utilization you need across all of these. So for all these reasons we are pleasantly surprised with the role of coherent long-haul optics, which we don’t build, but we have worked very closely in the past with companies that do, and they’re seeing the lift. And the 7800 spine chassis is the flagship platform and preferred choice that has been designed by our engineering team, now for several years, for this robust configuration.

So less blue box there, and much, much more of a full-on Arista flagship box, with all of the virtual output queuing and buffering to interconnect regional data centers with extremely high levels of routing and high availability too. So this really leans into everything Arista stands for, coming all together in a universal AI spine.

Timothy Long

Okay, excellent. Thank you, Jayshree.

Jayshree Ullal

Thank you.

operator

Our next question will come from the line of Kari Ackerman with BNP Paribas. Please go ahead.

Kari Ackerman

Yes, thank you. Agentic AI should support an uptick in conventional server CPUs, where your switches have high share within data centers. And so, given your upwardly revised outlook of 25% growth for this year, could you speak to the demand prospects you are seeing for front-end high-speed switching products that address agentic AI? Thank you.

Jayshree Ullal

Yeah, exactly, Karl. Let’s just go back in history. It’s not that long ago, three years ago, that we had no AI. We were staring at InfiniBand being deployed everywhere in the back end, and we pretty much characterized our AI as only back end, just to be pure about it. Right. Three years later, I’m actually telling you we might do north of $3 billion this year and growing. Right. That number definitely includes the front end as it’s tied to the back-end GPU clusters, and it’s all-Ethernet, all-AI systems for agentic AI applications.

Now, a lot of the agentic AI applications are mostly running with some of our largest cloud AI and specialty providers. But I don’t rule out the possibility that you could see this in our numbers with north of 8,800-gig customers, and much of that is going to feed into the enterprise as well, as agentic AI applications come for genomic sequencing, science, you know, automation of software. I don’t think any of us can believe that AI is eating software, but AI is definitely enabling better software. Right. And we’re certainly seeing that in Ken’s team as well, in our adoption of that.

So the rise of agentic AI will only increase not just the GPU but all gradations of XPUs that can be used in the back end and front end.

Kari Ackerman

Thank you.

Jayshree Ullal

Thank you, Karl.

operator

Our next question comes from the line of Simon Leopold with Raymond James. Please go ahead.

Simon Leopold

Thank you very much for taking the question. I wanted to come back to the issue around what’s going on with the memory market. Two aspects to this. One, I’m wondering how much of a tool price hikes have been, you raising your prices to customers. And two, whether or not, within the substantial amount of purchase commitments you have, there’s a significant aspect of memory in there, so that you’ve effectively pre-purchased memory at much lower prices than the spot market today. Thank you.

Jayshree Ullal

Thank you. Okay, I wish I could tell you we did purchase all that memory that we needed. No, we didn’t. But while our peers in the industry have done multiple price hikes already, especially those in the server market or memory-intensive switches, we have clearly been absorbing it. And memory is in our purchase commitments, but so is everything else; the entire silicon portfolio is in our purchase commitments due to some of the supply chain reactions. Todd and I have been reviewing this, and we do believe there will be a one-time increase on selected, especially memory-intensive, SKUs to deal with it. We cannot absorb it if prices keep going up the way they have in January and February.

And I would tell you that all the purchase commitments we have, in Chantelle’s current commitments, are not enough. We need more memory.

Simon Leopold

Thank you.

operator

Our next question will come from the line of James Fish with Piper Sandler. Please go ahead.

James Fish

Hey, ladies, great quarter, great end of the year. Jayshree, are hyperscalers getting nervous now at all and ordering ahead? What’s your sense of pull-in of demand potentially here, including for your own blue box initiative? And Chantelle, for you, just going back to George’s question: I know it’s difficult to answer, but are you anticipating that that product deferred revenue is going to continue to grow through the year? Or is it just way too difficult to predict, since you’ve got customers that could just say, you know, we accept, ship them all now, and so we end up with a big quarter but deferred down.

Jayshree Ullal

I’m going to let Chantelle answer the difficult question over and over again.

Chantelle Breithaupt

Happy to. Thank you, James, I appreciate it. So for deferred generally, we don’t guide deferred. But to try to give you more insight, back to George’s question: there will be certain deployments that get accepted and released. But the part that’s difficult is what comes into the balance. Right, James? So I can’t guide. That would be a wild guess on what’s going to go in, which is not prudent, I think, from my perspective. So we’ll continue to mention what’s in it. We’ll continue to show you through the balances.

We’ll talk about it in the script in the sense of the movement. But that’s probably as much as I can tell you with, you know, a responsible answer looking forward.

Jayshree Ullal

James, this is one of those times. No matter how many times you ask us this question in several different ways, the answer doesn’t change. Okay.

James Fish

I mean we’re all. Insanity is doing the same thing over and over again.

Jayshree Ullal

Yeah. So on the hyperscalers, are they getting nervous? I don’t think they’re getting nervous. You know, you’ve seen what a strong business they have, how much cash they put out and how successful they are. But I do think they’re working more closely with us. Typically we had three to six months’ visibility; we’re getting greater than that.

operator

Our next question will come from the line of Tal Liani with Bank of America. Please go ahead.

Tal Liani

Hi guys. I almost have the same question I asked you last quarter. I’ll explain. You increased the guidance, but the entire increase in the guidance is basically the cloud. And it’s very simple to dissect your numbers: if I remove campus and I remove cloud, and you provide these two numbers for both ’25 and ’26, the rest of the business, which is 60% of the business, you guide to grow 0%. And in previous years, I can make estimates, it was anywhere from 10% to 30% growth. So the question is, why are you guiding this way, that 60% of the business is not going to grow? Is it because of conservatism?

Jayshree Ullal

No. Can I pause you there? Because I know you like to dissect our math several different ways and come up with conclusions. We’re not guiding that our business is going to be flat, or that we’re not going to grow here or grow there. But generally, when something is very fast-paced and growing, then other things grow less. And exactly whether it will be flat or grow double digits or single digits, Tal, it’s February. I don’t know what the rest of the year will be. Okay, so

Tal Liani

that’s the question. The question is, is there allocation here? Meaning, let’s say you have only a set number of, you know, memory slots, so you allocate it to cloud and then the rest of the business doesn’t get any. Or is it just conservatism and lack of visibility?

Jayshree Ullal

It’s neither of the above. We don’t allocate to our customers; it’s first in, first served. And in fact, the enterprise customers get a very high sense of priority, as do our cloud customers. Customers come first. But allocation of memory may put us in a situation where demand is greater than our ability to supply. We don’t know; it’s too early in the year. We’re confident that we can guide, six months after analyst day, to a higher number, but we don’t know what the next four quarters will look like to the precision you’re asking for.

Tal Liani

Got it. Thank you.

Jayshree Ullal

Thank you.

operator

Our next question comes from the line of Atif Malik with Citi. Please go ahead.

Adrienne Colby

Hi, it’s Adrienne Colby for Atif. Thank you for taking my question. I was hoping to ask for an update on Arista’s four large AI customers. I know that fourth customer you talked about was a bit slower to ramp to 100,000 GPUs. Just wondering if you can update us on their progress there, and perhaps what’s next for the other few customers that have already crossed that threshold. And lastly, is there any indication that the fifth customer that ran into funding challenges might come back to you?

Jayshree Ullal

Okay, Adrienne, I’ll give you some updates. I’m not sure I have precise updates, but all four customers are deploying AI with Ethernet, so that’s the good news. Three of them have already deployed a cumulative 100,000 GPUs and are now growing from there, clearly migrating beyond pilots and production to other centers, power being the biggest constraint. Our fourth customer is migrating from InfiniBand, so it’s still below 100,000 GPUs at this time, but I fully expect them to get there this year, and then we shall see how they go beyond that.

operator

Our next question will come from the line of Michael Ng with Goldman Sachs. Please go ahead.

Michael Ng

Hey, good afternoon. Thank you for the question. I just have one and one follow-up. First, I was wondering if you could talk a little bit about the new customer segmentations that you guys unveiled, with cloud and AI, and AI and specialty. What’s the philosophy around that? And does that kind of signal more opportunity in places like Oracle and the neoclouds? And then second, with cloud and AI at 48% of revenue and A and B at a combined 36%, you have 12% left over. Is that a hyperscale customer? Does it kind of imply that you have a new hyperscaler that is approaching 10%? Because obviously we thought that the next biggest one would have been Oracle, but that’s moved out of cloud now.

So any thoughts There would be great. Thank you.

Jayshree Ullal

Yeah, sure, Michael. Well, first of all, my math is 26 and 16, so it’s 42, so I don’t have 12%. Unless you had 58, it’s really only 6%. So on the cloud and AI titans, the way we classified that is: significantly large-scale customers with greater than a million servers, greater than 100,000 GPUs, an R&D focus on models, and sometimes even their own XPUs. And this can of course change, and some others may come into it, but it’s a very select few set of customers, you know, less than five, or about five. That’s the way to think of it.

Right. On the change to the specialty cloud, as I said, we’re noticing that some customers are really, really focused solely on AI with some cloud, as opposed to cloud with some AI. So when it’s heavily AI-centric, especially with Oracle’s AI acceleration and the multi-tenant partnerships they’ve created, they have naturally got a dual personality, some of which is OCI, the Oracle Cloud, but some of it is really fully AI-based. So the shift in their strategy made us shift the category and bifurcate the two.

Michael Ng

Thank you, Jayshree.

Jayshree Ullal

Thank you.

Rudolph Araujo

Regina, we have time for one last question.

operator

Our final question will come from the line of Ryan Koontz with Needham and Company. Please go ahead.

Ryan Koontz

Great. Thanks for squeezing me in. Jayshree, in your prepared remarks you talked about your telemetry capabilities, and I wonder if you could expand on that and discuss where you see key differentiation in that: what sorts of use cases you’re able to really seize the upper hand on competitively with your telemetry capabilities. Thank you.

Jayshree Ullal

Yeah, I’m going to say some, and I think Ken, who’s been designing this and working on it, will say even more. That’s Ken Duda, our President and CTO. So, telemetry is at the heart of both our EOS software stack as well as our CloudVision for enterprise customers. We have real-time streaming telemetry that has been with us since the beginning of time, and it’s constantly keeping track of all our switches. It isn’t just a pretty management tool, and at the same time our cloud customers and AI customers are seeking some of that visibility too.

And so we have developed some deeper AI capabilities for telemetry as well. Over to you, Ken, for some more detail.

Kenneth Duda

Yeah, no, thanks for that question, that’s great. Look, the EOS architecture is based on state orientation. This is the idea that we capture the state of the network and then stream that state out from the system database on the switches into whatever telemetry system can receive it. And we’re extending that capability for AI with a combination of in-network data sources related to flow control, RDMA counters, buffering and congestion counters, and also host-level information, including what’s going on in the RDMA stack on the host, what’s going on with collectives, latencies, and any flow control or buffering problems in the host NIC.

Then we pull all that information together in CloudVision and give the operator a unified view of what’s happening in the network and what’s happening in the host. And this greatly aids our customers in building an overall working solution, because the interactions between the network and the host can be complicated and difficult to debug when different systems are collecting them.

Jayshree Ullal

Great job, Ken. I can’t wait for that product.

Ryan Koontz

Thank you.

Rudolph Araujo

This concludes Arista Networks’ fourth quarter 2025 earnings call. We have posted a presentation that provides additional information on our results, which you can access in the investor section of our website. Thank you for joining us today and for your interest in Arista.

operator

Thank you for joining. Ladies and gentlemen, this concludes today’s call. You may now disconnect.
