DOCN (2025 - Q2)

Release Date: Aug 05, 2025

...

Stock Data provided by Financial Modeling Prep

Current Financial Performance

DigitalOcean Q2 2025 Highlights

$219M
Revenue
+14%
$89M
Adjusted EBITDA
+10%
$57M (26% of revenue)
Adj. Free Cash Flow
$0.59
Non-GAAP EPS
+23%

Key Financial Metrics

Profitability & Margins

60%
Gross Margin
41%
Adjusted EBITDA Margin
26%
Adj. Free Cash Flow Margin
99%
Net Dollar Retention
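The margin figures above are each metric divided by revenue. A quick arithmetic sketch using the dollar amounts reported in this summary (variable names are illustrative, not from the source):

```python
# Q2 2025 figures from the summary above, in $M.
revenue = 219
adj_ebitda = 89
adj_fcf = 57

ebitda_margin = adj_ebitda / revenue * 100  # ~40.6%, reported rounded to 41%
fcf_margin = adj_fcf / revenue * 100        # ~26.0%, reported as 26%

assert round(ebitda_margin) == 41
assert round(fcf_margin) == 26
```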

Customer Count

Approx. 638,000

Builders & Scalers grew 7% YoY

ARPU

$99.45
+9%

Period Comparison Analysis

Revenue

$219M
Current
Previous: $211M
+3.8% QoQ

Revenue

$219M
Current
Previous: $192.5M
+13.8% YoY

Adjusted EBITDA

$89M
Current
Previous: $86M
+3.5% QoQ

Adjusted EBITDA Margin

41%
Current
Previous: 42%
-2.4% QoQ

Gross Margin

60%
Current
Previous: 61%
-1.6% QoQ

Non-GAAP EPS

$0.59
Current
Previous: $0.56
+5.4% QoQ

Net Dollar Retention

99%
Current
Previous: 100%
-1 pp QoQ
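The period deltas above are ordinary percentage changes between the reported figures. A short sketch reproducing them (helper name is illustrative):

```python
def pct_change(current, previous):
    """Percentage change from the previous period to the current one."""
    return (current - previous) / previous * 100

# Figures from the comparison blocks above (revenue in $M, EPS in $).
assert round(pct_change(219, 211), 1) == 3.8     # Revenue vs. prior quarter
assert round(pct_change(219, 192.5), 1) == 13.8  # Revenue vs. prior year
assert round(pct_change(0.59, 0.56), 1) == 5.4   # Non-GAAP EPS vs. prior period
```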

Earnings Performance & Analysis

AI/ML Revenue Growth

100%+ YoY

Strong AI/ML demand

Scalers+ Revenue Growth

35% YoY

24% of total revenue

Incremental ARR

$32M

Highest in 3+ years
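Incremental ARR is the quarter-over-quarter change in annual run-rate revenue. Using the $875M ARR figure reported later in the transcript, the implied prior-quarter ARR is a derived number, not a reported one:

```python
# $M; Q2 ARR of $875M and incremental ARR of $32M are reported in the transcript.
q2_arr = 875
incremental_arr = 32

# Prior-quarter ARR implied by those two figures (derived, not reported).
implied_q1_arr = q2_arr - incremental_arr
assert implied_q1_arr == 843
```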

Financial Guidance & Outlook

2025 Revenue Guidance

$888M - $892M
Raised from prior guidance

2025 Adjusted EBITDA Margin Guidance

39% - 40%
Raised from prior guidance

2025 Adj. Free Cash Flow Margin Guidance

17% - 19%
Raised from prior guidance

Q3 2025 Revenue Guidance

$226M - $227M
Current

Q3 2025 Adj. EBITDA Margin Guidance

39% - 40%
Current

Q3 2025 Non-GAAP EPS Guidance

$0.45 - $0.50
Current
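The "approximately 14% year-over-year growth at the midpoint" framing used elsewhere in this document follows from the full-year range above. A sketch of the midpoint arithmetic; the implied prior-year base is a derived number, not a reported one:

```python
# Full-year 2025 revenue guidance range, in $M.
low, high = 888, 892
midpoint = (low + high) / 2        # $890M

# Prior-year base implied by ~14% YoY growth at the midpoint (derived).
implied_fy2024 = midpoint / 1.14   # ~ $781M

assert midpoint == 890
assert round(implied_fy2024) == 781
```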

Surprises

Revenue Beat

+14%

$219 million

The growth momentum from Q1 continued into the second quarter with revenue of $219 million, growing 14% year-over-year.

AI/ML Revenue Growth

100%+ year-over-year

We saw excellent strength in our AI/ML business with revenue growing north of 100% year-over-year.

Incremental ARR

$32 million

We achieved incremental ARR in the second quarter of $32 million, our highest incremental ARR since Q4 of 2022 and the highest organic incremental ARR in over 3 years.

Adjusted Free Cash Flow Margin Increase

26% of revenue

Adjusted free cash flow of $57 million, which is 26% of revenue, up significantly from Q1.

Non-GAAP Diluted EPS Increase

+23%

$0.59

Non-GAAP diluted net income per share was $0.59, a 23% increase year-over-year.

GAAP Diluted EPS Increase

+95%

$0.39

GAAP diluted net income per share was $0.39, a 95% increase year-over-year.

Impact Quotes

We provide a mature, complete general-purpose cloud and on the other stack, a modern agentic AI cloud. These integrated stacks enable AI native customers to run inferencing at scale while taking advantage of the core cloud modules.

We are confident in our ability to maintain attractive free cash flow margins while we accelerate our top line growth.

The growth continued to come with healthy profitability, including adjusted free cash flow of $57 million, which is 26% of revenue.

Our incremental ARR of $32 million was the highest incremental ARR since Q4 of 2022 and the highest organic incremental ARR achieved in over 3 years.

The Gradient AI Platform is a one-of-a-kind platform that caters to the end-to-end agent development life cycle, enabling AI native, SaaS and any software application customer to build, test, deploy, monitor and operate agentic AI software.

We remain committed to fully addressing the 2026 convertible debt prior to the end of this calendar year.

We are seeing increasing momentum with AI native companies with larger scale inferencing workloads and expanding partnerships with key ecosystem players in the AI domain.

We have enough levers going that we're very confident in our ability to improve the incremental ARR metric and sustain growth.

Notable Topics Discussed

  • DigitalOcean achieved its highest organic incremental ARR of $32 million since Q4 2022, marking the strongest growth in over three years.
  • This growth was driven by product innovation, new customer acquisition, and expansion in AI/ML workloads, with no unusual capacity additions or seasonal effects.
  • DigitalOcean launched the Gradient AI Platform to general availability, enabling customers to develop, test, deploy, and monitor AI agents with built-in safety and guardrails.
  • The platform supports over 14,000 AI agents created since launch, nearly doubling last quarter, with more than 6,000 customers since January, including new AI-native companies.
  • The new Atlanta data center, the largest and newest, is purpose-built for high-density GPU infrastructure optimized for AI inferencing.
  • It offers a complete AI stack, including compute, storage, and advanced networking features like BYOIP and NAT gateways, facilitating large-scale AI workloads and migration support.
  • DigitalOcean expanded its GPU lineup with AMD Instinct MI325X GPUs, complementing NVIDIA offerings, to deliver high performance at lower TCO for AI inferencing.
  • Partnerships include powering AMD Developer Cloud, enabling developers to test AMD GPUs in a managed environment, democratizing access to AI hardware.
  • AI/ML revenue grew over 100% year-over-year, driven mainly by infrastructure sales, with increasing adoption of inference-optimized GPU droplets.
  • Customers like Featherless.ai and ScribeAI are leveraging GPU droplets for inference and training, indicating strong demand for AI workloads on DigitalOcean’s platform.
  • DigitalOcean emphasizes its unique twin-stack approach: a general-purpose cloud and an AI agentic cloud, enabling AI-native companies to run inference at scale and integrate AI into applications seamlessly.
  • Management highlighted this as a key differentiator in attracting marquee AI customers and expanding workloads.
  • The company is securing large, multiyear contracts, including a multiyear $20 million+ deal, and facilitating 76 workload migrations in Q2, including a notable cybersecurity provider, Xcitium.
  • These initiatives are part of a new go-to-market motion targeting digital native enterprise customers.
  • DigitalOcean maintained healthy profitability metrics, with adjusted free cash flow of $57 million (26% of revenue) in Q2, and raised full-year free cash flow guidance to 17-19%.
  • The company is balancing AI infrastructure investments with margin preservation, expecting modest headwinds from AI growth in gross margins next year.
  • Management emphasized a conservative stance on large deal revenue recognition, noting that AI workloads are in early stages and will take time to impact NDR.
  • Customer lifecycle on AI workloads is different, with many customers in scaling phases, and revenue from these deals is expected to be lumpy and spiky initially.
  • DigitalOcean’s growth is driven by improved customer acquisition, especially in the first 12 months, and a focus on expanding high-spend customers and AI workloads.
  • Management highlighted the importance of cohort analysis and the potential for continued acceleration in core cloud and AI revenue streams.

Key Insights:

  • Adjusted EBITDA margin guidance for Q3 and full year 2025 was raised to 39%-40%.
  • DigitalOcean raised its full year 2025 revenue guidance to $888 million to $892 million, representing approximately 14% year-over-year growth at the midpoint.
  • Management remains confident in sustaining and building momentum in the second half of 2025, supported by strong customer usage visibility, migration pipeline, and AI traction.
  • Non-GAAP diluted EPS guidance for Q3 is $0.45 to $0.50 and for full year 2025 is $2.05 to $2.10.
  • Q3 2025 revenue is expected to be $226 million to $227 million, about 14.1% year-over-year growth at midpoint.
  • The company expects gross margins to remain consistent through the balance of 2025, with a modest headwind possible in 2026 due to AI business growth.
  • DigitalOcean launched its Atlanta data center, the largest and newest, purpose-built for high-density GPU infrastructure optimized for AI inferencing.
  • Expanded Gradient AI Agentic Cloud with 8 GPU Droplet types including NVIDIA H, L, RTX series and AMD Instinct series GPUs, plus an inference-optimized GPU Droplet.
  • Facilitated 76 migrations from hyperscalers and other clouds during the quarter, supported by a dedicated migrations team.
  • Introduced advanced networking features in public preview: Bring Your Own IP (BYOIP) and Network Address Translation (NAT) gateways to support larger enterprise workloads.
  • Introduced Cloudways Copilot, the first commercial AI agent for real-time server monitoring and automated issue resolution.
  • Launched the Gradient AI Platform to general availability, enabling easy development and deployment of AI agents with built-in safety and scalability features.
  • Released over 60 new products and features targeting higher spend customers, with 64 of top 100 customers adopting a product or feature released in the last year.
  • Strengthened partnerships with AMD, powering the AMD Developer Cloud and providing customers access to AMD Instinct GPUs.
  • CEO Paddy Srinivasan emphasized the twin stack cloud strategy combining a mature general-purpose cloud with a modern agentic AI cloud as a key differentiator.
  • CFO Matt Steinfort noted the balance between AI and core cloud growth contributing to the highest incremental ARR in company history.
  • Management highlighted the secular and durable momentum in new customer acquisition driven by product-led growth, migration motion, and AI inferencing customers.
  • Management is confident in maintaining attractive free cash flow margins while accelerating revenue growth, with flexibility to increase growth investments if opportunities arise.
  • Management remains conservative in forecasting large deals due to their newness and lumpy nature but is encouraged by early successes and pipeline.
  • The company is focused on capital allocation priorities: driving organic growth, addressing 2026 convertible debt, and managing share repurchases to offset dilution.
  • AI customers tend to be new and do not yet contribute to NDR; inferencing workloads are expected to be incorporated into NDR metrics in the future.
  • AI/ML revenue grew over 100% year-over-year, driven mainly by the Gradient AI infrastructure stack, with growing adoption of the AI platform and agents.
  • Capital allocation prioritizes organic growth and debt repayment over share repurchases, which have been reduced recently.
  • Core cloud business continues to accelerate with low double-digit growth, driven by new customer acquisition and migration motions.
  • Gross margins in AI business are encouraging, with higher layers of the AI stack commanding better margins than pure infrastructure.
  • Incremental ARR of $32 million was balanced across AI and core cloud, with AI ARR growth previously exceeding 160% in prior quarters but now facing tougher comps.
  • Large deals are a new sales muscle; management expects lumpiness and conservatism in forecasting but sees promising pipeline and early wins.
  • Net dollar retention (NDR) was 99%, slightly up from 97% last year, with some customers cautious and others accelerating, reflecting a mixed but stable environment.
  • RPO growth is driven by both core cloud and AI, with average contract durations around 1 to 2 years.
  • DigitalOcean continues to evaluate its valuation allowance on deferred tax assets, with a potential release of $109 million in the latter half of 2025, which would positively impact net income as a noncash event.
  • DigitalOcean is investing in infrastructure and model optimization to scale inferencing workloads efficiently.
  • Pricing dynamics for GPUs differ between training and inferencing workloads, with customers prioritizing price performance over raw throughput.
  • Share repurchases totaled $20 million in Q2 2025, with cumulative repurchases since IPO at $1.6 billion.
  • The company has multiple financing options available to address the 2026 convertible debt, including convertible debt, bank debt, and bonds.
  • The company is focused on maintaining a strong balance sheet with $388 million in cash at quarter end.
  • Customers like Quickest and Mint Media are leveraging DigitalOcean's AI platform and agents to enhance their AI-powered applications and operational efficiency.
  • Management expects the AI business to become an increasingly meaningful portion of revenue in 2026, complementing the core cloud business.
  • Migration motion is a relatively new go-to-market strategy that is beginning to bring in digital native enterprise customers from other cloud providers.
  • The company is focused on delivering a unified cloud stack that supports both general-purpose and AI-native workloads, providing a competitive edge.
  • The company is seeing increasing momentum with AI native companies with large-scale inferencing workloads and expanding partnerships in the AI ecosystem.
  • The Gradient AI Platform has seen over 14,000 agents created since launch, with more than 6,000 customers leveraging it since January, 30% of whom are new to DigitalOcean.
Complete Transcript:
DOCN:2025 - Q2
Operator:
Ladies and gentlemen, thank you for standing by. My name is Krista, and I will be your conference operator today. At this time, I would like to welcome everyone to DigitalOcean's Second Quarter 2025 Earnings Conference Call. [Operator Instructions] And I would now like to turn the conference over to Melanie Strate, Head of Investor Relations. Melanie, you may begin.
Melanie Strate:
Thank you, and good morning. Thank you all for joining us today to review DigitalOcean's Second Quarter 2025 Financial Results. Joining me on the call today are Paddy Srinivasan, our Chief Executive Officer; and Matt Steinfort, our Chief Financial Officer. Before we begin, let me remind you that certain statements made on the call today may be considered forward-looking statements, which reflect management's best judgment based on currently available information. Our actual results may differ materially from those projected in these forward-looking statements, including our financial outlook. I direct your attention to the risk factors contained in our filings with the SEC as well as those referenced in today's press release that is posted on our website. DigitalOcean expressly disclaims any obligation or undertaking to release publicly any updates or revisions to any forward-looking statements made today. Additionally, non-GAAP financial measures will be discussed on this conference call and reconciliations to the most directly comparable GAAP financial measures can be found in today's earnings press release as well as in our investor presentation that outlines the financial discussion on today's call. A webcast of today's call is also available in the IR section of our website. And with that, I will turn the call over to Paddy.
Padmanabhan T. Srinivasan:
Thank you, Melanie. Good morning, everyone, and thank you for joining us today as we review our second quarter 2025 results. We continue to make meaningful progress on the strategy we laid out at our Investor Day back in April. This is evidenced by our strong second quarter results and supported by the fact that we are raising our full year guidance on both revenue and profitability metrics. My comments today will include a recap of our Q2 financial results and an update on both our progress in product innovation and our enhanced go-to-market strategy across both core cloud and AI, which are enabling over 174,000 digital native enterprise customers to scale on our platform. Let me start with the second quarter financial results highlighted on Slide 10 of our earnings deck. The growth momentum from Q1 continued into the second quarter with revenue of $219 million, growing 14% year-over-year. We saw excellent strength in our AI/ML business with revenue growing north of 100% year-over-year. Revenue from our Scalers+ customers or customers who were at $100,000 plus annual run rate during the quarter continued to see strong growth during the quarter at 35% year-over-year and increased to 24% of total revenue. Finally, we achieved incremental ARR in the second quarter of $32 million, our highest incremental ARR since Q4 of 2022 and the highest organic incremental ARR in over 3 years. Given our strong top line performance in the first half of the year and our confidence in the second half outlook, we are raising our full year revenue guidance range to $888 million to $892 million. We are also excited about the traction we are getting with larger customers and increase in committed contracts. I spoke last quarter about a multiyear $20 million plus committed deal, and this was a contributor to the material growth in our remaining performance obligation balance as we continue to seek and secure large multiyear deals with our higher spend customers and key strategic partners. 
Not only did our momentum carry over to the second quarter, but the growth continued to come with healthy profitability, including adjusted free cash flow of $57 million, which is 26% of revenue. As a result of this performance, we are raising our full year free cash flow guide to 17% to 19% of revenue, demonstrating our ability to accelerate revenue while maintaining attractive free cash flow margins. Turning to the balance sheet. We continue to make progress on our capital allocation priorities and remain on track to address the outstanding 2026 convertible debt prior to the end of this calendar year. Matt will go into further details on this front in his prepared remarks. Now let me give you some updates on the product innovation that we continue to deliver for our digital native enterprise customers, which you can see highlighted on Slides 11 and 12 in the earnings presentation. During the quarter, we released more than 60 new products and features addressing the needs of our higher spend customers, which includes builders, scalers and Scalers+ customers who now drive 89% of our revenue. Notably, 64 of our top 100 customers have adopted a product or a feature released within the last year and 26 of the top 100 customers have adopted a new capability released within the last quarter, both clear proof points of the impact product innovation is having on our digital native enterprise customers. Let me now provide a few product highlights from the quarter starting with core cloud. This past quarter, we officially announced our Atlanta data center and its resources are now available to all customers. As a reminder, this is our newest and largest data center and it is purpose-built to deliver high-density GPU infrastructure optimized for AI inferencing, which requires a lot more than just GPUs.
This data center has our core cloud stack, including compute, storage and other cloud features that are critical to enabling AI native customers to run full stack applications powered by AI and not just the training or inference part of their software. This agentic cloud data center infrastructure is a key differentiating factor for us over other neo clouds as it provides a complete stack for running sophisticated AI applications that have comprehensive needs beyond GPUs. More on that a little later. During the quarter, we continued to build capabilities for larger digital native enterprises. These customers typically require high-quality storage, especially for AI workloads. To support that requirement, we enabled NFS or network file systems for GPUs so that customers can run the most demanding GPU applications with access to higher performance object storage to meet the demands of enterprise workloads such as video streaming and data lakes. We also introduced 2 advanced networking features in public preview, Bring Your Own IP address or BYOIP and Network Address Translation gateways or NAT gateways. These are critical capabilities that will enable more and larger digital native enterprise workloads to migrate to this solution. BYOIP allows customers to use their existing publicly routable IP addresses on DO rather than having to acquire new DO-specific IP addresses. This makes it easy for customers to lift and shift their workloads to our platform without requiring extensive changes to their applications, while NAT gateway allows the customers' resources to securely access the Internet from within their virtual private cloud on the DO platform. These innovations on the core cloud platform are enabling us to scale and win more workloads from our digital native enterprise customer base.
To leverage that traction, we are complementing our industry-leading product-led growth motion with a small dedicated migrations team to support customers moving existing workloads from hyperscalers and other clouds to DigitalOcean's platform, and we facilitated 76 of these migrations during the quarter. One example of this is a company called Xcitium, a next-generation cybersecurity provider delivering innovative, no-cost incident response as part of its fully managed security operations center or SOC offering. Designed for businesses and managed service providers, or MSPs, Xcitium's managed SOC provides real-time threat detection, threat hunting and incident response, all without the high cost typically associated with legacy solutions. Xcitium signed an 18-month contract with DigitalOcean selecting the platform to migrate from other cloud providers due to our compelling total cost of ownership, performance and ease of use, enabling Xcitium to deliver its cutting-edge cybersecurity solutions more efficiently and at scale. Servd.host, a Scalers+ customer that offers managed hosting specifically tailored for the craft content management system has already adopted our newly released network address translation gateway, enabling their customers to securely access the Internet within their DigitalOcean Virtual Private Cloud. We're also very excited about the progress we are making on our AI/ML platform, which we now call the DigitalOcean Gradient AI Agentic Cloud, which complements our full stack general-purpose cloud. Slide 8 in the earnings presentation shows the power of having these 2 platforms side-by-side, enabling our customers to take full advantage of the integrated stack that is required to build and run AI-powered applications in the future. The Gradient AI Agentic Cloud has 3 components: Gradient AI Infrastructure, Gradient AI Platform and Gradient AI Agents. 
Let me start with the Gradient AI Infrastructure, where we expanded our GPU Droplets lineup significantly to now include 8 major types, including the H, L and RTX Series GPUs from NVIDIA and the latest Instinct series GPUs from AMD. Another major update that makes Gradient AI Infrastructure great for inferencing is a new inference optimized GPU Droplet, which simplifies the setup and deployment of LLMs by leveraging docker and this new GPU Droplet comes preconfigured with vLLM and includes built-in optimizations like multi-GPU parallelism, smart batching, faster and higher token generation built in support for Hugging Face model downloads, speculative decoding, prompt caching and multi-model concurrency so that customers can go from deployment to serving tokens in minutes on any GPU Droplet without having to do all these steps manually. We recently announced a collaboration with AMD that provides DO customers with access to AMD Instinct MI325X GPU Droplet in addition to MI300X Droplet. These GPUs deliver high-level performance at lower TCO and are ideal for large-scale AI inferencing workloads. Another example of this growing collaboration between the 2 companies is the Gradient AI Infrastructure powering the recently announced AMD Developer Cloud, which enables developers and open source contributors to test drive AMD Instinct GPUs instantly in a fully managed environment managed by our Gradient AI Infrastructure. This enables developers to start AI development with 0 hardware investment and accelerate the time to value in tasks like benchmarking and inference scaling. This further advances our mission of democratizing access to AI while maintaining the quality, performance and flexibility our customers have come to expect from DO. Let's look at how customers are taking advantage of our Gradient AI Infrastructure. 
Featherless.ai is a serverless AI inference platform, offering API access to an expansive and growing catalog of open weight models, primarily Hugging Face models like Llama, Mistral, Qwen, DeepSeek, RWKV and more. Featherless.ai leverages DigitalOcean for its simplicity and price performance, and they were an early adopter of our AMD MI300X GPU Droplets, which offer industry-leading price performance and ease of use for inference workloads. Another GPU Droplet customer is ScribeAI, a digital native enterprise specializing in AI-generated documentation, which is used by 94% of the Fortune 500 companies. ScribeAI migrated their AI/ML training workloads to DigitalOcean from competitive cloud providers and is now leveraging DO's GPU Droplets to build and train their process documentation and knowledge sharing platform. Moving on to the next layer of our Gradient AI Agentic Cloud. We recently announced the general availability of DigitalOcean's Gradient AI Platform, which provides the industry's easiest and most cost-effective platform for developing production-grade AI agents with automated safety and security guardrails. The Gradient AI Platform, as shown on the right side of Slide 8 of the earnings deck, is a one-of-a-kind platform that caters to the end-to-end agent development life cycle or ADLC for short, enabling AI native, SaaS and any software application customer to build, test, deploy, monitor and operate agentic AI software. Customers can use a rich set of proprietary and open source foundation models, including OpenAI, Anthropic, Mistral, DeepSeek and Llama as high-performance serverless endpoints. These serverless endpoints automatically scale to meet real-time application demands, thus freeing customers from having to manage compute resources on their own.
The Gradient AI Platform provides built-in guardrails that verify AI behavior, a new best-in-class agent evaluation framework to drive high accuracy and relevance of AI results and a robust experimentation capability to deliver optimal AI performance. Over 14,000 agents have been created since announcing this platform, which is almost double the number of agents last quarter. More than 6,000 customers have leveraged this platform since January with 30% of these customers being new to DigitalOcean. One of the customers leveraging our new Gradient AI Platform is Quickest with a Q, a leading AI-powered collaborative workspace product that helps product marketing and sales teams generate strategy documents, campaigns and playbooks using shared AI personas. Quickest leverages the Gradient AI Platform to create persona-generating agents, enabling model comparisons and orchestrating tasks on the Gradient AI Platform to fetch and summarize the markdown content. Quickest chose DigitalOcean because they needed a flexible and scalable infrastructure to support complex AI workflows, and they value the simplicity of deploying agents and integrating them to the Quickest product line with very little coding involved. Moving on to the Gradient AI Agents layer. Our first commercial AI agent is the Cloudways Copilot, which continuously monitors critical server components like the web stack, disk space, inode usage and host health to detect issues in real-time, diagnose root causes and deliver actionable recommendations faster than traditional alerting systems. An example of a customer leveraging this product is Mint Media, a full-service media and marketing company specializing in video production and digital marketing. Mint Media uses our Cloudways Copilot Gen AI Agents to automatically detect and remediate web hosting issues.
Mint Media manages over 180 websites and saw significant time savings by leveraging Cloudways Copilot and the associated AI-powered insights and automated issue resolution. What previously required hours of manual debugging is now handled in minutes through the Agents' detailed actionable recommendations. In addition to the product innovations we delivered, we also made material progress on the go-to-market front during this quarter. From a new customer acquisition perspective, we saw meaningful progress in the top of the funnel from our product-led growth enhancements with revenue from core cloud customers in their first 12 months significantly outpacing growth of prior years, which is a great leading indicator of future growth potential. Our direct sales motion and the strong ecosystem partnerships are driving more AI native customers with large-scale inferencing requirements than we have ever seen in the past. Our growing success with these marquee customers is evident in the increased RPO that I mentioned earlier in my comments, and we anticipate this trend to continue as we scale out our AI capabilities. In closing, I'm pleased both by the results of the second quarter and by the progress we are making on the strategy that we articulated at our Investor Day back in April. We maintained our top line growth momentum from Q1 to Q2, while maintaining healthy profitability metrics, enabling us to raise our guidance across both revenue and profitability metrics for the fiscal year 2025. We delivered continued product innovation and both drove improved performance in our industry-leading product-led growth engine and continue to get traction with our direct sales go-to-market motion, especially for AI. We recently launched the Gradient AI Platform into full general availability, a significant step in our offering to our customers, a twin stack of cloud capabilities as outlined on Slide 8 of the earnings slide deck. 
In a single unified stack, we provide a mature, complete general-purpose cloud and on the other stack, a modern agentic AI cloud. These integrated stacks enable AI native customers to run inferencing at scale while taking advantage of the core cloud modules and digital native customers to build AI directly into their software applications without having to do the heavy lifting of dealing with AI infrastructure. With these unique twin cloud and AI stacks, we are getting increasing momentum with AI native companies with larger scale inferencing workloads, and we are expanding our partnerships with key ecosystem players in the AI domain. We are also making good progress on our balance sheet and refinancing priorities, positioning us for a strong 2026. Thank you, and I'll now turn it over to Matt.
W. Matthew Steinfort:
Thanks, Paddy. Good morning, everyone, and thanks for joining us today. As Paddy discussed, we are very pleased with our Q2 2025 performance, and we are confident in our ability to sustain and build on this momentum in the latter half of the year. In my comments, I'll walk through our Q2 results in detail, provide an update on our balance sheet and capital allocation strategy and share our third quarter and full year 2025 financial outlook. Starting with the top line. Revenue in the second quarter was $219 million, up 14% year-over-year. Our annual run rate revenue, or ARR, was $875 million, which was $32 million above Q1. This incremental ARR of $32 million was the highest incremental ARR since Q4 of 2022 and the highest organic incremental ARR achieved in over 3 years. We continue to build and strengthen our relationships with our higher spend customers and key strategic partners. This is evidenced by the material increase in our remaining performance obligation balance as we continue to secure large multiyear deals with our digital native enterprise customers, which is an early but promising new go-to-market motion for the company. Our product innovation and go-to-market enhancements are resonating with this target customer base. In Q2, revenue from our Scalers+ customers or customers whose annualized run rate revenue in the quarter was greater than $100,000 and who represent 24% of overall revenue, grew 35% year-over-year with a 23% increase in customer count. This is clear evidence of the increasing traction that we are getting with our largest customers as they expand their use of our core cloud products and adopt our new AI offering. Q2 revenue growth was primarily driven by improvements in customer acquisition across both core cloud and AI as well as strong customer adoption of our AI/ML products.
As Paddy mentioned, revenue from core cloud customers in their first 12 months significantly outpaced growth in prior years, which is a great leading indicator of future growth, as these stronger recent cohorts not only drive up revenue from customer acquisition but should also positively contribute to net dollar retention when they reach their 13th month and become part of our NDR cohort. Our Q2 net dollar retention was 99%, up from 97% in the same quarter last year and within the expected range that we communicated on the prior quarter's call. We also delivered strong AI/ML revenue growth in Q2 as we continue to see a robust demand environment, particularly for inference workloads, with AI revenue growing north of 100% year-over-year. Turning to the P&L. We delivered strong performance on all of our key profitability metrics. Gross margin for the second quarter was 60%, which was 100 basis points higher than the prior year. Adjusted EBITDA was $89 million, an increase of 10% year-over-year. Adjusted EBITDA margin was 41% in the second quarter, approximately 100 basis points lower than the prior year. Non-GAAP diluted net income per share was $0.59, a 23% increase year-over-year. This increase is a direct result of expanding per share profitability by driving durable revenue growth while exercising ongoing cost discipline. GAAP diluted net income per share was $0.39, a 95% increase year-over-year as we continue to grow revenue, drive operating leverage and prudently manage stock-based compensation. Q2 adjusted free cash flow was $57 million, or 26% of revenue, up significantly from our front-loaded Q1, which included a large portion of the upfront investment required to bring the Atlanta data center online.
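As a rough illustration of the 13th-month mechanic Matt describes, here is a minimal Python sketch of a cohort-based net dollar retention calculation. The 13-month entry rule follows the remarks above; the function shape, field names and sample figures are illustrative assumptions, not DigitalOcean's disclosed methodology.

```python
# Hedged sketch of a net-dollar-retention (NDR) cohort calculation.
# Customers join the NDR cohort only once they reach their 13th month,
# per the remarks above; everything else here is illustrative.

def net_dollar_retention(customers, current_month):
    """customers: list of dicts with 'start_month' and a per-month 'revenue' map."""
    base = curr = 0.0
    for c in customers:
        tenure = current_month - c["start_month"]
        if tenure < 12:          # too new: not yet in the NDR cohort
            continue
        base += c["revenue"].get(current_month - 12, 0.0)
        curr += c["revenue"].get(current_month, 0.0)
    return curr / base if base else None

# Two illustrative tenured customers (one expanding, one contracting)
# plus one new customer whose revenue is excluded from the metric.
cohort = [
    {"start_month": 0, "revenue": {12: 100.0, 24: 110.0}},
    {"start_month": 0, "revenue": {12: 100.0, 24: 88.0}},
    {"start_month": 20, "revenue": {24: 500.0}},  # excluded: under 13 months
]
print(net_dollar_retention(cohort, 24))  # 198/200 = 0.99, i.e. 99% NDR
```

This is why new customer strength shows up in revenue growth immediately but only reaches NDR a year later: the excluded third customer above contributes nothing to the ratio despite being the largest spender.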
As I'll detail later in my comments, we remain confident in our ability to deliver attractive adjusted free cash flow margins for the full year, although the timing of capital investment payments will continue to create quarter-to-quarter variations in adjusted free cash flow margins, hence our highlighting of the trailing 12-month adjusted free cash flow margins on Slide 15. Our balance sheet continues to be strong as we continue to maintain material cash and cash equivalents, and we ended the quarter with $388 million in cash. We also continued to execute our share repurchase program in the quarter with $20 million of repurchases in Q2, buying back approximately 691,000 shares. This brings our cumulative share repurchases since IPO to $1.6 billion and 34.8 million shares through June 30, 2025. At the end of Q2, we had $3.4 million remaining on our current share repurchase authorization. On the debt front, we continue to actively evaluate the market and our financing alternatives and remain committed to fully addressing the 2026 convert over the balance of this calendar year. We have multiple attractive financing options available to us, including convertible debt, bank debt and bonds, and we plan to tap into these markets as needed to optimize our long-term cost of capital. Before we move on to guidance, I'll highlight one noncash item related to both the balance sheet and the P&L. We continue to evaluate the necessity of our valuation allowance on certain existing deferred tax assets each quarter in accordance with U.S. GAAP. While the valuation allowance is still necessary for Q2, in the latter half of fiscal 2025, we may release all or a portion of our valuation allowance of $109 million, which was discussed in our most recent 10-K as well as in our most recent 10-Q. When released, we estimate this would have the financial impact of decreasing our noncash tax expense by the amount of the release, resulting in a corresponding increase in net income.
When this occurs, it will be a positive noncash event and will have no impact on non-GAAP financial metrics. Moving on to guidance. For the third quarter of 2025, we expect revenue to be in the range of $226 million to $227 million, representing approximately 14.1% year-over-year growth at the midpoint. For the full year 2025, we are raising our annual revenue guidance to the range of $888 million to $892 million, representing approximately 14% year-over-year growth at the midpoint. Given our strong Q2 performance, visibility into our customers' usage trends and the strength of the AI/ML demand environment, we are able to raise our full year guide with confidence. For the third quarter of 2025, we expect our adjusted EBITDA margins to be in the range of 39% to 40%. For the full year, we raised our adjusted EBITDA margin guide to the range of 39% to 40%. For the third quarter of 2025, we expect non-GAAP diluted earnings per share to be $0.45 to $0.50 based on approximately 102 million to 103 million weighted average fully diluted shares outstanding. For the full year 2025, we expect non-GAAP diluted earnings per share to be $2.05 to $2.10 based on approximately 103 million to 104 million weighted average fully diluted shares outstanding. Turning to adjusted free cash flow. We raised our guided adjusted free cash flow margins for the full year to 17% to 19%. Increasing our projected cash flow margins at the same time as we are accelerating our revenue growth outlook speaks to the confidence we have in our ability to maintain attractive free cash flow margins while we accelerate our top line growth. Consistent with our historical guidance practice, we are not providing adjusted free cash flow guidance on a quarter-by-quarter basis, given it is heavily influenced by working capital timing, as you saw in our year-to-date results. That concludes our prepared remarks, and we'll now open the call to Q&A.
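For readers reconciling the guidance figures, a quick sketch of the midpoint arithmetic. The guided ranges are from the remarks above; the prior-year revenue is backed out from the stated ~14% growth rate, so treat it as an implied approximation rather than a quoted figure.

```python
# Guidance midpoint arithmetic, using the ranges given in the remarks.

def midpoint(low: float, high: float) -> float:
    return (low + high) / 2

fy25_mid = midpoint(888.0, 892.0)   # full-year revenue midpoint, $M
q3_mid = midpoint(226.0, 227.0)     # Q3 revenue midpoint, $M
print(fy25_mid, q3_mid)             # 890.0 226.5

# ~14% full-year growth at the midpoint implies prior-year revenue near:
print(round(fy25_mid / 1.14, 1))    # ~780.7 ($M, implied FY2024 base)
```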
Operator:
[Operator Instructions] Your first question comes from Patrick Walravens with Citizens.
Patrick D. Walravens:
Congratulations, Paddy. Could you talk a little bit more about the AI/ML revenue and the over 100% increase there? And maybe walk us through a little bit the history of this offering and why the current version is really starting to kick in?
Padmanabhan T. Srinivasan:
Yes. Thank you, Patrick. Good way to get started. So the AI/ML revenue, as I mentioned in the call, grew more than 100% year-over-year. If you remember, last Q2 is when we brought a lot of NVIDIA H100 gear online, so more than doubling that this quarter was a significant step for us. And what is different is, as I explained, we have a 3-layer AI stack. On the foundational level is our Gradient AI infrastructure stack, which is a network of GPUs from both AMD and NVIDIA. In the middle layer is our Gradient AI platform, which we just took from private and public preview all the way to general availability. And on the topmost layer is agents. The types of customers that use these 3 layers are slightly different at this point. AI infrastructure is typically consumed by AI-native companies that have their own model, or have taken an open source model and made some tweaks to it, and are hosting those models and scaling them, especially in inferencing mode. And a majority of our revenue comes from the Gradient AI infrastructure stack, which is not very dissimilar from the rest of the industry. The Gradient AI platform that we recently pushed out to GA is where any software application, like a SaaS provider, for example, can start building AI into their own applications without having to do the heavy lifting of building and managing their own GPU infrastructure. So we have serverless endpoints for these LLMs, for example, and we have a bunch of other tools and modules that are critical building blocks for consuming AI in your own application. So it becomes very, very easy to build AI into your existing applications. And that's what is powering the growth of our AI revenue, which is predominantly on the infrastructure side, but we are driving a lot of adoption and mind share with developers with the AI platform. 
And on the Agent layer, the first commercial application of that is the Cloudways Copilot that's typically adopted by end customers as a way to automate some of the manual tasks that we are seeing in managing and operating cloud-based applications.
Operator:
Your next question comes from the line of Mike Cikos with Needham & Company.
Michael Joseph Cikos:
Just to further the conversation on the AI/ML. Good to see the north of 100% revenue growth, reflecting some of the more recent trends you guys have seen on the ARR front. But I just wanted to see -- I know historically, you guys have given us more color on the underlying components for that net new ARR. I think last quarter, you guys had cited north of 160% year-on-year. Maybe I missed the data point, but just wanted to see how that net new is growing on the AI/ML front in the June quarter.
W. Matthew Steinfort:
Mike, it's Matt. I think what we said is that our AI ARR was growing north of 160% in prior quarters. That wasn't referring to the incremental ARR; it was the actual ARR. And the north of 100% still reflects very strong growth. In fact, if you look at the incremental ARR for this quarter at $32 million, it was a good balance across both AI and core cloud, and it was our highest incremental ARR since Q4 of 2022. And the reason that it dropped, which is where you were going with the question, from 160% to north of 100%, is just, as Paddy said, that we lapped the Q2 when we launched all of our AI capabilities and we had a bunch of pent-up demand. So the Q2 growth in the AI business from last year, in particular, was high. So it's just a difficult comp. But if you look at the incremental ARR that we're adding in that business on a go-forward basis, we're accelerating. It's an accelerating business.
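A small sketch may help keep the two metrics in this exchange apart: AI ARR *growth* is a year-over-year percentage, while *incremental* ARR is the sequential dollar change. The company-level $875 million / $32 million pair comes from the prepared remarks; the AI-specific dollar figures below are purely illustrative.

```python
# Two different ARR metrics discussed in this exchange.

def yoy_growth(current: float, year_ago: float) -> float:
    """Year-over-year growth rate, e.g. 1.0 means revenue doubled."""
    return current / year_ago - 1.0

def incremental(current: float, prior_quarter: float) -> float:
    """Quarter-over-quarter dollar change in annual run rate revenue."""
    return current - prior_quarter

company_arr_q2 = 875.0                       # $M, reported
company_arr_q1 = company_arr_q2 - 32.0       # implied by $32M incremental
print(incremental(company_arr_q2, company_arr_q1))  # 32.0 -> $32M added

# AI ARR "growing north of 100%" means yoy_growth(...) > 1.0
# (the 50.0 / 24.0 figures here are illustrative, not disclosed):
print(yoy_growth(50.0, 24.0) > 1.0)  # True
```

Note how a falling growth *percentage* (160% to 100%+) is compatible with rising incremental dollars: the year-ago base keeps getting larger, which is the "difficult comp" Matt describes.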
Michael Joseph Cikos:
Got it. And for the NDR, I know that the 99% here is in keeping with that commentary you guys have provided last quarter. Can you just explain what actually acted against that? Because I would have thought there would have been at least some benefit from you guys lapping that Cloudways price increase in April.
W. Matthew Steinfort:
Yes. I think that when we look at the NDR, and this is the reason that we signaled it will likely bounce around the kind of current range into this quarter and probably for the next couple of quarters, is that we haven't seen a degradation in the market. We haven't really seen any change in the market since the April time frame. But as we look at some of our larger customers and the long tail, I'd say there's a mix impact on customers. It's very individual. So some customers, we see, are maybe on edge and they're optimizing or they're a little bit hesitant to expand their business. But in the same industry or the same size of customer, we also see a number of customers that are accelerating. They're doing really well, they're expanding their business with us and they're growing their workloads. And you see that in the growth of the Scalers+ customers at 35%. So we're seeing really strong growth in parts of our customer base, but we're also seeing others that are being cautious and aren't scaling as fast. And so we think that we're likely to stay kind of at this level. I'd say the good news is, despite the fact that the NDR was just a hair lower at 99%, we were able to raise our guidance. We're delivering the best incremental ARR that we've delivered in a very long time. And so we're very encouraged by the trends. I think that NDR is still such a lagging metric that it's going to be a little stubborn to improve, but that's not going to slow us down from a revenue growth standpoint. We're doing enough with new customer acquisition on the core cloud, which is doing really, really well, getting really good cohorts that are coming in. We've got the migration motion, which is a relatively new motion and doesn't always impact NDR, and then we've got the growth and acceleration in the AI business. 
So we're very bullish on the growth prospects, and that was what enabled us to raise the guidance for the year.
Operator:
Your next question comes from the line of Gabriela Borges.
Gabriela Borges:
I wanted to touch on the unit economics of the AI business. Matt, I know in the past, you've talked about the 3-year payback period, but you've both been very consistent in saying that as you move from bare metal GPUs to more differentiated services, exactly as you've illustrated in the graphic in the slides, you should be able to command more gross margin, essentially. So maybe give us an update on how those efforts are tracking. How do you feel about the gross margin and the LTV to CAC of the AI business relative to the core business?
W. Matthew Steinfort:
Yes. We are very encouraged and comfortable with the margins that we're getting in the AI business. As you said, Gabriela, the higher layers of the stack, the 3-layer stack that Paddy describes, have better margins than pure infrastructure. But even at the pure infrastructure level, we're very comfortable with the returns, particularly given the long-term value, the LTV you mentioned, that we believe we will generate from those customers. As Paddy has talked about multiple times, inferencing customers, which is what we're seeing more and more of even at the infrastructure layer, will pull other cloud services through. They need databases, they need storage, they need bandwidth, they need standard CPU compute. And so we're still investing ahead: the margins on pure infrastructure are lower than the margins at the higher layers of the stack, but you need that baseline infrastructure capability to offer the higher layer services. And so we think it's a very good investment and a very good use of our capital. And we're very encouraged by the returns that we're getting and the promise of higher returns as that business matures, we get more pull-through revenue and more of the revenue shifts to the higher layers of the AI stack.
Padmanabhan T. Srinivasan:
And just to add to what Matt just said, Gabriela, this is Paddy. That's why we are also forward investing in making our Gradient AI Agent Cloud very, very optimized for inferencing. So I talked about our inference-optimized Droplet. If you look at the right side of Slide 8, you will also see that we are investing in model optimization and in infrastructure optimization at the infrastructure level. Everything is aimed at scaling inferencing workloads on our platform, which tend to have very long tails. And as Matt mentioned, they also drag through some of the other cloud primitives. So they drag the left side along with them as the inferencing workloads scale globally. So we feel very good about where we are and some of the early success we are seeing with very marquee customers that are starting to scale up their inferencing footprint on us.
Gabriela Borges:
Yes. That makes sense. And Paddy and Matt, the follow-up I have here, just on these comments on highest incremental ARR, highest organic ARR in over 3 years in terms of the net new that you're adding. Can we think of this as the new high watermark? And looking at what's being implied in guidance, talk to us about your ability to consistently deliver growth of that metric and whether there's any unevenness whether because of seasonality or company-specific factors like the timing of new AI capacity coming online that we should be aware of as we think about the forward model.
Padmanabhan T. Srinivasan:
Yes. I can start, Matt, and you can fill in. So we did not have anything unnatural this last quarter. We didn't bring a bunch of capacity online, and there was no seasonality associated with it. I think, as we mentioned in our prepared remarks, we are honing our product-led growth motion for our core cloud customers, and that is starting to really produce results on one hand. Our migration motion is bringing in a new type of customer, typically digital native enterprise customers, and we are starting to grow them. And on the AI side, we're just starting to see some scaled-up inferencing customers. So it's a combination of all of those. It's not just one big contract or one spike in capacity of GPUs or anything like that. It's a very secular and durable type of momentum that we are seeing on the new customer acquisition side. Matt?
W. Matthew Steinfort:
I agree with all that, Paddy. I think that, again, the reminder on ARR: it's not based on a booking or a sale. It's based on actual customer revenue and customer utilization. And so we hope that that's a steady predictor going forward of the exit trajectory that we're on and a good indicator. It's certainly a critical metric for us. And as Paddy said, we're encouraged by our ability to increase that. Certainly, like any metric, it will vary quarter-to-quarter. I don't know that it will always be up and to the right, but we have enough motions going that we're very confident in our ability to improve that metric.
Operator:
Your next question comes from the line of Raimo Lenschow with Barclays.
Raimo Lenschow:
Perfect. Staying on that AI notion and inferencing, like what is -- if you think about like Paddy, you talked about like your -- how you try to differentiate there, et cetera. Where is the industry at the moment in terms of also capacity constraints? Like is that still a factor for you that it's helping? Or is it really now about all differentiation? And then I have one follow-up on that.
Padmanabhan T. Srinivasan:
Thank you, Raimo. Capacity constraints are a way of life in AI as we are scaling like everyone else. So we are trying to stay ahead of it a little bit, but there are just so many factors there in terms of the real estate footprint and the power and the cooling and the actual gear. So there are just a lot of variable factors here. But I think for us, it all boils down to why some of these marquee AI-native customers are starting to choose us over the other alternatives that they have. And it is really the twin stack cloud that we have laid out in Slide 8. So I don't think there are too many cloud providers that can claim to have both sides of that equation. And we certainly feel like we are driving home that point in terms of not only offering a world-class AI infrastructure, but increasingly, those same customers are also starting to leverage some of the guardrails and the agent evaluation framework and the agent observability and things like that, going up stack on the right side of the agentic cloud. And as Matt mentioned, they also have very sophisticated storage, data processing and CPU compute requirements, because at the end of the day, these are very sophisticated applications that require the might of a full-stack general-purpose cloud. So I think that is the differentiator that we are leaning on, and we feel really confident. I've been talking about this for about 4 quarters, and finally, we have the twin stacks that we have described on Slide 8 of the earnings deck. And we feel really good. We're just getting started, and some of the RPO and the large contracts that we have been talking about have not even started hitting their full stride as we are scaling those customers. So we feel really good about the forward momentum that we are building.
Raimo Lenschow:
And that kind of leads into my next question for Matt. If I think about the second half, I've gotten a good few questions already from people saying, well, actually, you're raising the full year by more than you actually beat in Q1 and Q2. So there's obviously a lot of confidence in the second half. Should we think about it as more RPO gives you more visibility, which drives some of that guidance? Because we know you as a conservative person normally.
W. Matthew Steinfort:
Well, I wish that it was all the RPO that was giving us full confidence. If you look at the RPO, while we're really encouraged by the increase, it's still a very, very small portion of our business. So that's certainly encouraging. But I'd say, when we look at the performance that we had in the first half, we look at the visibility that we have into the customer usage patterns. We look at the migrations that we're seeing in that motion. We look at the traction we're getting with AI, through some of the direct sales and partnerships and some of the conversations we articulated we're having with large AI native companies. We just have enough irons in the fire that we're confident in increasing the revenue guide. And what to me is most encouraging, because as you do know I am a relatively conservative guy, is that we're able to increase our free cash flow margin at the same time. That we can demonstrate we can accelerate revenue while maintaining attractive free cash flow margins is incredibly encouraging as we think about what's in front of us in the second half and how that sets us up for 2026.
Operator:
Your next question comes from the line of Jason Ader with William Blair.
Jason Noah Ader:
I just wanted to see if you could give us a little bit of a breakdown of the business right now when we think about the AI side versus the non-AI side. I know you've given the growth rates. Can you tell us just sort of ballpark? I'm in the neighborhood of like 5% to 10% of revenue now from AI. I don't know if there's any specificity you can give on that, but that would be really helpful.
W. Matthew Steinfort:
Jason, as you know, we don't break this out. And part of it is because we believe that a lot of the AI capabilities are going to be pulling through other capabilities, so the impact of the growth is beyond what's represented if you just wrote down the SKUs that we consider AI. But you're in the ballpark. I'd say it's increasingly becoming a material chunk of the business. It's still small because it's a business that we just launched a year ago, and we're accelerating, but that's a reasonable ballpark for percentage of revenue. And we expect that to increase, and it will become an increasingly meaningful portion of our business in 2026, but it will still be a small portion. The core cloud is still a very healthy and growing portion of our business, and the AI business is a great complement to that, accelerating our growth and also opening up entirely new channels and new customers that will drive that core cloud growth up as well.
Jason Noah Ader:
Okay. Great. And then just as a quick follow-up, is it fair to assume that the core cloud business was -- grew at a similar rate in Q2 versus Q1 in that kind of low double digits. Is that accurate?
W. Matthew Steinfort:
Yes. We still see momentum in the core cloud business. And while the NDR was a little bit lower in Q2 than it was in Q1, the revenue that we're getting from new customers is ahead of our plan and our expectations. We're doing a really good job there. And again, you've got to remember, NDR is a somewhat wonky, lagging metric, because the change in revenue from a year ago has as much impact as the change in revenue this year. So the core cloud business continues to accelerate. It's in that low double-digit growth rate and is improving.
Jason Noah Ader:
So most of the upside then was from new customers, it sounds like.
W. Matthew Steinfort:
Yes, correct. Because with NDR coming down a little bit, the new customer acquisition plus the growth in AI offset the slight headwind from the NDR. But again, if you look at the incremental ARR from an exit run rate standpoint, there was a very good balance between the core business and AI. And so you saw AI at its highest point, but there was still very good core cloud growth on an incremental ARR basis as well.
Operator:
Your next question comes from the line of Josh Baer with Morgan Stanley.
Joshua Phillip Baer:
Just wanted to confirm that in the net dollar retention rate, AI and ML revenue is not in that metric. Is that right?
W. Matthew Steinfort:
That's right, Josh. That is still the case and will likely be the case for a while. As we've talked about internally and at Investor Day, we said it will eventually contribute to the NDR, and we still believe it will. It will likely be for the more inferencing workloads, where they're steady production workloads, not projects where someone comes in, tests something for a month and then kind of scales it back. And so if you think about the time lag of someone being in NDR, a customer doesn't count, even in our core cloud, until their 13th month. And so if you're turning up inferencing workloads now with marquee customers, it will be a year before they would even hit NDR. So we'll incorporate at least the inferencing portion of AI at some point, but it's certainly not going to be in the next couple of quarters. So NDR continues to not include AI.
Joshua Phillip Baer:
Okay. Got it. Yes, I would think so, especially now as it's scaling, but also you have more than 12 months of history: you talked about 100% growth off of Q2 last year, where there was AI revenue, and it's all organic, so it's kind of a missing piece in that NDR percentage around expansion from existing customers. I did want to ask about the large deals: how should we be expecting the potential for large deals in the future? And then also for you, Matt, how are you thinking about it from a guidance perspective, assuming it would be a little bit lumpier or have longer sales cycles? It's just a new motion for you guys, so how do you incorporate the potential for large deals in guidance?
W. Matthew Steinfort:
Do you want to start and just talk about the nature of the large deals, and I can talk -- can answer Josh's question about the guidance.
Padmanabhan T. Srinivasan:
Yes. So large deals are a very new muscle for us, from a sales, business development and forecasting perspective, all of the above. I think what we are driven by is: can we make these customers successful, and do we have enough of a technology edge to attract, retain and get these customers to scale. And that's the #1 thing that I'm focused on, that Bratin and Larry are focused on: making sure that we have the ability to articulate our technology differentiation in a durable fashion and have the right engineering expertise on the ground to make these customers successful. So I feel fairly encouraged by a couple of early successes that we have had, and we see enough in the pipeline to be quite encouraged with these kinds of deals. Now with inferencing, it just takes time to go from winning a customer deal to actually scaling that up with real-world traffic. So we are in the process of doing that with some of our customers. And extrapolating that into the future, we'll see how we can do a more predictable job in terms of forecasting how these things fall, but I expect this to be lumpy and spiky in the beginning before it starts normalizing, because our customers are also new to this. They get sudden spikes based on new updates to their models or new updates to their software. Some of them are in the consumer AI space; some of them are in the B2B AI space. So we are learning along with them, and they're learning with us, in terms of their business model and how it is scaling out. I'll let Matt answer how we will start reflecting these things in our financials.
W. Matthew Steinfort:
With that context, Josh, as you would expect based on our track record and our history, we'll be conservative in forecasting those. The good news is, as Paddy said, we book revenue when we get that revenue; it's not like we're signing massive deals that just turn on right away. So we have visibility into the ramps and how those customers are going. But given it's such a new motion, and given the newness of it for both us and the customer, as Paddy described, we'll be conservative in terms of including any projected revenue from large deals until we're very comfortable that things are on the right track, we're growing and we have good visibility into that growth. So I would expect that you would continue to see us be conservative as it relates to any large deals reflected in our forecast.
Operator:
Our next question comes from the line of James Fish with Piper Sandler.
James Edward Fish:
You keep using the word conservative here. But on the guide side, we haven't seen this level of second half step-up in some time, really going back to the pandemic. And you guys deserve credit here doing $32 million of net new organic ARR. But can you just walk us through the linearity you are seeing and what you're expecting from some of the newer solutions in the second half to raise the guide by this much? And any of the other moving parts that help you bridge this kind of larger-than-normal step-up here? Because if I look at this and say you book similar to slightly better net new ARR, in the sort of $30 million to $35 million range, over the next 2 quarters, it really doesn't leave much wiggle room based on how you guys are defining ARR versus revenue now.
W. Matthew Steinfort:
I think, Jim, it's a good question. Recall in the last quarter, we didn't raise guidance. We beat Q1, but we didn't raise the guide for Q2, and we did that intentionally because the market had changed pretty dramatically and we just didn't know what was going to happen from a macro standpoint. We've now got a full quarter under our belt on that front. We feel good about the visibility we have with the core customers. We've got a bit of the beat from the first quarter and then the beat in the second quarter to pass through. But as I said, we have enough levers at the moment that we're confident in. We've got the revenue from new customers, the month 1 to month 12, that's doing very well, and that's relatively stable and predictable. We're seeing increased volume, increased conversion and better customers in that cohort, and those are fairly durable improvements that we've made. So we're really confident in that. We've got that migration motion that we've turned up, the 70-something migrations during the quarter that Paddy talked about. That's a very new motion for us, but we clearly have a pipeline of those, because those aren't things where somebody comes in one day and you turn on a migration; you have to be talking to the customer for a period of time. So we're managing a pipeline around that. We also have very good visibility into our AI pipeline and are getting increasing traction there. So we've got enough things going that give us confidence in our ability to deliver on that. And as I said in the answer to the prior question, we haven't fully reflected the large deal potential in the guide, and that certainly gives us upside potential beyond what we're talking about. 
So we feel good that we're confident in the base, confident enough to raise the guide and that there's still other things we can be doing and progress we could be making over the balance of this year to give us further room.
James Edward Fish:
Got it. And then, Paddy, maybe for you, can you talk about what you're seeing on sort of the GPU pricing dynamic as it seemed like across the space, pricing came down a little bit and how you're thinking about the ability to repurpose any GPUs that kind of migrate from customer to customer or what you're seeing in terms of utilization at this point across the GPU side?
Padmanabhan T. Srinivasan:
Okay. Thank you, Jim. Utilization is very robust. We are running very lean on our GPU fleets, regardless of the generation of GPUs we are talking about. As we become more and more weighted toward the inferencing side, it gives us a lot of degrees of freedom in terms of how we allocate the machines. And typically, what we are seeing with our inferencing customers is, yes, they do care about the generation of GPUs, but they care more about the price performance rather than just the raw throughput of any given generation of technology. So let's say you have 100 units of GPU capacity on the current generation. If we can deliver the same price performance with 90 units of GPU on the next generation, the customer really doesn't care, as long as it's in the same family of GPUs and they don't have to reengineer or do anything. So we are getting to a point where it's more about the price performance rather than the price alone or the performance alone. That gives us a lot of degrees of freedom in terms of how we allocate each family of GPUs across our inference workload customers. And I think this is going to get even more important as we start scaling up many of our customers across geographies and start doing this in multiple data centers. So a lot of new things to be figured out there, but the pricing dynamics in training workloads are quite a bit different from the ones that we are experiencing in the stack that is predominantly driving inferencing.
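[Editor's note: The price-performance tradeoff Srinivasan describes can be sketched with a simple calculation. This is an illustrative model only; all unit counts, throughput figures, and hourly costs below are hypothetical, not DigitalOcean pricing.]

```python
# Illustrative price-performance comparison between two hypothetical GPU
# generations. The point: fewer next-generation units can match the
# current generation's throughput per dollar, so workloads can be
# reallocated across GPU families without the customer reengineering.

def price_performance(throughput_units: float, hourly_cost: float) -> float:
    """Throughput delivered per dollar spent per hour."""
    return throughput_units / hourly_cost

# Hypothetical fleets: 100 current-gen units vs. 90 next-gen units.
# Next-gen units are assumed 30% faster and 30% costlier per unit.
current_gen = price_performance(throughput_units=100 * 1.0, hourly_cost=100 * 2.00)
next_gen = price_performance(throughput_units=90 * 1.3, hourly_cost=90 * 2.60)

print(current_gen)  # 0.5 throughput units per dollar-hour
print(next_gen)     # same price performance from 10 fewer units
```

Under these made-up numbers, both fleets deliver identical throughput per dollar, which is the indifference condition described in the answer.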
Operator:
We have time for one more question, and that question comes from the line of Brad Reback with Stifel.
Brad Robert Reback:
Matt, as we think about gross margin for the back half of the year as the revenue mix maybe shifts a little bit and you continue to invest in the CapEx. How should we think about the trajectory? And then heading into next year as you lap the change in useful life, what type of impact should we expect then?
W. Matthew Steinfort:
We expect gross margins to be relatively consistent at current levels over the balance of this year. But again, as you said, the AI business is growing fast, but it's still a small part of the business, so it's not going to have a material impact on gross margins. If you roll that out to next year, clearly we're not at the point of being ready to give guidance, but we would expect it to be a modest headwind to gross margins; the vast majority of our business is still going to be at the same high margins that we have. And we continue to drive efficiencies in the core business: bandwidth optimization, the longer-term data center optimization strategy that we have. So we're confident that we can maintain healthy gross margins in the realm that we have right now. And if AI becomes a much, much bigger portion of our business, you'll clearly have visibility into that as we do, and at that point, you would see a little bit of margin pressure. But at this point, we expect gross margin to stay right around where it is through the balance of the year.
Operator:
Your next question comes from the line of Mark Zhang with Citi.
Mark Zhang:
Maybe just want to dig a little bit more into the RPO performance, very nice to see. But can you give us a sense of maybe the characteristics here. What are the average deal sizes, contract durations? And I just wanted to confirm that AI was the leading contributor here? Or you saw good contribution from Core Cloud as well?
W. Matthew Steinfort:
I'll start with the last part and work in reverse. The increase in RPO was from both core cloud and AI, so it wasn't just AI -- clearly, there are some AI deals in there. And you can see the average duration -- I might be quoting Q1, so I apologize if it's slightly off, but it's about 19 months. So the average length of a deal is, call it, 2 years on the outside and sometimes 1 year; somewhere between 1 and 2 years is typical for us, because this is a relatively new motion for us. And it's great that we're getting customers that are used to, and value, the ability to do straight consumption with us to make commitments for a minimum level of revenue over some period of time. That's something that's very encouraging, and it speaks to the product innovation and the improvements we made in the core cloud, and to customers' confidence in our ability to continue to meet their needs. Paddy, if you want to add something to that?
Padmanabhan T. Srinivasan:
No, I think you nailed it, Matt. Yes, it is definitely a combination of both our core cloud as well as AI. So this is not just reflective of one giant, huge deal or anything like that.
Mark Zhang:
Got it. And then just maybe a quick follow-up. Just on capital allocation. It seems like you guys have been stepping up on share repurchases since, I guess, end of last year. But now with the authorization going down to about $3 million, what's sort of the thought process around just capital allocation going forward?
W. Matthew Steinfort:
Yes. On capital allocation, we actually reduced the amount of repurchases that we've been doing over the last 2 years. We did almost $500 million in 2023, and then across 2024 and into 2025, it was only $140 million. Our primary objective at the moment, and we articulated this at Investor Day, is organic growth and investing to drive organic growth. But secondly, and as important, we're committed to making sure that we've taken care of the balance sheet and addressed the outstanding convert. We've said that we're going to do that by the end of this year, and we started that process with our $800 million bank facility, $500 million of which is a term loan. So we're dialing back the share repurchases just so we can make sure that we take care of those first 2 objectives. Once we do -- the first will be ongoing, but once we've taken care of the outstanding convert -- we'll go back to, let's say, a reasonable level of share repurchases targeted at offsetting dilution. So priority 1 is organic growth, priority 2 is take care of the convert, and priority 3 is use the repurchases to offset dilution. And right now, priorities 1 and 2 are the bigger focus for the next quarter or so.
Operator:
Your next question comes from the line of Thomas Blakey with Cantor.
Thomas Blakey:
Congratulations on the results. I had a point of clarification first to -- I think it was Jason Ader's question earlier. Matt, did you say that the core cloud accelerated in 2Q? And then from a question perspective, I know the core AI is organic now, growing over 100%. What kind of derivative impact did it have to NDR, if any, Paddy or Matt? Just you would think there'd be some kind of like flow through of these customers buying more services on the platform. And I would just be curious to see what kind of impact that had on that metric.
W. Matthew Steinfort:
On the second part of your question: a lot of the AI customers that are coming to us are new customers, particularly on the infrastructure side of AI. So they're not yet buying a tremendous amount of products on the core cloud side. And even if they did, they haven't been in the cohort long enough to count towards NDR. So there's basically not much impact from that -- that's the future benefit, which I think you're appropriately pointing out. And I'm sorry, can you repeat the first part of your question?
Thomas Blakey:
Yes. I think you said earlier on the call to a question that core cloud kind of excluding AI/ML accelerated. And I just wanted to make sure I heard that correctly.
W. Matthew Steinfort:
Yes. Yes. The year-over-year growth rates in the core cloud continue to improve. Again, when you look at a metric like NDR, it's a function of the change in revenue last year compared to the change in revenue this year, so it's got a lot of lagging components to it. In terms of incremental ARR and overall ARR growth, the core business continues to accelerate.
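[Editor's note: The "lagging" behavior of NDR that Steinfort describes follows from how the metric is defined. A minimal sketch, using the standard cohort-based SaaS definition rather than DigitalOcean's exact methodology, with hypothetical revenue figures:]

```python
# Net dollar retention (NDR) compares revenue today from the cohort of
# customers that existed a year ago against that same cohort's revenue a
# year ago. New customers are excluded by construction, which is why
# recently acquired AI customers do not yet move the metric.

def net_dollar_retention(cohort_revenue_year_ago: float,
                         cohort_revenue_now: float) -> float:
    """NDR as a fraction: expansion net of contraction and churn."""
    return cohort_revenue_now / cohort_revenue_year_ago

# Hypothetical cohort: $100M a year ago, $99M from the same customers today.
ndr = net_dollar_retention(100.0, 99.0)
print(f"{ndr:.0%}")  # 99%
```

Because the denominator is fixed a year in the past, growth in newer cohorts only shows up in NDR once those cohorts age into the calculation.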
Operator:
Your next question comes from the line of Wamsi Mohan with Bank of America.
Wamsi Mohan:
Yes. I guess, firstly, on your AI customers, are you seeing higher volatility or churn in that customer base? And just to clarify, is the penetration of these customers, how would you categorize that between maybe learners, builders, scalers in your traditional way of thinking about the customers? Where are these in their journey? And any thoughts around graduation rates on these customers?
Padmanabhan T. Srinivasan:
Yes. Great question, Wamsi. It's good to hear from you. It's a completely different customer acquisition motion, so we don't think of them as testers, learners, builders, scalers, because they typically don't go through that journey on our platform. In the initial stages, a lot of these customers were very early-stage start-ups. But as we see a lot of traction on the inferencing side, these customers, in their own evolution or progression, have crossed some of the [ chasms ] in terms of both funding as well as finding product-market fit and customer traction. They're coming to us with inferencing needs that are scaling, which by definition means that they have found product-market fit and now have a captive audience that is willing to pay for their inferencing needs. There was a lot of the test-and-leave kind of phenomenon in fine-tuning and on the training side last year. But now, as we have started flipping more and more towards the inferencing side, these customers come, they stay, they expand, and they start leveraging different parts of the stack I described in my diagram. So it's a very different life cycle that we're seeing on this side.
Wamsi Mohan:
Okay. Great. And if I could follow up quickly with Matt. On the growth CapEx side, any incremental thoughts over here? I know you said organic investments and driving organic growth is sort of highest priority. So relative to your comments that you made last quarter, how should we be thinking about the growth CapEx profile over the next few quarters or into next year?
W. Matthew Steinfort:
Thanks, Wamsi. Yes, I think a couple of things. One, I would point to the fact that we've increased the free cash flow margin guidance, and we feel good about that relative to the growth rates that we're articulating. And what we said last quarter, and will say again, is that if we see the opportunity to accelerate growth beyond the 18% to 20% by 2027 that we communicated at Investor Day, we would certainly do that, and we have a lot of tools in our toolkit to do it in a capital-efficient and cash-flow-efficient way. So we remain very confident that we can grow revenue while maintaining attractive free cash flow margins.
Operator:
And ladies and gentlemen, that does conclude our question-and-answer session, and it does conclude today's conference call. Thank you for your participation, and you may now disconnect.
