CoreWeave Expands AI-Native Cloud Platform to Power Production-Scale AI

CoreWeave announced a major expansion of its AI-native cloud platform at NVIDIA’s GTC conference, introducing new capabilities to accelerate the development and deployment of large-scale AI systems. Central to the update is the availability of NVIDIA HGX B300 infrastructure, alongside improvements to Weights & Biases that simplify reinforcement learning (RL) and agent development.

The expansion reflects an industry-wide shift from large-scale model training toward real-world AI deployment, as companies increasingly need to run and refine AI systems in production. That change raises demand for infrastructure that supports continuous learning, faster iteration, and scalable performance. CoreWeave’s latest updates aim to bridge model training and operational deployment, a capability especially relevant for financial services, robotics, and industrial settings.

“The next phase of AI is being defined by how efficiently AI systems can run and scale in production,” said Michael Intrator, CEO, co-founder, and chairman at CoreWeave. “By pairing the massive compute power of NVIDIA’s latest hardware with CoreWeave’s cloud services, we’re enabling enterprises to build and refine autonomous agents faster and more reliably than ever before. This expansion reinforces our position as the essential partner for any organization navigating the complexities of frontier-scale AI.”

The NVIDIA HGX B300 platform, part of the Blackwell Ultra architecture, boosts performance for AI reasoning and inference. It offers expanded memory to support large models and long-running workloads, while enhanced bandwidth and liquid cooling keep operations consistent and high-performing. These capabilities are important for organizations building advanced AI systems that require both speed and reliability.

CoreWeave is also advancing its roadmap with plans to deploy NVIDIA’s next-generation Vera Rubin NVL72 platform and Vera CPU rack later in 2026, further strengthening its ability to support complex AI applications.

In parallel, updates to Weights & Biases introduce environment-free reinforcement learning, allowing AI agents to learn directly from real-world interactions without relying on simulated environments. This approach reduces training costs and complexity while improving performance outcomes. Additionally, new production evaluation tools enable continuous feedback loops, helping teams identify issues, refine models, and accelerate development cycles using real-time data.

“The era of AI is shifting from training models to operating agents at scale,” said Jensen Huang, founder and CEO, NVIDIA. “CoreWeave is a world-class new generation AI-Native cloud. We are thrilled to partner with them to build out NVIDIA computing infrastructure to power the world’s AI.”

With these advancements, CoreWeave is positioning itself to support the next generation of AI applications, enabling enterprises to build, deploy, and optimize intelligent systems with greater efficiency and scale.