Vultr, the world’s largest privately held cloud computing company, and Domino Data Lab, provider of the leading Enterprise MLOps platform trusted by over 20% of the Fortune 100, announced the integration of Domino Nexus with Vultr Kubernetes Engine. The integration helps businesses gain a competitive advantage in the era of generative AI by accelerating innovation while balancing compute cost, performance, and availability, with seamless bursting of cutting-edge AI workloads to GPU-accelerated compute clusters across cloud and on-premises environments.
This announcement delivers on Vultr and Domino’s recently announced partnership, which gives enterprise data science teams unparalleled access to state-of-the-art NVIDIA-powered cloud infrastructure on Vultr, including NVIDIA A100 and H100 Tensor Core GPUs, to train, deploy, and manage their own deep learning models with speed, flexibility, and affordability. Vultr and Domino are both members of the NVIDIA Partner Network program.
“Customers seeking AI-driven competitive advantage must grapple with staggering GPU demand and cost pressures,” said Nick Elprin, CEO and co-founder at Domino Data Lab. “Our integration with Vultr provides enterprises on-demand compute to keep developing cutting-edge AI without budget overspend.”
The new joint offering pairs Vultr Kubernetes Engine (VKE) with Domino’s hybrid- and multi-cloud architecture, Nexus, to break down data science silos and open up flexible compute options with cost, performance, and scale in mind. Built around a commitment to openness, flexibility, and open standards, it further democratizes AI innovation for teams of any scale, budget, and location.
- Unified Data Science: Domino Nexus’ unified MLOps platform orchestrates governed, self-service access to common data science tooling and infrastructure across all environments, including Vultr – alleviating infrastructure capacity and data sovereignty challenges during model training.
- Flexible and Interoperable: Domino’s Kubernetes-native platform runs seamlessly on VKE, with the CNCF-certified and MACH-compliant VKE providing automated container orchestration and support for geographically redundant clusters, so users can confidently scale data science workloads across Vultr’s worldwide locations without fear of vendor lock-in or outages.
- Cost Effective and Agile: Vultr offers a variety of full and fractional NVIDIA A100 and NVIDIA H100 Tensor Core GPU configurations, giving enterprises the agility to optimize infrastructure based on AI workload demands at a significantly lower cost (a minimal scheduling sketch follows this list). Data transfer costs are minimized with Vultr’s global bandwidth pricing plan.
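To make the Kubernetes-native workflow above concrete, the sketch below shows one way a GPU-backed training job could be submitted to a VKE cluster using the official Kubernetes Python client. This is an illustrative assumption rather than part of the announcement: the container image, job name, namespace, and single-GPU request are placeholders, and in practice Domino Nexus orchestrates this kind of bursting on the user’s behalf.

```python
# Illustrative sketch only: submit a single-GPU training Job to a VKE cluster
# with the official Kubernetes Python client (pip install kubernetes).
# The image, names, and GPU count below are hypothetical placeholders.
from kubernetes import client, config


def submit_gpu_training_job() -> None:
    # Assumes a kubeconfig for the VKE cluster is already available locally.
    config.load_kube_config()

    container = client.V1Container(
        name="train",
        image="nvcr.io/nvidia/pytorch:24.01-py3",  # placeholder training image
        command=["python", "train.py"],
        resources=client.V1ResourceRequirements(
            # Standard NVIDIA device-plugin resource name; "1" requests one GPU.
            limits={"nvidia.com/gpu": "1"},
        ),
    )

    job = client.V1Job(
        api_version="batch/v1",
        kind="Job",
        metadata=client.V1ObjectMeta(name="burst-training-job"),
        spec=client.V1JobSpec(
            template=client.V1PodTemplateSpec(
                spec=client.V1PodSpec(containers=[container], restart_policy="Never")
            ),
            backoff_limit=0,
        ),
    )

    client.BatchV1Api().create_namespaced_job(namespace="default", body=job)


if __name__ == "__main__":
    submit_gpu_training_job()
```

On nodes backed by fractional GPUs, the resource name and quantity requested would depend on how the cluster’s device plugin exposes those shares, rather than a whole `nvidia.com/gpu` unit.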
SOURCE: Businesswire