
Domino Data Lab Extends Enterprise MLOps to the Edge with New NVIDIA Fleet Command Support


Domino Data Lab, provider of the leading Enterprise MLOps platform trusted by over 20% of the Fortune 100, announced new integrations with NVIDIA that extend fast and flexible deployment of GPU-accelerated machine learning models across modern tech stacks – from data centers to dash cams.

Domino is the first MLOps platform integrated with NVIDIA Fleet Command™, enabling seamless deployment of models across edge devices; the integration builds on Domino’s recent qualification for the NVIDIA AI Enterprise software suite. A new curated MLOps trial, available through NVIDIA LaunchPad, fast-tracks AI projects from prototype to production, while new support for on-demand Message Passing Interface (MPI) clusters and NVIDIA NGC™ streamlines access to GPU-accelerated tooling and infrastructure, furthering Domino’s market-leading openness.
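
For readers unfamiliar with MPI-style workloads, the sketch below shows the kind of distributed computation that on-demand MPI clusters typically run. It uses the generic mpi4py library and is purely illustrative; it is not Domino-specific code, and the data shards and aggregation step are hypothetical.

```python
# Minimal sketch of an MPI-style distributed workload (generic mpi4py usage,
# not Domino-specific API code; data and aggregation step are illustrative).
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()   # this worker's ID within the cluster
size = comm.Get_size()   # total number of workers

# Each worker computes a partial result on its own shard of the data...
local_shard = np.arange(rank * 100, (rank + 1) * 100, dtype="float64")
local_sum = local_shard.sum()

# ...and the partial results are aggregated on rank 0.
total = comm.reduce(local_sum, op=MPI.SUM, root=0)

if rank == 0:
    print(f"Aggregated result across {size} ranks: {total}")
```

Launched with, for example, `mpirun -n 4 python example.py`, each rank runs the same script and exchanges results through MPI collectives, which is the execution model many GPU-accelerated distributed training tools build on.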

“Streamlined deployment and management of GPU-accelerated models bring a true competitive advantage,” said Thomas Robinson, VP of Strategic Partnerships & Corporate Development at Domino. “We led the charge as the first Enterprise MLOps platform to integrate with NVIDIA AI Enterprise, NVIDIA Fleet Command, and NVIDIA LaunchPad. We are excited to help more customers develop innovative use cases to solve the world’s most important challenges.”

Edge Device Support Streamlines Model Deployment across Modern Tech Stacks through MLOps
Domino’s new support for the Fleet Command cloud service for edge AI management further reduces infrastructure friction and extends key enterprise MLOps benefits — collaboration, reproducibility, and model lifecycle management — to NVIDIA-Certified Systems™ in retail stores, warehouses, hospitals, and city street intersections.

Available now, this integration relieves data scientists of IT and DevOps burdens as they build, deploy, monitor, and manage GPU-accelerated models at the edge. Data scientists can quickly iterate on models using Domino’s Enterprise MLOps platform, then use Fleet Command’s turnkey edge AI management to streamline deployments, manage over-the-air updates, and monitor models with a minimal infrastructure footprint.

Accelerated Proofs of Concept with the First MLOps Platform on NVIDIA LaunchPad
Further deepening Domino’s collaboration with NVIDIA to accelerate model-driven business, the company’s Enterprise MLOps platform is now the first available through the NVIDIA LaunchPad program. LaunchPad gives enterprises immediate, short-term access to NVIDIA AI Enterprise running on VMware vSphere with Tanzu on private accelerated compute infrastructure, complete with curated labs.