Rafay Systems, the leading Platform-as-a-Service (PaaS) provider for modern infrastructure and accelerated computing, announced that it has extended its enterprise PaaS to support graphics processing unit- (GPU-) based workloads, making compute resources for AI instantly consumable by developers and data scientists with the enterprise-grade guardrails Rafay customers leverage today. The company also launched a new AI Suite with standards-based pipelines for machine learning operations (MLOps) and large language model operations (LLMOps), helping enterprise platform teams accelerate the development and deployment of AI applications.
The AI landscape has rapidly transformed: AI and accelerated computing have evolved from a focus area for small, specialist teams into a discipline that permeates every aspect of application development and delivery for all businesses. With the global GPU-as-a-Service market expected to reach $17.2 billion by 2030, organizations are actively seeking scalable solutions that quickly and easily connect their data scientists and developers to expensive, short-supply accelerated computing infrastructure.
Rafay’s enterprise customers have long leveraged the company’s PaaS for modern infrastructure to rapidly give developers access to central processing unit- (CPU-) based infrastructure on-premises and in all the major public clouds, with guardrails included. The same issues that had to be addressed for CPU-based workloads — environment standardization, self-service consumption of compute, secure use of multi-tenant environments, cost optimization, zero-trust connectivity enforcement and auditability — must now be addressed for GPU-based workloads. Aspects such as cost are even more critical to control in the new age of AI.
In addition to applying its existing capabilities to GPU-based workloads, Rafay has extended its enterprise PaaS with features that specifically support GPU workloads and infrastructure. Rafay makes AI-focused compute resources instantly consumable by developers and data scientists, enabling customers to empower every developer and data scientist to accelerate AI-driven innovation — and to do so within the guidelines and policies set forth by the enterprise.
“I am immensely proud of Team Rafay for having extended our enterprise PaaS offering to now support GPU-based workloads in data centers and in all major public clouds,” said Haseeb Budhani, co-founder and CEO of Rafay Systems. “Beyond the multi-cluster matchmaking capabilities and other powerful PaaS features that deliver a self-service compute consumption experience for developers and data scientists, platform teams can also make users more productive with turnkey MLOps and LLMOps capabilities available on the Rafay platform. This announcement makes Rafay a must-have partner for enterprises, as well as GPU and sovereign cloud operators, looking to speed up modern application delivery.”
SOURCE: Businesswire