Arrcus, the hyperscale networking software company and a leader in core, edge, and multi-cloud routing and switching infrastructure, is proud to introduce its trailblazing networking solution, Arrcus Connected Edge for AI (ACE-AI), designed to revolutionize the networking industry for AI/ML workloads.
At the heart of our digitally driven world, a monumental transformation is underway. Generative Artificial Intelligence (GenAI) has emerged as a powerful force, reshaping entire industries and redefining our digital landscape. At the epicenter of this transformative wave is the exponential growth of datacenter network traffic, driven by GenAI applications. AI/ML workloads will be increasingly distributed – at the edge, in colocation facilities, telco PoPs/datacenters, and public clouds. This growth underscores the urgent need for networks to become distributed, open, lossless, and predictable to support this revolutionary paradigm shift.
“AI networking bandwidth is going to grow over 100% Y/Y in the second half of 2023 and throughout 2024 based on the remarkable growth in vendor revenue associated with AI/ML, and this class of networking bandwidth growth is nearly three times that of traditional data center networking,” said Alan Weckel, Founder and Technology Analyst at 650 Group. “We see AI/ML as a major catalyst for the growth of data center switching over the next five years, and this traffic is likely to be distributed in nature. Arrcus’ adaptable ACE-AI platform effectively addresses these demands with its open and flexible network fabric, designed to unify distributed AI/ML workloads wherever they may reside.”
Arrcus’ innovative ACE-AI networking solution, based on ArcOS, delivers a modern, unified network fabric that optimizes GPUs and other distributed compute resources for maximum AI/ML workload performance. Ethernet, the underlying technology, is well suited to these needs, with inherent benefits in scalability, reliability, flexibility, and low latency; ACE-AI builds on Ethernet to seamlessly weave together the entire network infrastructure, spanning from the edge to the core to multi-cloud and encompassing both switching and routing. Communication Service Providers (CSPs), enterprises, and datacenter operators can now harness the potential of 5G and GenAI by pooling compute resources wherever they reside across the network to drive business outcomes.
“AI/ML workloads, Large Language Models (LLMs), and compute-intensive applications like GenAI need to be delivered in a distributed fashion to enable pooling of scarce, expensive compute resources and to ensure low latency at the point of consumption,” said Shekar Ayyar, Chairman and CEO, Arrcus. “With Arrcus ACE-AI, enterprises, CSPs, and hyperscalers can now transition from legacy, single-vendor networks to a more modern, scalable, software-defined network with lower TCO in support of their AI expansion plans.”
For next-generation datacenters, ACE-AI supports traditional CLOS and Virtualized Distributed Routing (VDR) architectures, delivering the massive scale and performance needed for lossless, predictable connectivity to GPU clusters with high resiliency, availability, and visibility. Features such as Priority Flow Control (PFC), intelligent congestion detection, and ingress buffering to prevent packet drops ensure lower Job Completion Times (JCT) and reduced tail latency. In the always-on world of GenAI, network high availability is crucial; it is supported by features like Hitless Upgrade, which reduces software maintenance upgrade times to under 20ms.
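To give a concrete sense of the CLOS (leaf-spine) scale such a fabric targets, the following is a minimal sizing sketch. The port counts and function names are hypothetical illustrations, not Arrcus product specifications:

```python
# Illustrative 2-tier leaf-spine (CLOS) fabric sizing.
# All port counts below are hypothetical, not ACE-AI specifications.

def clos_fabric_capacity(leaf_ports: int, spine_ports: int,
                         uplinks_per_leaf: int) -> dict:
    """Compute the server-facing capacity of a 2-tier leaf-spine fabric."""
    # Each leaf dedicates `uplinks_per_leaf` ports to spines;
    # the remaining ports face servers (e.g. GPU nodes).
    server_ports_per_leaf = leaf_ports - uplinks_per_leaf
    # Each spine port connects to exactly one leaf, so the spine
    # port count bounds how many leaves the fabric can hold.
    max_leaves = spine_ports
    # Oversubscription ratio: server-facing vs. uplink bandwidth per leaf
    # (1.0 means non-blocking, a common goal for lossless GPU fabrics).
    oversubscription = server_ports_per_leaf / uplinks_per_leaf
    return {
        "max_leaves": max_leaves,
        "server_ports": max_leaves * server_ports_per_leaf,
        "oversubscription": oversubscription,
    }

# Example: 64-port leaves and spines, half of each leaf's ports as uplinks.
fabric = clos_fabric_capacity(leaf_ports=64, spine_ports=64, uplinks_per_leaf=32)
print(fabric)  # {'max_leaves': 64, 'server_ports': 2048, 'oversubscription': 1.0}
```

With equal uplink and server-facing bandwidth per leaf, the fabric is non-blocking, which is what makes lossless, predictable GPU-to-GPU connectivity achievable when combined with mechanisms like PFC.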
SOURCE: Businesswire