Broadcom Extends Leadership in Custom Accelerators and Merchant Networking Solutions for AI Infrastructure

Cloud and data center providers are building AI systems at a pace that requires a new level of performance, scale and efficiency. Consumer AI use cases are increasingly driving the need for the lowest-power custom AI accelerators, while open, standards-based merchant networking solutions scale large AI clusters. Broadcom Inc. is evolving a broad portfolio of technologies to extend its leadership in enabling next-generation AI infrastructure. This includes foundational technologies and advanced packaging capabilities aimed at building the highest-performance, lowest-power custom AI accelerators. In addition, its complete set of end-to-end merchant silicon connectivity solutions, ranging from best-in-class Ethernet and PCIe to optical interconnects with co-packaging capabilities, drives the scale-up, scale-out and front-end networks of AI clusters.

“For providers contending with the ever-increasing demand for generative AI clusters, the key to success will be a network-centric platform, based on open solutions, that scales at the lowest power,” said Charlie Kawwas, Ph.D., president of Broadcom’s Semiconductor Solutions Group. “The innovations we’ve introduced extend our leadership across our custom AI accelerator, Ethernet, PCI Express and optical interconnect portfolios. Built on our world-class foundational technologies like SerDes and DSP, they provide the best custom XPUs and merchant networking solutions enabling AI infrastructure.”
Broadcom’s latest AI infrastructure innovations include:

  • Delivery of its industry-first 51.2T Bailly CPO Ethernet switch. Broadcom Bailly delivers unprecedented bandwidth density and economic efficiency, addressing connectivity challenges in data center switching and computing.
  • An expanded portfolio of proven optical interconnect solutions supporting 200G/lane for AI and ML applications. Broadcom’s industry-leading VCSEL, EML and CW laser technologies enable high-speed interconnects for front-end and back-end networks of large-scale generative AI compute clusters.
  • The industry’s first end-to-end PCIe connectivity portfolio. Broadcom’s new PCIe Gen5/Gen6 retimers, together with its PEX series switches, offer the lowest-power solutions and unparalleled efficiency for interconnecting CPUs, accelerators, NICs and storage devices.
  • The Trident 5-X12 switch chip, which integrates NetGNT neural-network technology, a pioneering advancement in switching silicon that enables it to identify traffic patterns typical of AI/ML workloads and effectively avert congestion.
  • A vision for AI acceleration and democratization, outlined at the OCP Global Summit 2023, spanning ubiquitous AI connectivity, innovative silicon, and open standards.

SOURCE: GlobeNewswire