Intel Innovation 2023: Empowering Developers to Bring AI Everywhere

At its third annual Intel Innovation event, Intel unveiled an array of technologies to bring artificial intelligence everywhere and make it more accessible across all workloads, from client and edge to network and cloud.

“AI represents a generational shift, giving rise to a new era of global expansion where computing is even more foundational to a better future for all,” said Intel CEO Pat Gelsinger. “For developers, this creates massive societal and business opportunities to push the boundaries of what’s possible, to create solutions to the world’s biggest challenges and to improve the life of every person on the planet.”

More: Intel Innovation 2023 (Press Kit) | Innovation 2023: Bringing AI Everywhere (More News)

In the keynote that opened the developer-focused event, Gelsinger showed how Intel is bringing AI capabilities across its hardware products and making them accessible through open, multi-architecture software solutions. He also highlighted how AI is helping to drive the “Siliconomy,” a “growing economy enabled by the magic of silicon and software.” Silicon feeds a $574 billion industry that in turn powers a global tech economy worth almost $8 trillion.

New Advances in Silicon, Packaging and Multi-Chiplet Solutions

The work begins with silicon innovation. Intel’s five-nodes-in-four-years process development program is progressing well, Gelsinger said, with Intel 7 already in high-volume manufacturing, Intel 4 manufacturing-ready and Intel 3 on track for the end of this year.

Gelsinger also showed an Intel 20A wafer with the first test chips for Intel’s Arrow Lake processor, which is destined for the client computing market in 2024. Intel 20A will be the first process node to include PowerVia, Intel’s backside power delivery technology, and the new gate-all-around transistor design called RibbonFET. Intel 18A, which also leverages PowerVia and RibbonFET, remains on track to be manufacturing-ready in the second half of 2024.

Another way Intel presses Moore’s Law forward is with new materials and new packaging technologies, like glass substrates – a breakthrough Intel announced this week. When introduced later this decade, glass substrates will allow for continued scaling of transistors on a package to help meet the need for data-intensive, high-performance workloads like AI and will keep Moore’s Law going well beyond 2030.

Intel also displayed a test chip package built with Universal Chiplet Interconnect Express (UCIe). The next wave of Moore’s Law will arrive with multi-chiplet packages, Gelsinger said, and it will come sooner if open standards can reduce the friction of integrating IP. Introduced last year, the UCIe standard allows chiplets from different vendors to work together, enabling new designs for scaling diverse AI workloads. The open specification is supported by more than 120 companies.

The test chip combined an Intel UCIe IP chiplet fabricated on Intel 3 with a Synopsys UCIe IP chiplet fabricated on TSMC’s N3E process node. The chiplets are connected using Intel’s embedded multi-die interconnect bridge (EMIB) advanced packaging technology. The demonstration highlights the commitment of TSMC, Synopsys and Intel Foundry Services to supporting an open, standards-based chiplet ecosystem built on UCIe.

Increasing Performance and Expanding AI Everywhere

Gelsinger spotlighted the range of AI technology available to developers across Intel platforms and how that range will dramatically increase over the coming year.

Recent MLPerf AI inference performance results further reinforce Intel’s commitment to addressing every phase of the AI continuum, including the largest, most challenging generative AI and large language models. The results also spotlight the Intel Gaudi2 accelerator as the only viable alternative on the market to Nvidia’s H100 for AI compute needs. Gelsinger announced that a large AI supercomputer will be built entirely on Intel Xeon processors and 4,000 Intel Gaudi2 AI hardware accelerators, with Stability AI as the anchor customer.
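For developers curious what targeting Gaudi hardware looks like in practice, the usual route from existing PyTorch code is the Habana (Intel Gaudi) PyTorch bridge, which exposes the accelerator as an “hpu” device. The sketch below is illustrative only and is not from the announcement; the model and tensor sizes are placeholder assumptions.

```python
# Minimal sketch: running PyTorch inference on an Intel Gaudi ("hpu") device.
# Assumes a machine with the Intel Gaudi / Habana software stack installed;
# the model and input below are placeholders, not from the announcement.
import torch
import habana_frameworks.torch.core as htcore  # Gaudi PyTorch bridge

device = torch.device("hpu")  # Gaudi accelerators appear as "hpu" in PyTorch

model = torch.nn.Linear(1024, 1024).eval().to(device)  # placeholder model
inputs = torch.randn(8, 1024).to(device)               # placeholder batch

with torch.no_grad():
    outputs = model(inputs)
    htcore.mark_step()  # flush the lazily accumulated graph for execution

print(outputs.shape)
```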

Zhou Jingren, chief technology officer of Alibaba Cloud, explained how Alibaba applies 4th Gen Intel Xeon processors with built-in AI acceleration to “our generative AI and large language model, Alibaba Cloud’s Tongyi Foundation Models.” Intel’s technology, he said, results in “remarkable improvements in response times, averaging a 3x acceleration.”
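The “built-in AI acceleration” in 4th Gen Xeon refers primarily to Intel Advanced Matrix Extensions (AMX). As a minimal, hedged sketch (the model and shapes are placeholder assumptions, not Alibaba’s code), a developer would typically reach AMX from PyTorch through Intel Extension for PyTorch and bfloat16 inference:

```python
# Minimal sketch: tapping 4th Gen Xeon built-in AI acceleration (Intel AMX)
# from PyTorch via Intel Extension for PyTorch (IPEX) and bfloat16 inference.
# The model here is a placeholder stand-in for a real workload.
import torch
import intel_extension_for_pytorch as ipex

model = torch.nn.Sequential(
    torch.nn.Linear(1024, 4096),
    torch.nn.GELU(),
    torch.nn.Linear(4096, 1024),
).eval()

# ipex.optimize applies CPU-specific weight and graph optimizations;
# dtype=torch.bfloat16 lets AMX tile instructions accelerate the matmuls.
model = ipex.optimize(model, dtype=torch.bfloat16)

inputs = torch.randn(8, 1024)
with torch.no_grad(), torch.autocast(device_type="cpu", dtype=torch.bfloat16):
    outputs = model(inputs)

print(outputs.dtype, outputs.shape)
```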

SOURCE: Businesswire