Intel Innovation 2023: Accelerating the Convergence of AI and Security

During the second day of Intel Innovation 2023, Intel Chief Technology Officer Greg Lavender offered a detailed look at how Intel’s developer-first, open ecosystem philosophy is working to ensure the opportunities of artificial intelligence (AI) are accessible to all.

Developers eager to harness AI face challenges that impede widespread deployment of solutions for client and edge to data center and cloud. Intel is committed to addressing these challenges with a broad software-defined, silicon-accelerated approach that is grounded in openness, choice, trust and security. By delivering the tools that streamline development of secure AI applications and ease the investment required to maintain and scale those solutions, Intel is empowering developers to bring AI everywhere.

“The developer community is the catalyst helping industries leverage AI to meet their diverse needs – both today and into the future,” Lavender said. “AI can and should be accessible to everyone to deploy responsibly. If developers are limited in their choice of hardware and software, the range of use cases for global-scale AI adoption will be constrained, as will the societal value they are capable of delivering.”

Easing AI Deployment with Trust and Security

During the Innovation Day 2 keynote, Lavender highlighted Intel’s commitment to end-to-end security, including Intel® Transparent Supply Chain for verifying hardware and firmware integrity, and confidential computing to help protect sensitive data in memory. Intel is expanding its platform security and data integrity protections with several new tools and services, including the general availability of a new attestation service.

This service is the first in a new portfolio of security software and services called Intel® Trust Authority. It offers a unified, independent assessment of trusted execution environment integrity, policy enforcement and audit records, and it can be used anywhere Intel confidential computing is deployed, including multi-cloud, hybrid, on-premises and edge environments. Intel Trust Authority will also become an integral capability for enabling confidential AI, helping ensure the trustworthiness of the confidential computing environments in which sensitive intellectual property (IP) and data are processed in machine-learning applications, particularly inferencing on current and future generations of Intel® Xeon® processors.
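As a conceptual illustration only (not the Intel Trust Authority API), the attestation pattern described above boils down to this: a service signs a set of verified claims about a trusted execution environment, and a relying party checks that signature before trusting the claims. The token format, key handling and claim names below are simplified assumptions; real attestation tokens are typically asymmetrically signed (for example, as JWTs), not HMAC-signed with a shared key.

```python
import base64
import hashlib
import hmac
import json

# Hypothetical shared secret standing in for the attestation service's
# signing key; a real service would use an asymmetric key pair.
SERVICE_KEY = b"demo-signing-key"

def issue_token(claims: dict) -> str:
    """Simulate an attestation service signing verified claims about a TEE."""
    body = base64.urlsafe_b64encode(json.dumps(claims, sort_keys=True).encode())
    sig = hmac.new(SERVICE_KEY, body, hashlib.sha256).hexdigest()
    return body.decode() + "." + sig

def verify_token(token: str) -> dict:
    """Relying party: check the signature before trusting any claim."""
    body, sig = token.rsplit(".", 1)
    expected = hmac.new(SERVICE_KEY, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        raise ValueError("attestation token signature mismatch")
    return json.loads(base64.urlsafe_b64decode(body))

# A workload would only be handed sensitive data or IP once verification passes.
token = issue_token({"tee_type": "SGX", "policy_ok": True})
claims = verify_token(token)
```

The point of the pattern is that the verifier is independent of the infrastructure operator: the relying party trusts the attestation service's signature, not the cloud or edge host running the workload.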

AI is an engine of innovation with use cases across every industry, from healthcare and finance to e-commerce and agriculture.

“Our AI software strategy is founded on open ecosystems and open accelerated computing to deliver AI everywhere,” said Lavender. “There are endless opportunities to scale innovation and we are creating a level playing field for AI developers.”

An Open Ecosystem Facilitates Choice with Optimized Performance

Organizations around the world are using AI to accelerate scientific discovery, transform business and improve consumer services. However, the practical application of AI solutions is limited by challenges that are difficult for businesses to overcome, from a lack of in-house expertise and insufficient resources to properly manage the AI pipeline (including data preparation and modeling) to proprietary platforms that are expensive and time-consuming to maintain.

Intel is committed to driving an open ecosystem that allows for ease of deployment across multiple architectures. This includes being a founding member of the Linux Foundation’s Unified Acceleration Foundation (UXL). This cross-industry group is committed to delivering an open accelerator software ecosystem to simplify development of applications for cross-platform deployment. UXL is an evolution of the oneAPI initiative. Intel’s oneAPI programming model allows for code to be written once and deployed across multiple computing architectures, including CPUs, GPUs, FPGAs and accelerators.
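The "write once, deploy across architectures" idea behind oneAPI can be illustrated with a toy sketch. This is a conceptual analogy in Python, not oneAPI code (real oneAPI kernels are written in SYCL/C++, and the device list and dispatch helper here are invented for illustration): one kernel definition is dispatched unchanged to whichever device the runtime finds available.

```python
def saxpy(a, x, y):
    """Single-source kernel: elementwise y = a*x + y.

    In the oneAPI model, one kernel like this would run on a CPU, GPU,
    FPGA or other accelerator without being rewritten per device.
    """
    return [a * xi + yi for xi, yi in zip(x, y)]

# Stand-in for runtime device discovery; a real runtime would enumerate
# the CPUs, GPUs and accelerators actually present on the machine.
AVAILABLE_BACKENDS = ["cpu"]

def dispatch(kernel, *args, prefer=("gpu", "fpga", "cpu")):
    """Pick the most preferred available device and run the kernel there."""
    for device in prefer:
        if device in AVAILABLE_BACKENDS:
            # The same kernel source executes regardless of which
            # device was selected.
            return device, kernel(*args)
    raise RuntimeError("no supported device found")

device, result = dispatch(saxpy, 2.0, [1.0, 2.0], [10.0, 20.0])
```

The design choice the analogy captures is separation of concerns: the kernel expresses the computation once, and device selection is a runtime decision, which is what lets the same application target CPUs, GPUs, FPGAs and accelerators.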

SOURCE: Businesswire