Flex Logix to Speak on AI Inference at the Linley Spring Processor Conference and Computer Vision Summit

Flex Logix® Technologies, Inc., supplier of fast and efficient edge AI inference accelerators and the leading supplier of eFPGA IP, announced that it will be speaking at two key industry shows in April: the Linley Spring Processor Conference on April 20-21 and the Computer Vision Summit on April 27. The talks will focus on the company’s InferX™ AI inference accelerator, production boards and software solutions, which deliver highly efficient AI inference acceleration for advanced edge AI workloads such as YOLOv5.

Linley Spring Processor Conference Presentation 1:

Presentation title: Meeting the Real Challenges of AI
Track: Session 1: Edge-AI Design
Speaker: Randy Allen, Vice President of Software for Flex Logix
Abstract: Machine Learning was first described in its current form in 1952. Its recent re-emergence is not the result of technical breakthroughs, but of newly available computation power. The ubiquity of ML, however, will be determined by the number of computational cycles we can productively apply subject to the constraints of latency, power, area, and cost. That has proven to be a difficult challenge. This talk will discuss approaches to creating parallel heterogeneous processing systems that can meet the challenge.
When: Wednesday, April 20th
Location: Hyatt Regency Hotel, Santa Clara
Time: 10:20am-12:20pm
Linley Spring Processor Conference Presentation 2:

Presentation title: High-Efficiency Edge Vision Processing Using Dynamically Reconfigurable TPU Technology
Track: Session 5: Edge AI Silicon
Speaker: Cheng Wang, CTO and Co-Founder of Flex Logix
Abstract: To achieve high accuracy, edge computer vision requires teraops of processing to be executed in fractions of a second. Additionally, edge systems are constrained in terms of power and cost. This talk will present and demonstrate the novel dynamic TPU array architecture of Flex Logix’s InferX X1 accelerators and contrast it to current GPU, TPU and other approaches to delivering the teraops performance required by edge vision inferencing. We will compare latency, throughput, memory utilization, power dissipation and overall solution cost. We’ll also show how existing trained models can be easily ported to run on the InferX X1 accelerator.
When: Thursday, April 21st
Location: Hyatt Regency Hotel, Santa Clara
Time: 1:05pm-2:45pm
Computer Vision Summit Presentation 1:

Presentation title: The Evolving Silicon Foundation for Edge AI Processing
Speaker: Sam Fuller, Head of AI Inference Product Management for Flex Logix
Abstract: To achieve high accuracy, edge AI requires teraops of processing to be executed in fractions of a second. Additionally, edge systems are constrained in terms of power and cost. This talk will present and demonstrate the novel dynamic TPU array architecture of Flex Logix’s InferX X1 accelerators and contrast it to current GPU, TPU and other approaches to delivering the teraops computing required by edge vision inferencing. We will compare latency, throughput, memory utilization, power dissipation and overall solution cost. We’ll also show how existing trained models can be easily ported to run on the InferX X1 accelerator.
When: Wednesday, April 27th
Location: San Jose Marriott
Time: 10:00am
Computer Vision Summit Presentation 2:

Panel Discussion: Developing Scalable AI Solutions
Speaker: Sam Fuller, Head of AI Inference Product Management for Flex Logix
Abstract: In this session, panelists will discuss the challenges of rolling out computer vision (CV) applications that deliver real impact.
When: Wednesday, April 27th
Location: San Jose Marriott
Time: 12:00pm

About Flex Logix

Flex Logix is a reconfigurable computing company providing AI inference and eFPGA solutions based on software, systems and silicon. Its InferX X1 is the industry’s most efficient edge AI inference accelerator, designed to bring AI to the masses in high-volume applications.