
Aetina SuperEdge Powered by NVIDIA A2 GPUs Completes NVIDIA Certification to Deliver High Performance at the Edge


Aetina's SuperEdge AIS-D422-A1, equipped with high-performance GPUs, can serve as an AI training platform in fields including smart cities, smart factories, and smart retail. Powered by the NVIDIA A2 Tensor Core GPU, the platform has been certified by NVIDIA as an AI inference platform that delivers superior performance at the edge. The NVIDIA-Certified program includes an extensive suite of tests to validate the best system configuration for performance, security, and manageability, and the SuperEdge AIS-D422-A1 has passed in the industrial edge category.

The SuperEdge AIS-D422-A1 brings various advantages to its users. It has a rich I/O interface to support a variety of peripherals and can be integrated into many types of AI applications. Besides running AI training and inference tasks, the AIS-D422-A1 lets users monitor multiple edge AI devices with Aetina's EdgeEye remote monitoring software; after installing the software on the AIS-D422-A1 and the edge AI devices, users can view the system status of those devices through the EdgeEye dashboard.


The AIS-D422-A1 can be paired with the NVIDIA A2 Tensor Core GPU to run NVIDIA's AI tools and services with better performance.

NVIDIA has developed convenient tools and services to help AI developers and system integrators with their projects. Developers and integrators can run different inference workloads with NVIDIA Triton™ Inference Server, and the SuperEdge AIS-D422-A1, powered by the NVIDIA A2 Tensor Core GPU, accelerates those workloads.
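For illustration, the sketch below shows how a client application might submit an inference request to a Triton Inference Server running on a system such as the AIS-D422-A1. The server address, model name, and tensor names and shapes are hypothetical placeholders, not details from this article; the sketch assumes the tritonclient Python package is installed and a model is already loaded in the server's model repository.

```python
# Minimal sketch: querying a Triton Inference Server over HTTP.
# Assumes `pip install tritonclient[http] numpy`, a Triton server reachable at
# localhost:8000, and a model named "image_classifier" with a 1x3x224x224 FP32
# input called "input" and an output called "logits" -- all hypothetical names.
import numpy as np
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")

# Build a dummy batch containing one image-sized tensor.
data = np.random.rand(1, 3, 224, 224).astype(np.float32)
infer_input = httpclient.InferInput("input", list(data.shape), "FP32")
infer_input.set_data_from_numpy(data)

# Request the output tensor and run inference on the GPU-backed server.
infer_output = httpclient.InferRequestedOutput("logits")
response = client.infer(
    model_name="image_classifier",
    inputs=[infer_input],
    outputs=[infer_output],
)

print(response.as_numpy("logits").shape)
```

In practice the input and output names, data types, and shapes must match the model configuration stored in Triton's model repository.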

NVIDIA TAO, a framework that lets application developers create custom, production-ready models in hours rather than months, without AI expertise or large training sets, simplifies and accelerates the creation of enterprise AI applications and services. The optimized models from TAO can be integrated into applications and deployed via NVIDIA Fleet Command, a cloud service that securely deploys and manages AI applications across distributed edge infrastructure.

To help users complete their AI tasks quickly without spending time on command-line work, Aetina has developed the Aetina Triton Inference Server (ATIS) and the Aetina TAO Utility (ATU), GUI tools built on NVIDIA Triton™ Inference Server and NVIDIA TAO. The tools, part of Aetina's Pro-AI service, make AI training, AI inference, and proof-of-concept (PoC) tasks much smoother for application developers.