EDJX, the pioneer in decentralized global serverless edge computing, announced a strategic partnership with Zeblok Computational to integrate the capabilities of the EDJX Platform with the Zeblok Ai-MicroCloud®, a cloud-native, turnkey MLOps platform that enables businesses to deploy Artificial Intelligence (AI) applications easily and efficiently to thousands of edge locations at scale.
The partnership will give Zeblok customers access to the EDJX Platform’s capabilities, utilizing EDJX compute, network, and storage resources. Zeblok’s Ai-MicroCloud® solves the problem of scaling at the edge, making it easy to deploy AI inference engines to edge locations. Together, the combined offering provides a digital foundation for enterprises, Cloud Service Providers, Managed Service Providers, OEMs, and ISVs to execute their AI strategies from Cloud-to-Edge across diverse use cases such as Smart Retail, Industry 4.0, Smart Cities, Smart Transportation and Logistics, and more.
EDJX provides a decentralized operating system, EdjOS (the EDJX Platform), that makes it possible for developers to build IoT, AI, and M2M applications and have the requisite computations executed as close as possible to the sources of data. EdjOS enables developers to write, test, and deploy smarter applications, data pipelines, websites, and IoT solutions on a secure, serverless, peer-to-peer network that spans the globe. The platform unifies distributed compute resources into a single fabric for the execution of IoT services, serverless functions, and related workloads. Partnering with Zeblok expands EDJX’s market reach and distribution capabilities in the AI applications marketplace.
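To make the edge-execution model concrete, the following is a minimal, hypothetical sketch of the kind of serverless function such a platform runs next to the data source. The handler name and event shape are illustrative assumptions for this sketch, not the actual EDJX API.

```python
# Hypothetical sketch: a serverless function of the kind an edge platform
# like EdjOS might execute close to an IoT data source. The handler
# signature and event format are illustrative, not the EDJX API.
import json

def handle_request(event: dict) -> dict:
    """Filter a raw sensor reading at the edge so only anomalies travel
    upstream, reducing backhaul bandwidth and round-trip latency."""
    reading = json.loads(event.get("body", "{}"))
    temperature = reading.get("temperature_c")

    # Simple edge-side rule: forward only out-of-range readings.
    if temperature is None or -40 <= temperature <= 85:
        return {"status": 204, "body": ""}  # nothing worth reporting

    return {
        "status": 200,
        "body": json.dumps({
            "alert": "temperature_out_of_range",
            "reading": reading,
        }),
    }
```

The design point is that the filtering logic runs at the edge node nearest the sensor, so the central cloud sees only the alerts rather than the full telemetry stream.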
The Zeblok Ai-MicroCloud® enables businesses to deploy an AI Platform-as-a-Service for ML ops developers to create and rapidly deploy AI inference engines. Customers can curate and aggregate their AI assets (algorithms, third-party ISVs, homegrown models, etc.) in their own Ai-AppStore for easy access, automated workflows, and rapid application development. Utilizing the same end-to-end application lifecycle management process upon a heterogeneous server topology can dramatically lower the “cost per insight” for AI inferences.
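As a rough illustration of what a deployable inference engine looks like, here is a minimal, self-contained sketch of an HTTP inference endpoint of the sort such a platform might package and push to edge nodes. The route, payload shape, and stand-in model are assumptions made for this sketch, not Ai-MicroCloud specifics.

```python
# Hypothetical sketch: the shape of an AI inference service that an
# MLOps platform could containerize and deploy to edge locations.
# The route and model are illustrative stand-ins.
from flask import Flask, jsonify, request

app = Flask(__name__)

def run_model(features: list) -> float:
    # Stand-in for a real model curated from an Ai-AppStore-style
    # catalog; a trivial weighted sum keeps the sketch self-contained.
    weights = [0.5, -0.25, 1.0]
    return sum(w * x for w, x in zip(weights, features))

@app.route("/v1/infer", methods=["POST"])
def infer():
    # Accept a JSON payload like {"features": [1.0, 2.0, 3.0]}
    # and return the model's score.
    features = request.get_json(force=True).get("features", [])
    return jsonify({"score": run_model(features)})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```

Packaging the same endpoint once and running it unchanged across a heterogeneous fleet of edge servers is the lifecycle-management idea behind the “cost per insight” claim above.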