Union.ai, provider of the open-source workflow orchestration platform Flyte and its hosted version, Union Cloud, announced the release of UnionML at MLOps World 2022.
The open-source MLOps framework for building web-native machine learning applications offers a unified interface for bundling Python functions into machine learning (ML) microservices. According to Union.ai, it is the only library that seamlessly manages both data science workflows and production lifecycle tasks, making it easy to build new AI applications from scratch or to scale existing Python code.
UnionML aims to unify the ever-evolving ecosystem of machine learning and data tools into a single interface for expressing microservices as Python functions. Data scientists can create UnionML applications by defining a few core methods that are automatically bundled into ML microservices, starting with model training and offline/online prediction.
“Creating machine learning applications should be easy, frictionless and simple, but today it really isn’t,” said Union.ai CEO Ketan Umare. “The cost and complexity of choosing tools, deciding how to combine them into a coherent ML stack, and maintaining them in production requires a whole team of people who often leverage different programming languages and follow disparate practices. UnionML significantly simplifies creating and deploying machine learning applications.”
UnionML apps comprise two objects: Dataset and Model. Together, they expose function decorator entry points that serve as building blocks for a machine learning application. By focusing on these core building blocks rather than on how they fit together, data scientists reduce the cognitive load of iterating on models and deploying them to production. UnionML uses Flyte to execute training and prediction workflows locally or on production-grade Kubernetes clusters, relieving MLOps engineers of the overhead of provisioning compute resources for their stakeholders. Models and ML applications can be served via FastAPI or AWS Lambda, with more serving options planned.
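The decorator-entry-point pattern described above can be sketched in plain Python. This is a hypothetical, stdlib-only illustration of the design, not the real unionml API: the Dataset and Model class names mirror the two objects named in the article, but their methods (`reader`, `trainer`, `predictor`, `train`, `predict`) are assumptions made for this sketch.

```python
# Illustrative sketch of registering a reader, trainer, and predictor as
# plain Python functions via decorators, then bundling them into one
# trainable, queryable application object. Hypothetical API, not unionml's.

class Dataset:
    def __init__(self, name):
        self.name = name
        self._reader = None

    def reader(self, fn):
        """Decorator: register the function that loads the raw data."""
        self._reader = fn
        return fn

    def load(self):
        return self._reader()


class Model:
    def __init__(self, name, dataset):
        self.name = name
        self.dataset = dataset
        self._trainer = None
        self._predictor = None
        self.artifact = None  # the trained model object

    def trainer(self, fn):
        """Decorator: register the function that fits a model to data."""
        self._trainer = fn
        return fn

    def predictor(self, fn):
        """Decorator: register the function that predicts from the artifact."""
        self._predictor = fn
        return fn

    def train(self):
        self.artifact = self._trainer(self.dataset.load())
        return self.artifact

    def predict(self, features):
        return self._predictor(self.artifact, features)


dataset = Dataset(name="points")
model = Model(name="mean_predictor", dataset=dataset)

@dataset.reader
def reader():
    return [1.0, 2.0, 3.0, 6.0]

@model.trainer
def trainer(data):
    # "Training" here just computes the mean of the data.
    return sum(data) / len(data)

@model.predictor
def predictor(artifact, features):
    # Predict the learned mean for every input row.
    return [artifact for _ in features]

model.train()
print(model.predict([0, 0]))  # -> [3.0, 3.0]
```

The data scientist writes only the three decorated functions; the framework object decides how they fit together, which is the cognitive-load reduction the article describes.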