Prophecy Accelerates Adoption of Lakehouse Technology with Launch of “Prophecy for Databricks” as Demand for Modern Data Stack Skyrockets

Prophecy, the leading low-code platform for data engineering, announced the launch of Prophecy for Databricks, a powerful new offering that makes it easier and faster to build data pipelines that deliver data for business intelligence and machine learning. The platform, with its visual drag-and-drop canvas, enables anyone who wants to do data engineering to visually and interactively develop, deploy, and monitor data pipelines on Apache Spark.

Built for use by both seasoned data engineering teams and non-programmer data citizens alike, Prophecy for Databricks enables many more users to build pipelines easily, move them to production, and accelerate their companies’ transition to being data-driven. With ten times as many users enabled, data teams see a radical increase in operational excellence and data quality, allowing them to manage more pipelines than ever before.

IDC has forecast that data is being created at an annual growth rate of 23%, which means 181 zettabytes of data will have been created by 2025. With data growing this quickly, corporations are struggling to keep up with processing it at this pace. According to Gartner, the DBMS market is nearly $80B and grew 22% in just the last year, with the cloud DBMS share growing even faster than the overall market.

Existing data engineering products do not meet the needs of companies and have proven unnecessarily complex and inefficient. With Prophecy for Databricks, companies can scale data engineering tenfold, with dramatic increases in the number of practitioners doing data engineering, individual productivity, pipeline reliability, and data quality.

“The industry need for data & analytics far outstrips what can be produced by data engineers programming in notebooks,” said Raj Bains, CEO and co-founder of Prophecy. “With this release of Prophecy for Databricks, we’re providing powerful, visual tools that enable an order of magnitude more data users to quickly develop data pipelines, at the same level as programmers. This expansion of data engineering to non-programmers is the only way to realize the potential of data at scale.”

With Prophecy for Databricks, companies can modernize their data pipelines on Spark through the platform’s core features, which include:

A Visual Development Environment – An intuitive, low-code, drag-and-drop IDE enables all data practitioners, from non-programmer to expert, to develop data pipelines on Spark quickly and easily. The platform turns the visual data pipeline into 100% open-source Spark code (PySpark or Scala), with interactive development and execution to verify that the pipeline works correctly every step of the way (a sketch of such generated code follows this feature list).

Productivity Enhancement – The ability to build and extend custom data frameworks as visual elements standardizes and reuses components, leading to improved efficiency, better collaboration, and reduced risk. Git integration allows changes to be tracked and versioned, test coverage ensures all changes are unit tested, CI/CD moves changes from development to production with high confidence, and metadata search and lineage ensure data can be traced all the way back to its source.

Seamless Integration – Prophecy for Databricks integrates smoothly with the existing Databricks data stack used by enterprises. The technology is deployed within a company’s existing Virtual Private Cloud (VPC), integrates with all major data products, and is extensible to support additional tools, including Delta Lake.
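As a rough illustration of what such generated, open-source Spark code can look like, the following is a minimal PySpark sketch of a pipeline that reads a source, applies a transformation, and writes a Delta Lake table; the paths, column names, and logic are assumptions made for the example rather than actual Prophecy output.

```python
# Illustrative sketch only: open-source PySpark of the kind a visual pipeline
# could be compiled into. Paths, column names, and logic are assumptions made
# for this example, not Prophecy-generated output.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("customer_revenue_pipeline").getOrCreate()

# Source: read raw orders from a hypothetical landing location
orders = spark.read.option("header", True).csv("/mnt/raw/orders")

# Transforms: cast, filter, and aggregate revenue per customer
revenue = (
    orders
    .withColumn("amount", F.col("amount").cast("double"))
    .filter(F.col("status") == "COMPLETED")
    .groupBy("customer_id")
    .agg(F.sum("amount").alias("total_revenue"))
)

# Target: write a Delta Lake table for downstream BI and ML consumers
revenue.write.format("delta").mode("overwrite").save("/mnt/curated/customer_revenue")
```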

Through Databricks Partner Connect, users can start using Prophecy directly within the Databricks User Interface (UI).
The Prophecy for Databricks platform uses Databricks Workflows to simplify the orchestration and management of production workflows on any cloud, allowing code to be run reliably on a Databricks cluster.
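For context on how production pipelines can be orchestrated on Databricks, below is a minimal sketch that creates a scheduled job through the Databricks Jobs REST API (version 2.1); the workspace URL, token, notebook path, cluster settings, and schedule are placeholder assumptions, and Prophecy’s Workflows integration may configure jobs differently.

```python
# Illustrative sketch only: creating a scheduled Databricks job via the Jobs REST API 2.1.
# Workspace host, token, notebook path, and cluster settings are placeholder assumptions.
import requests

DATABRICKS_HOST = "https://<your-workspace>.cloud.databricks.com"  # placeholder
TOKEN = "<personal-access-token>"                                   # placeholder

job_spec = {
    "name": "customer_revenue_pipeline",
    "tasks": [
        {
            "task_key": "run_pipeline",
            "notebook_task": {"notebook_path": "/Repos/pipelines/customer_revenue"},
            "new_cluster": {
                "spark_version": "13.3.x-scala2.12",
                "node_type_id": "i3.xlarge",
                "num_workers": 2,
            },
        }
    ],
    # Run daily at 06:00 UTC
    "schedule": {"quartz_cron_expression": "0 0 6 * * ?", "timezone_id": "UTC"},
}

resp = requests.post(
    f"{DATABRICKS_HOST}/api/2.1/jobs/create",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json=job_spec,
)
resp.raise_for_status()
print("Created job:", resp.json()["job_id"])
```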