New advancements in Tableflow bring seamless integration of operational data with analytical systems, enhancing AI capabilities and next-gen applications
Confluent’s Tableflow is set to revolutionize how enterprises handle real-time data, making AI and next-generation applications more accessible and effective. Confluent has announced significant advancements in Tableflow, the easiest way to access operational data from data lakes and warehouses. With Tableflow, all streaming data in Confluent Cloud can be accessed in popular open table formats, unlocking limitless possibilities for advanced analytics, real-time artificial intelligence (AI), and next-generation applications.

Support for Apache Iceberg™ is now generally available (GA), and a new early access program for Delta Lake is open, thanks to an expanded partnership with Databricks. Tableflow also offers enhanced data storage flexibility and seamless integrations with leading catalog providers, including AWS Glue Data Catalog and Snowflake’s managed service for Apache Polaris™, Snowflake Open Catalog.
Shaun Clowes, Chief Product Officer at Confluent, emphasized the platform’s significance: “At Confluent, we’re all about making your data work for you, whenever you need it and in whatever format is required. With Tableflow, we’re bringing our expertise of connecting operational data to the analytical world. Now, data scientists and data engineers have access to a single, real-time source of truth across the enterprise, making it possible to build and scale the next generation of AI-driven applications.”
Tableflow simplifies the integration between operational data and analytical systems. It continuously updates the tables used for analytics and AI with the same data flowing from business applications connected to Confluent Cloud, ensuring that data lakes and warehouses are fed only high-quality, consistent data, which is crucial for effective AI.
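In practice, this means a Kafka topic materialized by Tableflow can be read like any other Iceberg table. The sketch below uses PyIceberg to load such a table; the catalog endpoint, credentials, and table name are placeholders, and the exact connection properties depend on how your Tableflow catalog is exposed.

```python
from pyiceberg.catalog import load_catalog

# Minimal sketch: reading a Tableflow-materialized Iceberg table with PyIceberg.
# All connection details below are placeholders, not actual Confluent endpoints.
catalog = load_catalog(
    "tableflow",
    **{
        "type": "rest",
        "uri": "https://<your-iceberg-rest-catalog-endpoint>",  # placeholder
        "credential": "<api-key>:<api-secret>",                 # placeholder
        "warehouse": "<your-warehouse-id>",                      # placeholder
    },
)

# A Kafka topic published by Tableflow shows up as a regular Iceberg table.
table = catalog.load_table("streaming.orders")  # hypothetical namespace.table

# Pull the rows into pandas for ad hoc analytics or feature engineering.
df = table.scan().to_pandas()
print(df.head())
```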
Key Updates to Tableflow
- Support for Apache Iceberg: Ready for production workloads, allowing teams to represent Apache Kafka® topics as Iceberg tables for real-time or batch processing.
- Early Access Program for Delta Lake: Provides a consistent view of real-time data across operational and analytic applications, enabling smarter AI-driven decision-making.
- Bring Your Own Storage: Offers flexibility to store Iceberg or Delta tables with the freedom to choose a storage bucket.
- Enhanced Data Accessibility and Governance: Integrations with Amazon SageMaker Lakehouse via AWS Glue Data Catalog and Snowflake Open Catalog streamline access for various analytical engines and data solutions (see the query sketch after this list).
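To illustrate what the AWS Glue Data Catalog integration enables, here is a minimal sketch of querying a Tableflow-published Iceberg table from PySpark through a Glue-backed Iceberg catalog. The bucket, namespace, table, and column names are assumptions for illustration, and the Iceberg runtime and AWS bundle jars must be on the Spark classpath.

```python
from pyspark.sql import SparkSession

# Minimal sketch: a Spark session wired to an Iceberg catalog backed by AWS Glue,
# where Tableflow has registered its tables. Names and paths are placeholders.
spark = (
    SparkSession.builder.appName("tableflow-analytics")
    .config("spark.sql.catalog.glue", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.glue.catalog-impl", "org.apache.iceberg.aws.glue.GlueCatalog")
    .config("spark.sql.catalog.glue.warehouse", "s3://my-tableflow-bucket/warehouse")  # placeholder
    .config("spark.sql.catalog.glue.io-impl", "org.apache.iceberg.aws.s3.S3FileIO")
    .getOrCreate()
)

# Query the continuously updated table like any other Iceberg table.
spark.sql("""
    SELECT customer_id, COUNT(*) AS orders_last_hour
    FROM glue.streaming.orders                -- hypothetical namespace.table
    WHERE order_ts >= current_timestamp() - INTERVAL 1 HOUR
    GROUP BY customer_id
""").show()
```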