Interoperability of data visualization tools with SQL

 

In today's world of data management, information is often distributed across different data visualization platforms according to business needs, technical challenges, budgetary constraints, etc.

 

To optimize how this data is managed and used, an SQL-based interoperability approach can restore dataviz tools to their original role: "visualization" tools, rather than the data preparation tools they have tended to become, a drift that brings a damaging increase in opacity and inertia.

 

This approach can also be a relevant step in a migration, supporting the sustainable modernization of a platform.

 
 

SQL as a pivot model

Our proposed approach involves using SQL as the central element for modernizing data platforms. This SQL will serve as the repository for the intelligence of the platform being decommissioned.

 

The intelligence contained in the source platform(s) can then be shared across various dataviz platforms, current or future.

 

Following this logic, we have developed an engine that generates agnostic SQL from various dataviz technologies, enabling an "as is" migration of all the intelligence to SQL.

 

Our engine is compatible with the following technologies (others are under consideration):  

 

But how do we migrate to SQL?

 

Introspection of the source technology

Our software {openAudit} retrieves key structural information: the list of expressions and variables used and their level of nesting, the functions used in these expressions, the SQL queries, the relevant fields and tables, the joins, the custom SQL, the contexts and prompts used in the dashboards, etc.
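As a toy illustration of the kind of structural metadata such introspection produces (this is not openAudit's actual implementation, and the dashboard formula below is hypothetical), one can extract the fields and functions referenced by a dashboard expression:

```python
import re

# Hypothetical dashboard expression: a margin rate guarded against
# division by zero. Fields appear in square brackets.
expression = 'IF(SUM([Revenue]) > 0, SUM([Margin]) / SUM([Revenue]), 0)'

# Fields are the bracketed names; functions precede an opening parenthesis.
fields = sorted(set(re.findall(r'\[(\w+)\]', expression)))
functions = sorted(set(re.findall(r'\b([A-Z]+)\s*\(', expression)))

print(fields)     # ['Margin', 'Revenue']
print(functions)  # ['IF', 'SUM']
```

A real introspection engine would of course work from the tool's own metadata rather than regular expressions, but the output is the same in spirit: an inventory of expressions, fields, and functions.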

 

Migration to SQL

Based on this information, {openAudit} generates SQL to build data pipelines in the target database. The SQL generated from the source data visualization technology will be transposed into the target database, whether it be Azure SQL, BigQuery, Redshift, etc. 
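To make the idea concrete, here is a hedged sketch of such a transposition: a calculation previously embedded in a dashboard (here, a margin rate) becomes a SQL view in the target database. Table and column names are illustrative, and SQLite stands in for Azure SQL, BigQuery or Redshift.

```python
import sqlite3

# Illustrative target database with a source table.
conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE sales (region TEXT, revenue REAL, margin REAL)')
conn.executemany('INSERT INTO sales VALUES (?, ?, ?)',
                 [('EU', 100.0, 20.0), ('US', 200.0, 50.0)])

# The generated pipeline: the dashboard formula becomes plain SQL.
conn.execute('''
    CREATE VIEW v_sales_margin AS
    SELECT region,
           SUM(revenue) AS revenue,
           SUM(margin)  AS margin,
           SUM(margin) / SUM(revenue) AS margin_rate
    FROM sales
    GROUP BY region
''')

rows = conn.execute(
    'SELECT region, margin_rate FROM v_sales_margin ORDER BY region'
).fetchall()
print(rows)  # [('EU', 0.2), ('US', 0.25)]
```

In the actual engine, the view definitions are generated automatically from the introspected metadata and expressed in the target database's SQL dialect.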

 

The expected benefits

 

Migration of intelligence to the target database

Building a cross-platform, business-oriented data layer in the target database from new, simple, "flat" SQL pipelines means the data visualization tool only has to run simple queries. This improves performance and facilitates future migrations.
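A minimal sketch of such a "flat" layer, with illustrative names: the join lives in the database, so any dataviz tool only has to issue a trivial query.

```python
import sqlite3

# Illustrative target database: two normalized tables and one flat,
# business-oriented view built on top of them.
conn = sqlite3.connect(':memory:')
conn.executescript('''
    CREATE TABLE orders (order_id INTEGER, customer_id INTEGER, amount REAL);
    CREATE TABLE customers (customer_id INTEGER, country TEXT);
    INSERT INTO orders VALUES (1, 10, 120.0), (2, 11, 80.0);
    INSERT INTO customers VALUES (10, 'FR'), (11, 'DE');

    -- The flat layer: one explicit view, no logic left in the dashboard.
    CREATE VIEW v_orders_flat AS
    SELECT o.order_id, c.country, o.amount
    FROM orders o
    JOIN customers c ON c.customer_id = o.customer_id;
''')

# The only kind of query the dataviz tool now needs:
rows = conn.execute(
    'SELECT country, amount FROM v_orders_flat ORDER BY order_id'
).fetchall()
print(rows)  # [('FR', 120.0), ('DE', 80.0)]
```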

 

Data visualization, nothing but data visualization

Teams working on data representation will be able to focus on creating reports and dashboards without worrying about data transformations, using simple, explicit models.

 

Shared maintenance for multiple tools

Centralizing SQL queries simplifies maintenance of the data visualization layer. Updates and modifications can be made directly at the database level, without complex interventions across multiple platforms.
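A hedged sketch of this single point of maintenance: a business rule (here, a hypothetical markup rate) is changed once in the shared view, and every tool that queries it picks up the change without being touched.

```python
import sqlite3

# Illustrative shared view; the markup rates are hypothetical and chosen
# to be exact in binary floating point.
conn = sqlite3.connect(':memory:')
conn.executescript('''
    CREATE TABLE sales (amount REAL);
    INSERT INTO sales VALUES (100.0), (200.0);
    CREATE VIEW v_sales AS SELECT amount * 1.25 AS amount_marked_up FROM sales;
''')

# The same simple query every dataviz tool runs:
query = 'SELECT SUM(amount_marked_up) FROM v_sales'
before = conn.execute(query).fetchone()[0]

# One intervention at the database level (SQLite has no CREATE OR REPLACE
# VIEW, so the view is dropped and recreated with the new rate):
conn.executescript('''
    DROP VIEW v_sales;
    CREATE VIEW v_sales AS SELECT amount * 1.125 AS amount_marked_up FROM sales;
''')
after = conn.execute(query).fetchone()[0]

print(before, after)  # 375.0 337.5
```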

 

Significantly improved performance

By shifting the complexity of the data visualization layer into simple SQL inside the databases, performance naturally improves. Intelligence can be factored out and data persisted, reducing latency for certain data visualization tools. This is all the more true given that hyperscaler databases are particularly efficient and economical with resources.
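A minimal sketch of this persistence, with an illustrative schema: the heavy aggregate is computed once in the database, and dashboards then read the small precomputed table (a materialized view would play the same role in engines that support it).

```python
import sqlite3

# Illustrative raw data.
conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE events (day TEXT, clicks INTEGER)')
conn.executemany('INSERT INTO events VALUES (?, ?)',
                 [('2024-01-01', 3), ('2024-01-01', 5), ('2024-01-02', 7)])

# Persist the factored-out aggregate once:
conn.execute('''
    CREATE TABLE daily_clicks AS
    SELECT day, SUM(clicks) AS clicks
    FROM events
    GROUP BY day
''')

# Dashboards now hit the precomputed table instead of re-aggregating:
rows = conn.execute(
    'SELECT day, clicks FROM daily_clicks ORDER BY day'
).fetchall()
print(rows)  # [('2024-01-01', 8), ('2024-01-02', 7)]
```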

 

Conclusion 

 

The SQL-based interoperability approach enables the creation of a flexible and scalable data architecture.

 

By centralizing data intelligence using simple SQL, companies can maximize the efficiency of their data visualization teams while ensuring data quality and consistency. This process can be fully automated. Feel free to contact us if you'd like to learn more. 

