Getting out of the “technical servitude” in the Data Systems

 


For thirty years, data systems have been built on a succession of transformation tools: ETL, then ELT, with increasingly complex jobs, ever-longer pipelines and, inevitably, a multitude of data visualization tools.

 

However, the “technical servitude” does not come so much from the multiplicity of solutions as from their opacity:
when no one really knows how a pipeline works from end to end, and when business rules differ from one tool to another in the data visualization layer without anyone clearly identifying it, the data loses much of its value.

 

A near-unanimous desire is emerging today: to break free from this "servitude" through genuine interoperability.


And paradoxically, it is SQL, the language of the past and the horizon of the future, that is emerging as the key to building truly interoperable systems.

1. Automatically analyze existing systems

  • ETL/ELT jobs,
  • Stored procedures, 
  • Intelligence embedded in data visualization tools, etc. 

Each chain is fully reconstructed through automatic reverse engineering, including in very old proprietary/legacy environments.
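As a toy sketch of the idea (not the actual reverse-engineering engine), the chain of a simple SQL job can be reconstructed as a lineage map from its statements; the table names below are invented for illustration:

```python
import re

def extract_lineage(sql: str) -> dict:
    """Map each INSERT target to its source tables (toy version).
    A real engine must handle vendor dialects, stored procedures,
    variables, dynamic SQL, etc."""
    lineage = {}
    for stmt in sql.split(";"):
        target = re.search(r"INSERT\s+INTO\s+(\w+)", stmt, re.I)
        if not target:
            continue
        sources = re.findall(r"(?:FROM|JOIN)\s+(\w+)", stmt, re.I)
        lineage[target.group(1)] = sorted(set(sources))
    return lineage

# Two statements of a hypothetical ETL job, flattened to SQL.
job = """
INSERT INTO dwh_sales SELECT * FROM stg_orders
  JOIN stg_customers ON stg_orders.cust_id = stg_customers.id;
INSERT INTO dwh_margin SELECT * FROM dwh_sales;
"""
print(extract_lineage(job))
# → {'dwh_sales': ['stg_customers', 'stg_orders'], 'dwh_margin': ['dwh_sales']}
```

Chaining such maps from sources to final dashboards is what makes a pipeline legible again from end to end.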

2. Reconstruct the entire processing chain in SQL

Our engines translate these business rules and embedded intelligence into flat SQL: readable, portable, and executable on any modern database.

Even the logic embedded in dataviz tools can “slide” down into SQL in the Cloud DWH, refocusing BI tools on a minimalist, lightweight role: querying and visualizing data.
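For instance, a segmentation rule that once lived hidden in a BI tool becomes a plain, portable SQL expression. The sketch below (table, column, and threshold are invented) runs it on SQLite, but the same statement would run unchanged on any modern database:

```python
import sqlite3

# A business rule formerly buried in a dataviz tool, now flat SQL.
FLAT_SQL = """
SELECT order_id,
       amount,
       CASE WHEN amount >= 1000 THEN 'key_account' ELSE 'standard' END AS segment
FROM orders
"""

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (order_id INTEGER, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?)", [(1, 250.0), (2, 1800.0)])
rows = conn.execute(FLAT_SQL).fetchall()
print(rows)
# → [(1, 250.0, 'standard'), (2, 1800.0, 'key_account')]
```

Because the rule now lives in the SQL itself, every BI tool querying the warehouse sees the same segmentation.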

 

The generated SQL can become the cornerstone of a truly interoperable data system.

3. Deploy this SQL in an open and non-captive ecosystem

Because this SQL does not depend on any vendor, it can be redeployed anywhere without recreating technological dependencies:

  • In a modern Cloud DWH (BigQuery, Snowflake, Azure SQL, Redshift, etc.)

  • In an open platform (Parquet + a distributed SQL engine), governed by data lineage; we offer largely automated migrations to this type of architecture.

  • In another ETL/ELT if necessary (dbt, for example), but lightweight and open.
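As an illustration of the dbt option, a flat SQL business rule can be dropped in as a plain model file (the file name and the `stg_orders` model below are hypothetical). dbt merely orchestrates; the logic itself stays portable SQL:

```sql
-- models/customer_segment.sql (hypothetical dbt model)
SELECT
    order_id,
    amount,
    CASE WHEN amount >= 1000 THEN 'key_account' ELSE 'standard' END AS segment
FROM {{ ref('stg_orders') }}
```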

 
