How to prevent your information system from migrating... only to become more complex?

In IT transformation projects, one imperative keeps coming up: “We must migrate!”
To the cloud, to modern architectures, to more agile, simpler, more scalable solutions... in short, to something better!

In recent years, we have seen the emergence of tools that have reshuffled the data game: Power BI, Looker or Strategy for dataviz, dbt for transformations, Snowflake or BigQuery for storage. A non-exhaustive list of cloud-native solutions that are lightweight, efficient, (often) accessible, and that accelerate the obsolescence of legacy architectures.

 

{openAudit}, our software, makes these migrations highly efficient by automating most of the tasks involved.

 

But in the long run, each migration is only a milestone. The worst thing would be to let the target system drift from the start, which would quickly wipe out years of work!

A series of actions will therefore be needed to enable IT and business teams to keep the target system optimal and interoperable over the long term.

 

 

All of the automated migrations we conduct with {openAudit} are based on continuous technical reverse engineering, which exposes the processing chains: ETL/ELT jobs, procedures in the feed layers, management rules in the data visualization layer.
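
As a simplified illustration (with hypothetical names, not {openAudit}'s actual internals), here is the kind of column-level lineage that such reverse engineering can expose from a single feed-layer statement:

    -- A typical statement found in a feed layer:
    INSERT INTO dwh.customer_revenue (customer_id, total_revenue)
    SELECT o.customer_id, SUM(o.amount)
    FROM staging.orders AS o
    GROUP BY o.customer_id;

    -- Column-level lineage exposed by parsing this statement:
    --   staging.orders.customer_id -> dwh.customer_revenue.customer_id   (direct copy)
    --   staging.orders.amount      -> dwh.customer_revenue.total_revenue (SUM aggregation)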

 

This ultra-granular knowledge of the source makes it possible to intelligently reconstruct the processing chains in the target technologies, with our different migration engines, using SQL extensively as the output.

This applies to the data feeds, but also to the dataviz layer, for which {openAudit} also makes it possible to factor out the embedded intelligence and concentrate it in the databases.

The outputs are adjusted to ensure interoperability with the target technologies.
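
To make this concrete, here is a minimal sketch (table, column, and view names are hypothetical) of what concentrating intelligence in the database can look like: a margin-rate expression, previously duplicated as a calculated field in several dashboards, becomes a single SQL view in the target warehouse:

    -- One SQL view replaces the same expression maintained in N dashboards;
    -- every BI tool then consumes the pre-computed column.
    CREATE OR REPLACE VIEW analytics.sales_with_margin AS
    SELECT
      s.sale_id,
      s.revenue,
      s.cost,
      (s.revenue - s.cost) / NULLIF(s.revenue, 0) AS margin_rate  -- former BI-tool expression
    FROM analytics.sales AS s;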

 

Why do we think it is essential to use SQL in this context? 

  • SQL will provide maximum efficiency in the target system by taking advantage of the power of modern databases.
  • SQL will ensure optimal maintainability: most data engineers know SQL, especially since our migration engines generate flat, very intelligible SQL (illustrated below).
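
As an illustration of what "flat" means here (a sketch with hypothetical names, not actual generated output), each step of a former ETL job becomes a named, readable CTE instead of a deeply nested subquery:

    WITH filtered_orders AS (          -- step 1: filter, formerly an ETL component
      SELECT order_id, customer_id, amount
      FROM raw.orders
      WHERE status = 'COMPLETED'
    ),
    customer_totals AS (               -- step 2: aggregate, formerly another component
      SELECT customer_id, SUM(amount) AS total_amount
      FROM filtered_orders
      GROUP BY customer_id
    )
    SELECT customer_id, total_amount   -- step 3: final projection
    FROM customer_totals;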

 

Above all, SQL, which serves as the pivot of the migration, will ensure near-native technical interoperability in the future, thanks to its low reliance on proprietary technologies. This avoids a "forced marriage" ad vitam aeternam with this or that technology...

 

To monitor the deployment of a system, {openAudit} will immediately apply continuous analysis to all technical layers of the target system. This provides an intelligible map and allows everyone to share the same understanding of the target system.

 

  • On the feed side: stored procedures, ETL/ELT jobs, SQL pipelines, file transfers, scripts, batches, etc.;
  • On the restitution side: queries, the semantic layer, the intelligence (expressions calculated in BI tools).

 

We identify the flows, the transformations, the tools... right down to the cell visible in the final dashboard, as in the example below!
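
A single end-to-end lineage path might read like this (names are purely illustrative):

    raw_orders.csv
      -> staging.orders                      (nightly loader script)
      -> dwh.customer_revenue.total_revenue  (SUM, SQL pipeline)
      -> "Monthly Revenue" dashboard, "Total" cell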

 

The business and IT teams will thus immediately have the tools to share a detailed and continuous understanding of the target system, without having to rely on this or that expert. 

 

To this data lineage, we will add an analysis of usage in the target system:

Which objects (dashboards, tables) are actually used in the target system, and when? By whom? How often? In what contexts? What is the business value of each processing chain (regulatory, management, etc.)?

 

This includes:

  • Identification of the data used in critical chains, but also of the flows consumed by satellite applications or used only rarely;
  • Taking into account low-frequency but strategic uses (e.g., regulatory data that is rarely consulted).

 

By combining lineage + usage, we will obtain a clear vision of:

  • What will need to be preserved over the long term: essential flows, "information highways", key dashboards;
  • What can be deleted over time: "dead branches", unnecessary processing, orphaned tables, dashboards never consulted (see the sketch below).
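
As a minimal sketch of such a usage analysis, assuming a usage log collected by the monitoring layer (Snowflake-style syntax; table and column names are hypothetical), one could list the dashboards that have not been opened for a year:

    -- Dashboards with no access in the last 12 months: candidates for review,
    -- to be cross-checked against lineage and regulatory value before any deletion.
    SELECT d.dashboard_name,
           MAX(u.accessed_at) AS last_access
    FROM meta.dashboards AS d
    LEFT JOIN meta.usage_log AS u
      ON u.object_id = d.dashboard_id
    GROUP BY d.dashboard_name
    HAVING MAX(u.accessed_at) < DATEADD(month, -12, CURRENT_DATE())
        OR MAX(u.accessed_at) IS NULL;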

 

This allows the target system to remain as simple as possible over time, with positive effects in many respects: maintenance, quality of information, contained costs.

 

In short: our idea is to enable the implementation of a simple, streamlined, documented, managed, and natively interoperable target system, by automating the entire process.

 

Migration thus becomes a lever for simplification as much as a technical project, one aimed at truly regaining control of your data stack over the very long term!

 
