Consolidation Process - Torch Migration Suite (TMS)

The migration process starts with the creation of the “Intermediate Database”, an empty image of the production database containing its structure but no data. The entire process is executed on this “Intermediate Database”, thereby preserving the production database’s integrity and availability throughout.

The Zim Database Migration process using TORCH Migration Suite consists essentially of four successive phases:

Phase 1 - Data Dictionary Analysis, Clean-up and Adjustment

All metadata is analyzed and adjusted to work with SQL. Decimal places and lengths of numeric fields are adjusted to SQL requirements, virtual fields are modified, and many other changes are made. This step also generates the new Zim Data Dictionary, ready to access the SQL DBMS, together with all the scripts needed to create the new SQL database. The Data Dictionary Analyzer module is responsible for bringing the new data dictionaries to life during this step.
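To illustrate the kind of adjustment this phase performs, here is a minimal sketch of clamping a numeric field's length and decimal places to a SQL precision limit. This is not TORCH's actual logic; the function name, the field examples, and the 38-digit limit (typical of Oracle's NUMBER type) are assumptions for demonstration only.

```python
# Illustrative sketch only -- not TORCH's implementation.
# Assumes a SQL target whose NUMERIC type allows at most 38 digits
# of precision (as Oracle NUMBER does).

MAX_PRECISION = 38

def map_numeric_field(name, length, decimals):
    """Clamp a Zim numeric field's length/decimals to the SQL limit
    and return a CREATE TABLE column fragment."""
    precision = min(length, MAX_PRECISION)
    scale = min(decimals, precision)
    return f"{name} NUMERIC({precision},{scale})"

print(map_numeric_field("unit_price", 12, 2))   # unit_price NUMERIC(12,2)
print(map_numeric_field("big_counter", 50, 0))  # big_counter NUMERIC(38,0)
```

The second call shows the case the text describes: a Zim field too wide for the target DBMS is silently narrowed to the maximum the SQL server accepts.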

Phase 2 - Data Analysis and Cleansing

It is well known that Zim databases can harbor inconsistent and invalid data in their files, such as zeroed dates or foreign keys with no corresponding parent. This is due to the absence of data validation and referential-integrity controls in the database engine.

On the other hand, while preparing the migration, fields may have their attributes changed: e.g. an index field may become a primary key, or a “not required” field may become “required”. These modifications may introduce inconsistencies in the existing data.

The Torch module “Data Analyzer” reads all table definitions, searching for candidate fields that may contain such unacceptable values and reporting them so they can be fixed. Values that are not fixed will be replaced with default values during the data-load process.
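A minimal sketch of the kind of check this phase performs, using the two problem classes named above (zeroed dates and orphaned foreign keys). The table layout, field names, and validation rules are assumptions for illustration; the real Data Analyzer works from the Zim table definitions themselves.

```python
# Illustrative sketch only -- not the TORCH Data Analyzer.
# Assumed layout: 'orders' rows reference 'customers' via customer_id.

def find_problem_rows(orders, customers):
    """Flag rows with a 'zeroed' date or a foreign key with no parent."""
    customer_ids = {c["id"] for c in customers}
    problems = []
    for row in orders:
        if row["order_date"] == "0000-00-00":       # zeroed date
            problems.append((row["id"], "invalid date"))
        if row["customer_id"] not in customer_ids:  # orphaned foreign key
            problems.append((row["id"], "missing parent"))
    return problems

customers = [{"id": 1}, {"id": 2}]
orders = [
    {"id": 10, "customer_id": 1, "order_date": "2023-05-01"},
    {"id": 11, "customer_id": 9, "order_date": "0000-00-00"},
]
print(find_problem_rows(orders, customers))
# [(11, 'invalid date'), (11, 'missing parent')]
```

Rows reported this way can be corrected in place; as the text notes, anything left unfixed falls back to default values at load time.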

Phase 3 - Application Adjustment to SQL Requirements

The Zim language has a number of command styles that are not directly supported in SQL. A typical example is a Zim ADD command that generates a “Named Set”. Programmers also tend to divide long commands that deal with several tables into a series of simpler commands, creating their own access strategy. This kind of programming, despite being good Zim practice, is not recommended and often not possible in SQL, and can sometimes cause severe performance problems.

The TORCH Source Code Analyzer reads the application source programs, reporting every Zim command that should be revised, either to improve its performance or because of an incompatibility with the specific SQL server’s language.

The report also categorizes programs by the complexity of adapting them to access the SQL server, a strong indicator of the time needed to adjust the entire application. For the more complex commands, the Source Analyzer also shows examples of how to adjust the command to meet the SQL requirements.
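As a toy illustration of this kind of analysis, the sketch below scans source lines for flagged keywords and reports where they occur. It is not the TORCH Source Code Analyzer: the flagged patterns, hints, and sample program are invented for demonstration, and the real module understands Zim syntax rather than matching keywords.

```python
# Illustrative sketch only -- a toy keyword scanner, not the TORCH module.
import re

# Hypothetical commands to flag, with review hints (assumptions).
FLAGGED = {
    "ADD": "may create a Named Set; revise for SQL",
}

def scan_source(lines):
    """Return (line number, keyword, hint) for each flagged command."""
    report = []
    for n, line in enumerate(lines, start=1):
        for kw, hint in FLAGGED.items():
            if re.search(r"\b" + re.escape(kw) + r"\b", line):
                report.append((n, kw, hint))
    return report

program = [
    "find Employees where Dept = 'R&D'",
    "ADD NewHires from EmpSet",
]
for item in scan_source(program):
    print(item)
# (2, 'ADD', 'may create a Named Set; revise for SQL')
```

Grouping such reports per program, as the text describes, gives a rough complexity ranking and therefore an estimate of the adjustment effort.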

Phase 4 - Target Database Data Load

After the data dictionary has been adjusted to SQL and the SQL database has been created using the scripts generated in Phase 1, the real data can be loaded from the source Zim database into the new SQL database. TORCH then generates a special database called the “LOADER DB” that contains definitions in both Zim and Oracle, allowing it to read the Zim data and write it into the SQL database. This special database is seen as a “foreign DB” and can be used from any other database.

For example, you can access the Loader DB from your production database and load its data directly into Oracle without disturbing the production environment, speeding up the loading process several times over, minimizing database downtime, and eliminating the need to create text files for unloading and reloading the data. During the loading process, all invalid or out-of-bounds data is replaced with default values set by the user during the TORCH configuration process. You can repeat the loading process as many times as needed.
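The load-time substitution described above can be sketched as follows. This is not TORCH's actual code: the default values, field names, and validation rules stand in for whatever the user sets during the TORCH configuration process.

```python
# Illustrative sketch only -- load-time cleansing with user-configured
# defaults, in the spirit of the Loader DB; not TORCH's implementation.

DEFAULTS = {"order_date": "1900-01-01", "quantity": 0}  # assumed config

def cleanse(row, rules):
    """Replace invalid or out-of-bounds values with configured defaults."""
    fixed = dict(row)
    for field, is_valid in rules.items():
        if not is_valid(fixed.get(field)):
            fixed[field] = DEFAULTS[field]
    return fixed

rules = {
    "order_date": lambda v: v != "0000-00-00",
    "quantity": lambda v: isinstance(v, int) and 0 <= v <= 1_000_000,
}
print(cleanse({"order_date": "0000-00-00", "quantity": -5}, rules))
# {'order_date': '1900-01-01', 'quantity': 0}
```

Because the substitution is deterministic given the configured defaults, rerunning the load, as the text says you may, produces the same cleansed result each time.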