How to: Synchronize Data of a Big Database

Recently, one of our customers asked us how to synchronize a large database when the generated synchronization script is too big to be opened for editing in dbForge Data Compare for SQL Server. He had tried to synchronize his large database, but the generated synchronization script (the Table Data Update script for all tables) was so large that the program could not open it and recommended direct execution instead. However, the user wanted to review and edit the script before executing it.

Such cases are quite common, so here is the solution we recommend.
If you have a big database, synchronize it in three stages:


First stage
First, pick for synchronization a few tables containing the greatest number of different rows. After the synchronization, repeat the procedure if necessary.

Second stage
Now it is the turn of the tables containing a medium number of different rows. You can select a dozen or so of these tables and synchronize them.

Third stage
This is the final synchronization stage. By this point, only the tables with a small number of different records should remain unsynchronized. Take all of these remaining tables and synchronize them together.

Using this approach, at each step you get a synchronization script of a manageable size that is easy to open for review and editing in any editor, including the internal editor of dbForge Data Compare for SQL Server.
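If you want to estimate in advance which tables hold the most differences before splitting them into stages, a rough T-SQL sketch along the following lines can help. The database names SourceDB and TargetDB, the linked server TARGETSRV, and the table dbo.Orders are placeholders for your own environment, and the result is only a rough ranking (a modified row is counted once in each direction):

    -- Rough count of differing rows for one table pair.
    -- SourceDB, TARGETSRV.TargetDB and dbo.Orders are placeholder names;
    -- substitute your own databases, linked server, and tables.
    SELECT
        (SELECT COUNT(*)
         FROM (SELECT * FROM SourceDB.dbo.Orders
               EXCEPT
               SELECT * FROM TARGETSRV.TargetDB.dbo.Orders) AS OnlyOrChangedInSource)
      + (SELECT COUNT(*)
         FROM (SELECT * FROM TARGETSRV.TargetDB.dbo.Orders
               EXCEPT
               SELECT * FROM SourceDB.dbo.Orders) AS OnlyOrChangedInTarget)
        AS RoughDifferenceCount;

Running such a query per table pair gives an approximate ordering of tables by the number of differing rows, which is enough to decide which tables belong in the first, second, and third stages.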

The aforementioned user followed our recommended approach and successfully completed the synchronization task.

2 COMMENTS

    • Hello Ralf,
      Well, the maximum size of the databases being compared directly depends on the chosen method of data comparison.
      Depending on the chosen comparison method, dbForge Data Compare for SQL Server v1.10 can compare rather large databases if there are not too many differences between them. The restriction of this method comes from the amount of free memory. As is generally known, on 32-bit systems the maximum amount of RAM cannot exceed 4 GB, but this does not mean that the maximum size of a database being compared cannot exceed 4 GB. The memory required for a comparison can be roughly estimated with the following formula:
      the sum of the sizes of all records with the “Different” status + the sum of the sizes of the comparison key columns of records with the “Only in Source”, “Only in Target”, or “Equal” status.
      That is why, by comparing tables with a large number of records one by one and excluding columns of LOB data types from the comparison, quite large databases can be compared.
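      As a purely hypothetical illustration of this estimate: if a table held 1 million rows with the “Different” status averaging 500 bytes each, plus 10 million rows with the “Equal” status whose comparison key columns take about 8 bytes per row, the comparison would need roughly 500 MB + 80 MB, i.e. about 580 MB of memory, which still fits within the address space of a 32-bit process.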
      Now we are working on dbForge Data Compare for SQL Server v2.00, which provides a new method of data comparison without the disadvantages mentioned above. We have already tested version 2.00 of our data comparer on several large databases (120 GB in size; some tables in these databases had 2.4 billion records, some tables contained LOB data 1.5 GB in size, and a FILESTREAM table held records 5.3 GB in size). We have also tested dbForge Data Compare for SQL Server v2.00 on client databases. For example, the new version of the comparer compared and synchronized a 60 GB database of a pharmaceutical company holding data on medical substances. We plan to release the new version of the product at the end of March 2010.
      Best regards.
