Detail of Objects for Data Copy

ibb.Cargar_Tablas_Secuencias and ibb.Cargar_Tablas_Secuencias_Copia_Origen

In the source and destination databases, the table 'ibb.Cargar_Tablas_Secuencias' is created. It stores the sequences that exist in the databases and the information necessary to reinitialize them. After the data is copied, each sequence should start with a value that allows the system to run smoothly.

Field Type NULL Check Description
SchemaSeqName sysname NOT NULL Sequence Schema
SequenceName sysname NOT NULL Sequence Name
SchemaSequenceName CONCAT(QUOTENAME(SchemaSeqName), N'.', QUOTENAME(SequenceName)) PERSISTED NOT NULL Sequence Schema and Name
SchemaName sysname NULL Table Schema
TableName sysname NULL Table Name
ColumnName sysname NULL Name of the field used to reinitialize the Sequence.
SchemaTableName IIF(SchemaName IS NULL OR TableName IS NULL, NULL, CONCAT(QUOTENAME(SchemaName), N'.', QUOTENAME(TableName))) PERSISTED Table Schema and Name
QueryAlternateToColumn nvarchar(max) NULL CHECK CONSTRAINT Cargar_Tablas_Secuencias_ColumnOrQuery_CHK CHECK(
    (QueryAlternateToColumn IS NULL AND SchemaName IS NOT NULL AND TableName IS NOT NULL AND ColumnName IS NOT NULL) OR
    (QueryAlternateToColumn IS NOT NULL AND SchemaName IS NULL AND TableName IS NULL AND ColumnName IS NULL) OR
    (QueryAlternateToColumn IS NULL AND SchemaName IS NULL AND TableName IS NULL AND ColumnName IS NULL))
If the sequence is not reinitialized from a field in a table, a statement must be supplied here that allows this action.

To reinitialize a sequence that plays the role of an IDENTITY, you only have to indicate the corresponding field in this table. Insert Bulk Bestia will obtain the maximum of that field plus 1 and reinitialize the sequence with that value. To do this, the 'SchemaName', 'TableName', and 'ColumnName' fields that identify the field must be filled in, and the 'QueryAlternateToColumn' field of 'ibb.Cargar_Tablas_Secuencias' must be set to NULL.
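For example, a minimal sketch of such a configuration (the sequence, table, and field names are hypothetical) could be:

    -- Reinitialize the sequence dbo.Seq_Pedidos from the maximum of dbo.Pedidos.PedidoID.
    UPDATE ibb.Cargar_Tablas_Secuencias
    SET SchemaName = N'dbo',
        TableName  = N'Pedidos',
        ColumnName = N'PedidoID',
        QueryAlternateToColumn = NULL
    WHERE SchemaSeqName = N'dbo'
      AND SequenceName  = N'Seq_Pedidos';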

On other occasions, although the sequence is associated with a field in a table, its reinitialization value cannot be obtained directly from that field's maximum, and a statement is needed to calculate it. That statement must be placed in the 'QueryAlternateToColumn' field, leaving the fields of 'ibb.Cargar_Tablas_Secuencias' indicated above set to NULL.

In this case, the statement entered must begin exactly with SELECT @output =. The statement must leave in @output the value with which the sequence must be reinitialized, that is, the first value that the sequence must take.

Examples:

  • Independent sequence:
    SELECT @output = 1
  • Sequence used to insert into two tables:
    SELECT @output = ISNULL(MAX(Tabla.Campo), 0) + 1
    FROM (
        SELECT MAX(Campo1) Campo FROM dbo.Tabla1
        UNION ALL
        SELECT MAX(Campo2) Campo FROM dbo.Tabla2
    ) AS Tabla
  • Sequence used as part of a character string:
    Campo
    00000000000001K
    00000000000002K
    00000000000004K
    00000000000007K
    00000000000016K
    00000000000017K

    SELECT @output = ISNULL(MAX(CONVERT(INT, SUBSTRING(Campo, 1, LEN(Campo) - 1))), 0) + 1 FROM dbo.Tabla
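As an illustration only (this is not necessarily how Insert Bulk Bestia implements it internally, and the sequence name is hypothetical), a query stored in 'QueryAlternateToColumn' can be executed and applied as follows:

    DECLARE @output bigint, @sql nvarchar(max);

    -- Read the stored query for the sequence.
    SELECT @sql = QueryAlternateToColumn
    FROM ibb.Cargar_Tablas_Secuencias
    WHERE SchemaSequenceName = N'[dbo].[Seq_Tabla]';

    -- The query must start with SELECT @output = and leave the restart value in @output.
    EXEC sp_executesql @sql, N'@output bigint OUTPUT', @output = @output OUTPUT;

    -- Reinitialize the sequence with the obtained value.
    DECLARE @restart nvarchar(max) =
        N'ALTER SEQUENCE [dbo].[Seq_Tabla] RESTART WITH ' + CONVERT(nvarchar(20), @output) + N';';
    EXEC sp_executesql @restart;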

ibb.Cargar_Tablas_Actualiza_Secuencias

Stored procedure to update table 'ibb.Cargar_Tablas_Secuencias' on the source and on the destination.

It adds the new sequences and deletes the ones that have disappeared. Insert Bulk Bestia will stop the execution of the data load if this procedure detects that there are new sequences since the last time it was executed. This allows the table 'ibb.Cargar_Tablas_Secuencias' to be modified to define how each new sequence should be reinitialized.
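A simplified sketch of the kind of synchronization this procedure performs (not the actual procedure body) would be:

    -- Add sequences that exist in sys.sequences but are not yet registered.
    INSERT INTO ibb.Cargar_Tablas_Secuencias (SchemaSeqName, SequenceName)
    SELECT s.name, sq.name
    FROM sys.sequences AS sq
    JOIN sys.schemas AS s ON s.schema_id = sq.schema_id
    WHERE NOT EXISTS (SELECT 1 FROM ibb.Cargar_Tablas_Secuencias AS c
                      WHERE c.SchemaSeqName = s.name AND c.SequenceName = sq.name);

    -- Remove entries whose sequence no longer exists.
    DELETE c
    FROM ibb.Cargar_Tablas_Secuencias AS c
    WHERE NOT EXISTS (SELECT 1 FROM sys.sequences AS sq
                      JOIN sys.schemas AS s ON s.schema_id = sq.schema_id
                      WHERE s.name = c.SchemaSeqName AND sq.name = c.SequenceName);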

ibb.Cargar_Tablas and ibb.Cargar_Tablas_Copia_Origen

These two tables are created in the source and target databases. Their purpose was described in previous sections. This section describes them in detail, along with the use of certain fields.

Field Type NULL Check Description
Configuration nvarchar(512) NOT NULL Configuration name
SchemaName_Origen sysname NOT NULL Source table schema
TableName_Origen sysname NOT NULL Name of the source table
SchemaTableName_Origen CONCAT(QUOTENAME(SchemaName_Origen), N'.', QUOTENAME(TableName_Origen)) PERSISTED NOT NULL Schema and Name of the source table
SelectCampos_Oracle_Origen nvarchar(max) NOT NULL Reserved for the future
SelectCampos_SQLServer_Origen nvarchar(max) NOT NULL SELECT clause indicating the fields to be copied from the source table. Here you can make the desired transformations to the data. The order of the fields must be the same as that used in the SELECT clause of the destination.
Condition_Segment_Oracle_Origen nvarchar(max) NULL Reserved for the future
Condition_Segment_SQLServer_Origen nvarchar(max) NULL WHERE clause for filtering the records in the source table. Defines the records to be copied to the destination. Normally, it should match the filter used on the destination table.
SchemaName_Destino sysname NOT NULL Target table schema
TableName_Destino sysname NOT NULL Target table name
SchemaTableName_Destino CONCAT(QUOTENAME(SchemaName_Destino), N'.', QUOTENAME(TableName_Destino)) PERSISTED NOT NULL Target table schema and name
SelectCampos_Oracle_Destino nvarchar(max) NOT NULL Reserved for the future
SelectCampos_SQLServer_Destino nvarchar(max) NOT NULL SELECT clause indicating the target fields in the target table. The order of the fields must be the same as that used in the SELECT clause of the source.
Condition_Segment_Oracle_Destino nvarchar(max) NULL Reserved for the future
Condition_Segment_SQLServer_Destino nvarchar(max) NULL WHERE clause for filtering the records in the target table. Defines the records that will be deleted in case of retries. Normally, it should match the filter used on the source table.
SegmentID tinyint NOT NULL Cargar_Tablas_SegmentID_DEF DEFAULT (0) Segment identifier. Used for the segmentation of large tables, allowing the loading of the table in parallel.
RowsCount int NULL Number of rows in the source table
DataKb int NULL Space (approximate) occupied by table data.
IndexSize int NULL Space (approximate) occupied by table indexes.
Durability tinyint NOT NULL Cargar_Tablas_Durability_DEF DEFAULT (0) sys.tables
Is_memory_optimized bit NOT NULL Cargar_Tablas_Is_memory_optimized_DEF DEFAULT (0) sys.tables
ClasificacionTabla tinyint NOT NULL Cargar_Tablas_ClasificacionTabla_DEF DEFAULT (0) Identifier of the group of tables to which the table belongs. Allows you to classify tables.
CargarTabla tinyint NOT NULL Cargar_Tablas_CargarTabla_DEF DEFAULT (0) Flag that indicates if the table data should be copied.
Resultado_CargarTabla tinyint NULL Result of copying data from table
FilasCopiadas int NULL Rows copied from source to destination

The following example illustrates the purpose of each of these fields. Only the most relevant fields of the table 'ibb.Cargar_Tablas' are shown, detailing the source, the destination, and the load selection.

Source Selection:

SchemaTableName_Origen SelectCampos_SQLServer_Origen Condition_Segment_SQLServer_Origen
[dbo].[Tabla1] SELECT [Campo1] * 100 AS Campo1, [Campo2], [Campo3] NULL
[dbo].[Tabla2] SELECT [Campo1], [Campo2], [Campo3] Campo3 < CONVERT(DATETIME2,'01/01/2013',103)
[dbo].[Tabla2] SELECT [Campo1], [Campo2], [Campo3] Campo3 >= CONVERT(DATETIME2,'01/01/2013',103) AND Campo3 < CONVERT(DATETIME2,'01/01/2019',103)
[dbo].[Tabla2] SELECT [Campo1], [Campo2], [Campo3] Campo3 >= CONVERT(DATETIME2,'01/01/2019',103)

At the source there are 2 tables:

The first has 3 fields, and a transformation has been applied to the first one. When Insert Bulk Bestia generates this table automatically, this field would be filled with "SELECT [Campo1], [Campo2], [Campo3]"; however, either manually or with the stored procedure 'ibb.Cargar_Tablas_Acciones_Programables_Origen', this clause can be modified.

The second table has been segmented in order to load its segments in parallel, thus speeding up the loading of the table. The segmentation has been carried out using "Campo3", so it is highly recommended that this field have an index that facilitates the selection of records according to these WHERE clauses.
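For this example, such an index could look like the following (the names come from the example and are therefore hypothetical):

    CREATE NONCLUSTERED INDEX IX_Tabla2_Campo3 ON [dbo].[Tabla2] ([Campo3]);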

Target Selection:

SchemaTableName_Destino SelectCampos_SQLServer_Destino Condition_Segment_SQLServer_Destino
[dbo].[Tabla1] SELECT [Campo1], [Campo3] NULL
[dbo].[Tabla2] SELECT [Campo1], [Campo2], [Campo3] Campo3 < CONVERT(DATETIME2,'01/01/2013',103)
[dbo].[Tabla2] SELECT [Campo1], [Campo2], [Campo3] Campo3 >= CONVERT(DATETIME2,'01/01/2013',103) AND Campo3 < CONVERT(DATETIME2,'01/01/2019',103)
[dbo].[Tabla2] SELECT [Campo1], [Campo2], [Campo3] Campo3 >= CONVERT(DATETIME2,'01/01/2019',103)

At the destination you also have the same tables. However, "Campo2" has been eliminated from the first table, so the correspondence between fields will be:

Campo1 (Campo1 * 100) -> Campo1 and Campo2 -> Campo3

The operator must check the data compatibility between Campo2 and Campo3; if that assignment is not correct, they should modify the SELECT clause of the source or remove "Campo3" from the SELECT clause of the destination.

It must be taken into account that no transformation can be applied in the SELECT clauses of the destination, since they are simply used to indicate the destination of each field of the source. The data transformations must be performed in the SELECT clauses of the source.
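As an illustration of how the clauses relate (this is not the statement Insert Bulk Bestia actually executes; the real copy is performed between the two databases), the first row of the example corresponds to:

    -- Query built from SelectCampos_SQLServer_Origen, SchemaTableName_Origen and
    -- Condition_Segment_SQLServer_Origen (no condition in this row):
    SELECT [Campo1] * 100 AS Campo1, [Campo2], [Campo3]
    FROM [dbo].[Tabla1];

    -- Destination field list from SelectCampos_SQLServer_Destino: [Campo1], [Campo3]
    -- Positional mapping: source column 1 -> Campo1, source column 2 -> Campo3.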

As in the source, the corresponding segmentation of "Tabla2" must be defined at the destination, adding the WHERE clauses that delimit the data segments. This is necessary for two reasons:

  • When a segment must be loaded, it must be possible to empty said segment at the destination.
  • When an error occurs (a network outage, for example) and Insert Bulk Bestia retries the load of said segment, you must be able to delete the records already copied in that segment to start over.
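For example (hypothetical, built from the second segment of 'Tabla2' in the tables above), emptying a segment at the destination amounts to something like:

    DELETE FROM [dbo].[Tabla2]
    WHERE Campo3 >= CONVERT(DATETIME2,'01/01/2013',103)
      AND Campo3 <  CONVERT(DATETIME2,'01/01/2019',103);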

As in the source table, it is desirable that the fields used in the segmentation of the destination table have the necessary index(es) to make the selection of records efficient. Thus, in a new execution of the data copy, the emptying of the segment will be efficient.

However, in the case of a retry of a segment load that failed, Insert Bulk Bestia must delete all the records in the segment before reloading them. In this case, the indexes may not exist because of the index-removal logic implemented in Insert Bulk Bestia, and if so, the deletion may be excessively slow.

The index-removal logic implemented in Insert Bulk Bestia is:

  • It does not drop the clustered index of those tables whose 'DataKb' field in 'ibb.Cargar_Tablas' is greater than the threshold indicated in 'Minimum size (KB) not to eliminate Cluster index' of the Panel Carga.
  • For those tables that have change tracking enabled by CDC, it does not drop any index used by change tracking, nor their clustered index.
    CDC change tracking: the database has the 'is_cdc_enabled' field set to 1 in 'sys.databases' and the table has the 'is_tracked_by_cdc' field set to 1 in 'sys.tables'.
  • For those tables that have change tracking enabled by CT, it does not drop the Primary Key, nor their clustered index. CT change tracking: the table appears in 'sys.change_tracking_tables'.
  • It does not drop any index from those tables that Insert Bulk Bestia has not previously truncated. Insert Bulk Bestia truncates a table beforehand when:
    • All the table segments are to be loaded and the table is not a memory-optimized table.
    • The segments to be loaded span all the records contained in the table and the table is not a memory-optimized table.

In other words, it is quite likely that Insert Bulk Bestia will drop the indexes of the segmented tables in order to speed up loading.
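As a sketch, the CDC/CT conditions mentioned in the list above can be checked with a query like the following (illustrative only, not the exact query Insert Bulk Bestia runs):

    -- Tables whose indexes are protected because change tracking is enabled by CDC or CT.
    SELECT s.name AS SchemaName, t.name AS TableName,
           t.is_tracked_by_cdc,
           CASE WHEN ctt.object_id IS NOT NULL THEN 1 ELSE 0 END AS is_tracked_by_ct
    FROM sys.tables AS t
    JOIN sys.schemas AS s ON s.schema_id = t.schema_id
    LEFT JOIN sys.change_tracking_tables AS ctt ON ctt.object_id = t.object_id
    WHERE t.is_tracked_by_cdc = 1
       OR ctt.object_id IS NOT NULL;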

Load Selection:

SegmentID ClasificacionTabla CargarTabla Resultado_CargarTabla
0 0 1 1
1 5 0 NULL
2 5 0 NULL
30 50 1 1

The 'SegmentID' field is used to indicate that a table is segmented. Normal segmentation strategy:

  • If a table is not segmented, this field should contain the value 0.
  • If it is segmented, the first segment should contain the value 1.
  • The last segment should contain a high value, for example 100. This will allow you to add intermediate segments when necessary, that is, as the table grows.
Insert Bulk Bestia only uses this field to obtain the order in which the segments will be loaded.

The field 'ClasificacionTabla' is used to group tables. For example, the value 1 can indicate master tables, the value 2 can indicate temporary tables that you rarely want to copy from one database to another, and so on. Generally, the last segment of a segmented table is given a different classification than the rest of its segments. In this way, it can be decided that only the data of the last segment of the large tables is copied, since for tests only the most recently entered data is of interest and not the entire table.

Insert Bulk Bestia will put a 0 in this field when new tables appear since the last time it was run. Thus, it is convenient to review the classification of the tables each time the structure of the database is modified.

This field, 'ClasificacionTabla', is the one used in 'Classifications of Tables to Load' in the Panel Carga of the user interface. There, the user can indicate which tables they wish to copy from one database to another. In this example, you might indicate that you want to copy the tables with classifications 0 and 50.

The field 'CargarTabla' is filled in by Insert Bulk Bestia to indicate which tables and/or segments are going to be copied. This is done in step 10 of the section Algoritmo de Trasvase de Datos: tables to be loaded are marked with a 1 and the rest with a 0. Step 10 can be skipped if the option 'Selección de Tablas por Campo "Clasificación"' of the Panel Carga of the user interface is not enabled; in that case, ibb.Cargar_Tablas_Acciones_Programables_Destino_Iniciales can be used to perform this action, as sketched below.
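For example, a hypothetical action for 'ibb.Cargar_Tablas_Acciones_Programables_Destino_Iniciales' that reproduces the selection of this example (classifications 0 and 50) could be:

    UPDATE ibb.Cargar_Tablas
    SET CargarTabla = CASE WHEN ClasificacionTabla IN (0, 50) THEN 1 ELSE 0 END;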

The field 'Resultado_CargarTabla' will contain the result of the data copy. A 1 means the copy was successful. A value other than 1 indicates the opposite. This field is the one used by Insert Bulk Bestia to perform the retries indicated in 'Nº de Reintentos para la Carga de Datos' of the Panel Carga of the user interface.

ibb.Cargar_Tablas_Actualiza

Stored procedure created in the source and target databases. It is the stored procedure used to update the table ibb.Cargar_Tablas.

Although the user can modify it to adapt it to their specific needs, it is recommended not to do so and to use other mechanisms instead. Insert Bulk Bestia offers the following three stored procedures for this purpose.

ibb.Cargar_Tablas_Acciones_Programables_Origen

Stored procedure created only in the source database. It is executed in step 3 of Algoritmo de Trasvase de Datos.

The objective of the actions programmed in this stored procedure must be the automatic "fine" adjustment of the SELECT and WHERE for the source database, as well as the automatic classification of the tables.
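A couple of hypothetical actions of this kind (the table names, filters, and clauses below are only illustrative) could be:

    -- Classify temporary tables so that they are normally not copied.
    UPDATE ibb.Cargar_Tablas
    SET ClasificacionTabla = 2
    WHERE TableName_Origen LIKE N'Tmp%';

    -- Adjust the source SELECT clause of a specific table to apply a transformation.
    UPDATE ibb.Cargar_Tablas
    SET SelectCampos_SQLServer_Origen = N'SELECT [Campo1] * 100 AS Campo1, [Campo2], [Campo3]'
    WHERE SchemaTableName_Origen = N'[dbo].[Tabla1]';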

ibb.Cargar_Tablas_Acciones_Programables_Destino_Iniciales

Stored procedure created in the source and target databases. It is executed in step 9 of the Algoritmo de Trasvase de Datos.

The objective of the actions programmed in this stored procedure should be the automatic "fine" adjustment of the SELECT and WHERE for the target database, as well as the automatic classification of the tables. Another utility may be to adapt the content of certain views that contain information associated with the target instance. It can also be used to mark those tables that you want to load when you want Insert Bulk Bestia not to carry out this action.

As already described in previous sections, the purpose of creating this stored procedure in the source, if it does not already exist, is to let the user modify its content in that database, so that when the structure is transferred to the destination it already includes those changes and the stored procedure does not have to be reprogrammed over and over again.

ibb.Cargar_Tablas_Acciones_Programables_Destino_Finales

Stored procedure created in the source and target databases. It is executed in step 15 of the Algoritmo de Trasvase de Datos.

The objective of the actions programmed in this stored procedure should be the automatic "fine" adjustment of the copied data, such as hiding confidential information, "path" adjustments when changing environments, and so on.
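Hypothetical examples of such adjustments (the tables and fields below do not belong to Insert Bulk Bestia; they only illustrate the idea) could be:

    -- Hide confidential information after the copy.
    UPDATE dbo.Clientes
    SET Email = CONCAT(N'user', ClienteID, N'@example.com');

    -- Adjust environment-dependent paths.
    UPDATE dbo.Configuracion
    SET Valor = REPLACE(Valor, N'\\ProdServer\', N'\\TestServer\')
    WHERE Valor LIKE N'\\ProdServer\%';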

As already described in previous sections, the purpose of creating this stored procedure in the source, if it does not already exist, is to let the user modify its content in that database, so that when the structure is transferred to the destination it already includes those changes and the stored procedure does not have to be reprogrammed over and over again.

ibb.Cargar_Tablas_Triggers

Table created in the target database. It indicates the existing triggers in the database and the status they are in.

Field Type NULL Check Description
SchemaName sysname NOT NULL Schema of the Trigger's Table
TableName sysname NOT NULL Trigger Table Name
TriggerName sysname NOT NULL Trigger Name
SchemaTableName CONCAT(QUOTENAME(SchemaName), N'.', QUOTENAME(TableName)) PERSISTED NOT NULL Schema and Name of the Trigger's Table
Is_Disabled bit NOT NULL Flag that indicates if the Trigger is disabled.

Since a copy of data will be made from a Source Database to a Destination Database, whose structure is assumed to be a replica of the first, it will also be assumed that the data to be copied is already validated. Therefore, disabling data validations should not be a problem.

This will allow a data load to be carried out, so to speak, out of order, that is, without taking into account the logical structure of the database, which would imply loading in a certain order.

Subsequently, the validations will be reestablished, so that the subsequent modification of the data must follow the rules established in the database.

Thus, in 'ibb.Cargar_Tablas_Triggers' the current state of the triggers is stored before making the copy. Later all those that were enabled will be disabled, the data will be loaded and the triggers that were previously enabled will be re-enabled.
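A simplified sketch of this handling (not the actual implementation) would be:

    -- 1) Disable the triggers that are currently enabled.
    DECLARE @sql nvarchar(max) = N'';
    SELECT @sql = @sql + N'DISABLE TRIGGER ' + QUOTENAME(t.TriggerName)
                + N' ON ' + t.SchemaTableName + N';' + CHAR(13)
    FROM ibb.Cargar_Tablas_Triggers AS t
    WHERE t.Is_Disabled = 0;
    EXEC sp_executesql @sql;

    -- 2) Load the data.

    -- 3) Re-enable only the triggers that were enabled before the copy
    --    (same pattern with ENABLE TRIGGER).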

ibb.Cargar_Tablas_ForeingKeys

Table created in the target database. Stores various information regarding the Foreign Keys (FKs).

Field Type NULL Check Description
SchemaName_Origen sysname NOT NULL Schema of the FK Origin Table
TableName_Origen sysname NOT NULL Name of the FK Origin Table
SchemaTableName_Origen CONCAT(QUOTENAME(SchemaName_Origen), N'.', QUOTENAME(TableName_Origen)) PERSISTED NOT NULL Schema and Name of the FK Origin Table
SchemaName_Destino sysname NOT NULL Schema of the FK Target Table
TableName_Destino sysname NOT NULL Name of the FK Target Table
SchemaTableName_Destino CONCAT(QUOTENAME(SchemaName_Destino), N'.', QUOTENAME(TableName_Destino)) PERSISTED NOT NULL Schema and Name of the FK Target Table
ForeingKeyName sysname NOT NULL Name of the FK
DropForeingKey nvarchar(max) NOT NULL Statement to eliminate the FK
DisableForeingKey nvarchar(max) NOT NULL Statement to disable FK
CreateForeingKey nvarchar(max) NOT NULL Statement pattern for creating the FK
DeleteRecordsForeingKey nvarchar(max) NOT NULL Statement to delete the records from the Origin Table that prevent the FK from being enabled
is_not_trusted bit NOT NULL Cargar_Tablas_ForeingKeys_is_not_trusted_DEF DEFAULT (0) sys.foreign_keys
is_disabled bit NOT NULL Cargar_Tablas_ForeingKeys_is_disabled_DEF DEFAULT (0) sys.foreign_keys
Borrada bit NOT NULL Cargar_Tablas_ForeingKeys_Borrada_DEF DEFAULT (0) Flag that indicates that the FK has been deleted since the last time it was run Insert Bulk Bestia

In order to make the data copy as efficient and simple as possible, the Foreign Keys (FKs) are dropped, thus allowing a "random" loading of tables, that is, without having to follow a logical order dictated by the business logic.

The statements to drop, disable, and create the FKs are stored in this table. The stored statement for creating an FK is not a final statement but a pattern: the '***' will be replaced by WITH NOCHECK or WITH CHECK as indicated by the 'is_not_trusted' and/or 'is_disabled' fields.
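For instance, a hypothetical stored pattern and its expansion (the table, field, and constraint names are illustrative) would be:

    -- Pattern stored in CreateForeingKey:
    --   ALTER TABLE [dbo].[Pedidos] *** ADD CONSTRAINT [FK_Pedidos_Clientes]
    --       FOREIGN KEY ([ClienteID]) REFERENCES [dbo].[Clientes] ([ClienteID]);
    -- With is_not_trusted = 0 and is_disabled = 0, '***' is replaced by WITH CHECK:
    ALTER TABLE [dbo].[Pedidos] WITH CHECK ADD CONSTRAINT [FK_Pedidos_Clientes]
        FOREIGN KEY ([ClienteID]) REFERENCES [dbo].[Clientes] ([ClienteID]);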

The 'DeleteRecordsForeingKey' field contains the statement to be executed to delete the records from the source table that prevent the FK from being enabled. This statement is executed if the option 'Eliminar Registros Foreing Keys' is checked in the Panel Destino. If it is not, and there are records that prevent the FK from being enabled, the FK will not be created, so the next time Insert Bulk Bestia is executed it will put a 1 in the 'Borrada' field.

When Insert Bulk Bestia is run a second time and finds an FK whose 'Borrada' field has the value 1, the FK is no longer in the database. Insert Bulk Bestia will proceed to create it, since it cannot determine whether the FK was actually removed by the user or whether it is missing because in the previous execution it could not be enabled due to the existence of orphan records.

ibb.Cargar_Tablas_Constraints

Table created in the target database. Stores various information regarding Constraints.

Field Type NULL Check Description
SchemaName sysname NOT NULL Schema of the Constraint Table
TableName sysname NOT NULL Name of the Constraint Table
SchemaTableName CONCAT(QUOTENAME(SchemaName), N'.', QUOTENAME(TableName)) PERSISTED NOT NULL Schema and Name of the Constraint Table
ConstraintName sysname NOT NULL Constraint Name
HabilitaConstraint nvarchar(max) NOT NULL Statement pattern to enable Constraint
is_not_trusted bit NOT NULL Cargar_Tablas_Constraints_is_not_trusted_DEF DEFAULT (0) sys.check_constraints
is_disabled bit NOT NULL Cargar_Tablas_Constraints_is_disabled_DEF DEFAULT (0) sys.check_constraints
Borrada bit NOT NULL Cargar_Tablas_Constraints_Borrada_DEF DEFAULT (0) Flag that indicates that the Constraint has been removed since the last time it was executed Insert Bulk Bestia

In order to make a copy of data as efficient as possible, constraints are disabled.

The statements to enable/disable the constraints are stored in this table. The stored statement for enabling a constraint is not a final statement but a pattern: the '***' will be replaced by NOCHECK NOCHECK, NOCHECK CHECK, or CHECK CHECK as indicated by the 'is_not_trusted' and/or 'is_disabled' fields.
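For instance, a hypothetical stored pattern and its possible expansions (the table and constraint names are illustrative) would be:

    -- Pattern stored in HabilitaConstraint:
    --   ALTER TABLE [dbo].[Pedidos] WITH *** CONSTRAINT [CK_Pedidos_Importe];
    ALTER TABLE [dbo].[Pedidos] WITH NOCHECK NOCHECK CONSTRAINT [CK_Pedidos_Importe]; -- remains disabled
    ALTER TABLE [dbo].[Pedidos] WITH NOCHECK CHECK CONSTRAINT [CK_Pedidos_Importe];   -- enabled, not trusted
    ALTER TABLE [dbo].[Pedidos] WITH CHECK CHECK CONSTRAINT [CK_Pedidos_Importe];     -- enabled and trusted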

When Insert Bulk Bestia is executed for the second time, it will put a 1 in the 'Borrada' field for all those constraints that have disappeared. Since it cannot determine whether the constraint was actually removed by the user or whether it is missing because it could not be enabled in the previous run, Insert Bulk Bestia will try to enable it.

Also, Insert Bulk Bestia will record certain messages in the table ibb.Log_Errores for those constraints that do not seem reasonable.

ibb.Cargar_Tablas_Indices

Table created in the target database. Stores various information regarding the Indices.

Field Type NULL Check Description
SchemaName sysname NOT NULL Index Table Schema
TableName sysname NOT NULL Index Table Name
SchemaTableName CONCAT(QUOTENAME(SchemaName), N'.', QUOTENAME(TableName)) PERSISTED NOT NULL Index Table Schema and Name
IndexName sysname NOT NULL Index Name
CreateIndex nvarchar(max) NOT NULL Statement for the creation of the Index
DopIndex nvarchar(max) NOT NULL Statement for the removal of the Index
DeshabilitarIndex nvarchar(max) NOT NULL Statement for disabling a Unique type constraint.
HabilitarConstraint nvarchar(max) NOT NULL Statement pattern for enabling a Unique type constraint.
RebuildIndex nvarchar(max) NOT NULL Statement for the reconstruction of an Index.
RebuildAllIndex nvarchar(max) NOT NULL Statement for the reconstruction of all the Indexes of the Table.
type tinyint NOT NULL Cargar_Tablas_Indices_type_DEF DEFAULT (0) sys.indexes
type_desc nvarchar(60) NOT NULL Cargar_Tablas_Indices_type_desc_DEF DEFAULT (N'HEAP') sys.indexes
is_primary_key bit NOT NULL Cargar_Tablas_Indices_primary_DEF DEFAULT (0) sys.indexes
is_unique_constraint bit NOT NULL Cargar_Tablas_Indices_unique_constraint_DEF DEFAULT (0) sys.indexes
is_unique bit NOT NULL Cargar_Tablas_Indices_unique_DEF DEFAULT (0) sys.indexes
is_disabled bit NOT NULL Cargar_Tablas_Indices_disabled_DEF DEFAULT (0) sys.indexes
no_recompute bit NOT NULL Cargar_Tablas_Indices_no_recompute_DEF DEFAULT (0) sys.stats
is_tracked_by_cdc bit NOT NULL Cargar_Tablas_Indices_is_tracked_by_cdc_DEF DEFAULT (0) sys.tables
is_tracked_by_ct bit NOT NULL Cargar_Tablas_Indices_is_tracked_by_ct_DEF DEFAULT (0) sys.change_tracking_tables
Borrado bit NOT NULL Cargar_Tablas_Indices_Borrado_DEF DEFAULT (0) Flag that indicates that the Index has been removed since the last time Insert Bulk Bestia was executed

In order to make a copy of data as efficient as possible, the indexes are dropped.

The statements to drop and create the indexes and Unique-type constraints are stored in this table.

When Insert Bulk Bestia is executed for the second time, it will put a 1 in the 'Borrado' field for all those indexes that have disappeared. Since it cannot determine whether the index was actually deleted by the user or whether it is missing because it could not be created in the previous run, Insert Bulk Bestia will try to create it.

Also, Insert Bulk Bestia will record certain messages in the table ibb.Log_Errores for those indexes that are disabled.

As mentioned above, Insert Bulk Bestia follows certain logic to decide whether a certain index should be removed:

  • It does not drop the clustered index of those tables whose 'DataKb' field in 'ibb.Cargar_Tablas' is greater than the threshold indicated in 'Minimum size (KB) not to eliminate Cluster index' of the Panel Carga.
  • For those tables that have change tracking enabled by CDC, it does not drop any index used by change tracking, nor their clustered index.
    CDC change tracking: the database has the 'is_cdc_enabled' field set to 1 in 'sys.databases' and the table has the 'is_tracked_by_cdc' field set to 1 in 'sys.tables'.
  • For those tables that have change tracking enabled by CT, it does not drop the Primary Key, nor their clustered index. CT change tracking: the table appears in 'sys.change_tracking_tables'.
  • It does not drop any index from those tables that Insert Bulk Bestia has not previously truncated. Insert Bulk Bestia truncates a table beforehand when:
    • All the table segments are to be loaded and the table is not a memory-optimized table.
    • The segments to be loaded span all the records contained in the table and the table is not a memory-optimized table.

For the re-creation and rebuilding of the indexes, Insert Bulk Bestia offers certain options. All of them are in the Panel Carga.

Use TEMPDB to Create Indexes

Uses the tempdb database to sort the intermediate results when the indexes are built.

This parameter, together with the following one, is key to improving the contiguity of the indexes, that is, improving the final performance of the database.

If this option is checked, the time needed to build the indexes is reduced, and their contiguity also improves, at the cost of using space in the tempdb database, which must therefore be appropriately sized.

This option should not be checked when the size of tempdb is critical, or when it is shared with other databases whose workload is critical and you do not want to make intensive use of tempdb.
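This option presumably maps to the SORT_IN_TEMPDB option of CREATE INDEX; an illustrative statement (with hypothetical names) would be:

    CREATE NONCLUSTERED INDEX IX_Tabla_Campo ON dbo.Tabla (Campo)
    WITH (SORT_IN_TEMPDB = ON);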

Creation of Indices with MAXDOP = 0 or MAXDOP = 1

Uses the maximum degree of parallelism in the creation of the indexes, or serializes that process.

If this option is checked, the time required to build the indexes is reduced by using all the available CPUs. However, the contiguity of the indexes worsens, which worsens the final performance of the database.

As an alternative to this configuration, it is recommended to set this parameter to False and use parallel index creation. In this way, each CPU will be used to build a single index, but as many indexes as there are available CPUs on the server will be built in parallel.
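An illustrative statement combining both recommendations (MAXDOP = 1 per index so that several indexes can be built in parallel; the names are hypothetical):

    CREATE NONCLUSTERED INDEX IX_Tabla_Campo ON dbo.Tabla (Campo)
    WITH (MAXDOP = 1, SORT_IN_TEMPDB = ON);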

Creation of Parallel Indices

Uses the Insert Bulk Bestia libraries integrated into SQL Server to execute processes in parallel. Certain license types may not allow this.

If this option is checked, the indexes are created in parallel, making very intensive use of CPU and disks. In the destination database the stored procedure 'ibb.Cargar_Tablas_Create_Index_Parallelo' is created, which carries out this process.

This stored procedure is an example of how to use the libraries to execute T-SQL statements in parallel. In it, a job is created that builds the indexes of a given table in a given order, and then as many jobs are launched in parallel as there are cores available to the instance.

NOTE: If there is a problem with SQLCLR, such as freezing or entering an infinite loop, the AppDomain can be unloaded with:

ALTER ASSEMBLY [SQLServerAsyncExecProcedures] WITH PERMISSION_SET = EXTERNAL_ACCESS
ALTER ASSEMBLY [SQLServerAsyncExecProcedures] WITH PERMISSION_SET = UNSAFE

An example could be: Error No. 6522, whose message contains 'SQLServerAsyncExecProcedures.MetodosAuxiliares.TryAcquireLock'. This error indicates that the to-do dictionary is locked, so a lock cannot be acquired.

In the following chapter, the 'SQLServerAsyncExecProcedures' library is described in detail and an example of the use of these libraries is detailed later.