It is a very common requirement to perform a task that inserts, updates, or deletes many rows in a single operation. For example, an application reads a data file from a device and massages it into INSERT statements for posting to a database table, or a replication service processes batches of changes between satellite and parent databases. Many similar statements are posted within a single transaction, usually by means of a prepared statement and replaceable runtime parameters.
A statement for a batch insert may look something like this:

INSERT INTO DATATABLE (DATA1, DATA2, DATA3, DATA4, DATA5, ....more columns)
VALUES ('x', 'y', 'z', 99, '2004-12-25', ....more values);
Using parameters, it looks like this:

INSERT INTO DATATABLE (DATA1, DATA2, DATA3, DATA4, DATA5, ....more columns)
VALUES (?, ?, ?, ?, ?, ....more values);

Note that parameter placeholders are never quoted, even for string and date columns; the value bound to each placeholder at execution time supplies the correct type.
Often, a stored procedure will be the most elegant way to set up a repeatable batch operation, especially if there is a requirement to transform data en route to the database table or to insert rows into multiple tables.
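As a minimal sketch of that approach, the following Firebird stored procedure (the procedure name, column names, and types are hypothetical, chosen to match the earlier example) transforms one value en route to the target table. The client prepares a single EXECUTE PROCEDURE statement and posts it repeatedly with fresh parameter values:

```sql
SET TERM ^ ;

CREATE PROCEDURE INS_DATAROW (
  DATA1 VARCHAR(20),
  DATA2 VARCHAR(20),
  DATA4 INTEGER,
  DATA5 DATE)
AS
BEGIN
  /* Example transformation en route: normalize DATA1 to upper case */
  INSERT INTO DATATABLE (DATA1, DATA2, DATA4, DATA5)
  VALUES (UPPER(:DATA1), :DATA2, :DATA4, :DATA5);
END ^

SET TERM ; ^
```

The client-side batch then becomes a loop of EXECUTE PROCEDURE INS_DATAROW (?, ?, ?, ?) calls within one transaction, with all transformation logic kept on the server.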
Preventing Slowdown on Batches
Two behaviors can cause huge batches to become progressively slower during execution: the gradual deterioration of index balance as large numbers of new nodes are added, and the allocation of an unnecessarily large number of new database pages.
A strategy for mitigating batch slowdown when other users are not accessing the input table is to deactivate the secondary indexes. This is simple to do:

ALTER INDEX index-name INACTIVE;
When the large batch operation is finished, reactivate and rebuild the indexes:
ALTER INDEX index-name ACTIVE;
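A sketch of the whole sequence, assuming a hypothetical index IDX_DATA1 on the example table DATATABLE. Each secondary index must be deactivated individually; the system table RDB$INDICES can be queried to find them:

```sql
-- Find the indexes defined on the target table
SELECT RDB$INDEX_NAME
FROM RDB$INDICES
WHERE RDB$RELATION_NAME = 'DATATABLE';

-- Deactivate a secondary index before the bulk load
ALTER INDEX IDX_DATA1 INACTIVE;
COMMIT;

/* ...run the batch inserts here... */

-- Reactivating rebuilds the index from scratch, freshly balanced
ALTER INDEX IDX_DATA1 ACTIVE;
COMMIT;
```

Rebuilding on reactivation is itself an advantage: a freshly built index is optimally balanced, whereas an index maintained row by row through a huge batch is not.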
Partitioning Large Batches
Unfortunately, you cannot disable the indexes that enforce primary and foreign keys without dropping the actual constraints. Often, it will be impracticable to drop constraints because of chained dependencies. For this reason and others, judicious partitioning of the task is recommended to help offset both the deterioration of index balance and the generation of an unnecessarily large number of new database pages.
When posting large numbers of inserts in batch, it is preferable to partition them into groups and commit work about every 5,000-10,000 rows. The optimal partition size varies according to the size of the row and the database page size, so the best group size for a particular task may fall outside this range.
When you need to partition huge batches, a stored procedure is often the way to go. You can use local counter variables, event messages, and return values to keep the procedure's progress in step with the calling client.
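A sketch of that pattern, under the assumption of a hypothetical staging table INPUT_DATA (with an ID key and a POSTED flag) feeding the example table DATATABLE. PSQL cannot commit a transaction itself, so the procedure posts one partition per call and returns a counter; the client commits between calls and stops when the counter reaches zero:

```sql
SET TERM ^ ;

CREATE PROCEDURE POST_BATCH (BATCH_SIZE INTEGER)
RETURNS (ROWS_POSTED INTEGER)
AS
  DECLARE VARIABLE V_ID INTEGER;
  DECLARE VARIABLE V_DATA1 VARCHAR(20);
BEGIN
  ROWS_POSTED = 0;
  /* Take at most BATCH_SIZE unposted rows from the staging table */
  FOR SELECT FIRST (:BATCH_SIZE) ID, DATA1
      FROM INPUT_DATA
      WHERE POSTED = 0
      INTO :V_ID, :V_DATA1
  DO
  BEGIN
    INSERT INTO DATATABLE (DATA1) VALUES (:V_DATA1);
    UPDATE INPUT_DATA SET POSTED = 1 WHERE ID = :V_ID;
    ROWS_POSTED = ROWS_POSTED + 1;
  END
  /* ROWS_POSTED is returned to the caller on exit */
END ^

SET TERM ; ^
```

The client-side loop is then simply:

EXECUTE PROCEDURE POST_BATCH (5000);
COMMIT;
-- repeat until ROWS_POSTED comes back as 0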