Possible issues when a snapshot is refreshed and a trigger then inserts a million records into a delta table

by Deepika
(Delhi)



The problem is:

At the data warehouse end, a fast-refresh snapshot is being refreshed and about a million records will be deleted.

Based on the snapshot, there is a trigger which inserts those million records into a delta table.

I want to know which parameters can be checked at the database level to ensure this runs smoothly.

This is not an easy question to answer, as there is not much information to work with; however, specific things to check would be the size of the undo tablespace, the degree of parallelism, and the size of the redo logs.

The reasons are as follows. If you're deleting a million rows and then inserting those million rows into another table, you will need a lot of undo space (so that the changes can be rolled back) and a lot of redo space (so that the tables can be recovered in the event of a disaster and a subsequent database restore); a suitable degree of parallelism helps the whole job run faster. Remember that the deletes and inserts also have to maintain the associated indexes, which generates further undo and redo. Because a large amount of redo will be generated, you won't want the redo logs to be too small, otherwise frequent log switching could slow the operation down, and you will also need enough disk space to hold the redo logs that Oracle generates.
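To put numbers behind those checks, here is a minimal sketch of the data-dictionary queries a DBA might run. They use standard Oracle views (DBA_TABLESPACES, DBA_DATA_FILES, V$LOG, V$LOG_HISTORY) and assume you have the privileges to query them; the one-day window in the last query is arbitrary.

    -- Total size of the undo tablespace(s).
    SELECT t.tablespace_name,
           ROUND(SUM(f.bytes) / 1024 / 1024) AS size_mb
    FROM   dba_tablespaces t
    JOIN   dba_data_files  f ON f.tablespace_name = t.tablespace_name
    WHERE  t.contents = 'UNDO'
    GROUP  BY t.tablespace_name;

    -- Size and status of each online redo log group.
    SELECT group#, ROUND(bytes / 1024 / 1024) AS size_mb, status
    FROM   v$log;

    -- Log switches per hour over the last day; frequent switches
    -- while the refresh runs suggest the redo logs are too small.
    SELECT TO_CHAR(first_time, 'YYYY-MM-DD HH24') AS hour,
           COUNT(*) AS switches
    FROM   v$log_history
    WHERE  first_time > SYSDATE - 1
    GROUP  BY TO_CHAR(first_time, 'YYYY-MM-DD HH24')
    ORDER  BY hour;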

Having said that, it raises the question: is there a better way to do this? We can't answer that with certainty because we don't know enough about your application and your database, but there are a few things you could consider.
  • It may be possible to partition your data so that you could do a partition exchange, swapping an empty partition for the one holding all the data. That way you avoid all the undo and redo required for a million deletes and a million inserts (see the first sketch after this list).
  • Instead of deleting the rows you don't want, you could create a new table containing only the rows you want to keep, build the indexes on the new table, and swap it for the original table. This is similar to the first option but still generates redo for the inserts. The table and index(es) can be created with NOLOGGING to suppress redo generation, although this means they could not be recovered after a restore until a fresh backup is taken. The index(es) can also be built in parallel to make the whole process faster (see the second sketch after this list).
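As an illustration of the first option, here is a minimal sketch of a partition exchange. All object names (delta_tab, p_current, delta_stage) are hypothetical, and it assumes the delta table is already partitioned so that the rows to be flushed sit in a single partition:

    -- Build an empty table with the same structure as delta_tab.
    CREATE TABLE delta_stage AS
    SELECT * FROM delta_tab WHERE 1 = 0;

    -- Swap the full partition with the empty table. This is a
    -- data-dictionary operation, so it generates almost no undo/redo.
    ALTER TABLE delta_tab
      EXCHANGE PARTITION p_current WITH TABLE delta_stage
      WITHOUT VALIDATION;

    -- delta_stage now holds the old rows and p_current is empty;
    -- the old rows can be discarded cheaply.
    DROP TABLE delta_stage PURGE;

And a minimal sketch of the second option, the create-and-swap approach. Again the names (big_tab, keep_flag, some_col) are hypothetical and the degree of parallelism is arbitrary:

    -- Build the replacement table without redo and in parallel.
    -- NOLOGGING objects cannot be recovered from the redo stream,
    -- so take a backup once the rebuild is finished.
    CREATE TABLE big_tab_new NOLOGGING PARALLEL 4 AS
    SELECT *
    FROM   big_tab
    WHERE  keep_flag = 'Y';   -- the rows you want to keep

    -- Rebuild the index(es) in parallel, also without redo.
    CREATE INDEX big_tab_new_ix ON big_tab_new (some_col)
      NOLOGGING PARALLEL 4;

    -- Swap the new table in. Grants, constraints, triggers and
    -- synonyms on the original must be re-created manually.
    RENAME big_tab TO big_tab_old;
    RENAME big_tab_new TO big_tab;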

