Wednesday, December 9, 2009

Online table move - reorganisation using DBMS_REDEFINITION


Let's say you have little or no downtime available to move segments to a new tablespace.
Or, perhaps, you have very large segments sitting in a dictionary-managed tablespace with very small extents.
In one such tablespace I have segments of a few GB each, made up of thousands of extents of just a few KB.
If you execute a DDL statement, such as ALTER TABLE MOVE [ATM], TRUNCATE or DROP, on those segments,
you will notice that your session takes A LOT of time to complete, with "Row Cache Lock" and "DFS Lock Handle" waits.
If you trace your session, you will discover a lot of activity against the fet$ and uet$ tables.
Since the tablespace is dictionary managed, those thousands of extents must be deleted from uet$ [used extents dictionary table],
inserted into fet$ [free extents dictionary table] and then released.
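
To see whether a segment suffers from this, something like the following should show the extent management of its tablespace and how many extents it holds [the bracketed names are placeholders, as everywhere in this post]:

SELECT TABLESPACE_NAME, EXTENT_MANAGEMENT
FROM DBA_TABLESPACES
WHERE TABLESPACE_NAME = '[TBS]';

SELECT COUNT(*) EXTENTS, SUM(BYTES)/1024/1024 SIZE_MB
FROM DBA_EXTENTS
WHERE OWNER = '[USER]'
AND SEGMENT_NAME = '[SEGMENT]';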


A quick way out of these situations is to use the DBMS_REDEFINITION package.
Let's say you want to move a table [ORIG TABLE], owned by schema [USER], from one tablespace [TBS1] to another [TBS2].


Do not forget to give [USER] the necessary quota on [TBS2].
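
For example, assuming unlimited quota is acceptable:

ALTER USER [USER] QUOTA UNLIMITED ON [TBS2];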


First of all, check if [ORIG TABLE] can be used by the redefinition process, by executing:

EXEC DBMS_REDEFINITION.CAN_REDEF_TABLE('[USER]','[ORIG TABLE]');

If you get an error, it is probably because [ORIG TABLE] has no primary key.
In that case you may use the ROWID method, so the following execution should return no error:

EXEC DBMS_REDEFINITION.CAN_REDEF_TABLE('[USER]','[ORIG TABLE]',2);
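
The literal 2 is the value of the package constant for the ROWID option [1 is for the primary key option], so the same check can also be written as:

EXEC DBMS_REDEFINITION.CAN_REDEF_TABLE('[USER]','[ORIG TABLE]',DBMS_REDEFINITION.CONS_USE_ROWID);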

Now, create an interim table [INT TABLE]: an empty, exact copy of [ORIG TABLE].
You may use a "Create Table As Select" [CTAS] statement with a "where 1=2" clause, but be aware that column default values will not be transferred this way.
The best way is to get the creation script of [ORIG TABLE] and edit it accordingly to create [INT TABLE].
You can get it from a tool like TOAD, or by executing the following command:

SELECT DBMS_METADATA.GET_DDL('TABLE','[ORIG TABLE]','[USER]') FROM DUAL;
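
Note that if you run this in SQL*Plus, the returned CLOB is truncated by default; settings like these, before the query, should give you the complete script:

SET LONG 1000000
SET PAGESIZE 0
SET LINESIZE 200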

You should not transfer the "big size, small extents" problem to [INT TABLE].
A rule of thumb could be to use 1/10 of the current table size for the INITIAL extent and 1/100 for the NEXT extent:

SELECT SEGMENT_NAME, BYTES/1024 SIZE_K, CEIL(BYTES/1024/10) INITIAL_K, CEIL(BYTES/1024/100) NEXT_K
FROM DBA_SEGMENTS
WHERE SEGMENT_NAME = '[ORIG TABLE]'
AND OWNER = '[USER]';
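
For example, for a table of about 2 GB, the edited creation script could end up looking something like this [the column list and storage sizes are, of course, just an illustration]:

CREATE TABLE [USER].[INT TABLE]
(
  [column definitions copied from the GET_DDL output]
)
TABLESPACE [TBS2]
STORAGE (INITIAL 200M NEXT 20M);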

At this point, you have [ORIG TABLE] in tablespace [TBS1] and an empty [INT TABLE] in tablespace [TBS2].


Now, start the redefinition process.
If your table is huge, enable parallel DML and query first:

ALTER SESSION FORCE PARALLEL DML PARALLEL [#];
ALTER SESSION FORCE PARALLEL QUERY PARALLEL [#];


Then, for the primary key method:

EXEC DBMS_REDEFINITION.START_REDEF_TABLE(uname=>'[USER]',orig_table=>'[ORIG TABLE]',int_table=>'[INT TABLE]');

Or, for the ROWID method:

EXEC DBMS_REDEFINITION.START_REDEF_TABLE(uname=>'[USER]',orig_table=>'[ORIG TABLE]',int_table=>'[INT TABLE]',options_flag=>2);


If your database is 9i, you have to create the appropriate triggers, constraints, privileges and indexes manually on [INT TABLE].
If your database is 10g or higher, you may use the COPY_TABLE_DEPENDENTS procedure to transfer constraints, triggers and privileges to [INT TABLE]:

DECLARE
   ERR   PLS_INTEGER;   -- OUT parameter: number of errors encountered
BEGIN
   DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS (UNAME           => '[USER]',
                                            ORIG_TABLE      => '[ORIG TABLE]',
                                            INT_TABLE       => '[INT TABLE]',
                                            COPY_INDEXES    => 0,      -- do not copy indexes; build them from scripts
                                            NUM_ERRORS      => ERR,
                                            IGNORE_ERRORS   => TRUE);  -- do not stop on errors [see below]
   DBMS_OUTPUT.PUT_LINE ('Errors: ' || ERR);   -- needs SET SERVEROUTPUT ON
END;
/

Better use creation scripts for the indexes and not this procedure; that is why copy_indexes is set to 0 above.
Set ignore_errors to TRUE, because the NOT NULL constraints already exist on [INT TABLE] [they came with its creation script] and setting it to FALSE will result in an error.
Right after the execution, query the DBA_REDEFINITION_ERRORS view for errors not related to the NOT NULL constraints and investigate them.
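
Something like this should do [adjust the column list to the ones available in your version]:

SELECT OBJECT_TYPE, OBJECT_NAME, DDL_TXT
FROM DBA_REDEFINITION_ERRORS
WHERE BASE_TABLE_OWNER = '[USER]'
AND BASE_TABLE_NAME = '[ORIG TABLE]';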


Now, you may finish the redefinition process:

EXEC DBMS_REDEFINITION.FINISH_REDEF_TABLE(uname=>'[USER]',orig_table=>'[ORIG TABLE]',int_table=>'[INT TABLE]');

Querying the DBA_REDEFINITION_OBJECTS view should return no rows.


Now you should have [ORIG TABLE] in tablespace [TBS2] and [INT TABLE] in tablespace [TBS1]; the interim table can now be safely dropped.
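
To verify the swap and clean up:

SELECT SEGMENT_NAME, TABLESPACE_NAME
FROM DBA_SEGMENTS
WHERE OWNER = '[USER]'
AND SEGMENT_NAME IN ('[ORIG TABLE]','[INT TABLE]');

DROP TABLE [USER].[INT TABLE];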


Know this: DML statements [INSERT, UPDATE, DELETE] are permitted against [ORIG TABLE] during the redefinition process.
The committed changes are kept in [ORIG TABLE] only, until you execute FINISH_REDEF_TABLE, which synchronizes both tables.
The goal, which is actually achieved, is that committed changes always end up in [ORIG TABLE], during and after the redefinition process.
BUT any uncommitted transactions will result in a "Wait for Table Lock" wait for the session running the FINISH_REDEF_TABLE procedure.
Only when these transactions are committed will FINISH_REDEF_TABLE complete successfully and, with it, the whole redefinition process.
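

One last tip: if a lot of activity hits [ORIG TABLE] between START_REDEF_TABLE and FINISH_REDEF_TABLE, you can call the SYNC_INTERIM_TABLE procedure periodically in between, so that the final synchronization done by FINISH_REDEF_TABLE has less work to do and the table is locked for a shorter time:

EXEC DBMS_REDEFINITION.SYNC_INTERIM_TABLE(uname=>'[USER]',orig_table=>'[ORIG TABLE]',int_table=>'[INT TABLE]');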