We will be given 100 new 200GB filesystems, and we will leave 10GB empty on each, so only 190GB per filesystem will be used.
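Note that 190GB here means 190 GiB, i.e. 190 * 1024 * 1024 * 1024 = 204,010,946,560 bytes; this is the SIZE_MAX value the procedure below uses. A quick sanity check:
SELECT 190 * 1024 * 1024 * 1024 AS SIZE_MAX_BYTES FROM DUAL; --RETURNS 204010946560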
The old naming convention is "/oracleXXX/oradata/" and the new one will be "/oraerpXXX/oradata/".
A manual approach is practically impossible for a database of this size, even before we consider the "physical reads" requirement.
To order our datafiles by physical reads (or writes) we can use the following script, which uses AWR snapshots (10g+):
SELECT A.FILENAME,
D.TABLESPACE_NAME,
D.CONTENTS,
E.BYTES,
NVL (SUM (A.PHYRDS), 0) READS,
NVL (SUM (A.PHYWRTS), 0) WRITES
FROM DBA_HIST_FILESTATXS A, V$TABLESPACE C, DBA_TABLESPACES D, DBA_DATA_FILES E
WHERE A.SNAP_ID = 15400 --LAST SNAPSHOT
AND A.TS# = C.TS#
AND C.NAME = D.TABLESPACE_NAME
AND A.FILENAME = E.FILE_NAME
GROUP BY A.FILENAME,
D.TABLESPACE_NAME,
D.CONTENTS,
E.BYTES
UNION
SELECT A.FILENAME,
D.TABLESPACE_NAME,
D.CONTENTS,
E.BYTES,
NVL (SUM (A.PHYRDS), 0) READS,
NVL (SUM (A.PHYWRTS), 0) WRITES
FROM DBA_HIST_TEMPSTATXS A, V$TABLESPACE C, DBA_TABLESPACES D, DBA_TEMP_FILES E
WHERE A.SNAP_ID = 15400 --LAST SNAPSHOT
AND A.TS# = C.TS#
AND C.NAME = D.TABLESPACE_NAME
AND A.FILENAME = E.FILE_NAME
GROUP BY A.FILENAME,
D.TABLESPACE_NAME,
D.CONTENTS,
E.BYTES
ORDER BY 5 DESC;
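The snapshot ID (15400 above) is installation-specific; the most recent one can be taken from DBA_HIST_SNAPSHOT:
SELECT MAX (SNAP_ID) LAST_SNAP FROM DBA_HIST_SNAPSHOT;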
Or we may use a script reading GV$FILESTAT and GV$TEMPSTAT, which keep statistics since instance startup:
SELECT B.FILE_NAME,
C.TABLESPACE_NAME,
C.CONTENTS,
B.BYTES,
NVL (SUM (A.PHYRDS), 0) READS,
NVL (SUM (A.PHYWRTS), 0) WRITES
FROM GV$FILESTAT A, DBA_DATA_FILES B, DBA_TABLESPACES C
WHERE A.FILE# = B.FILE_ID AND B.TABLESPACE_NAME = C.TABLESPACE_NAME
GROUP BY B.FILE_NAME,
C.TABLESPACE_NAME,
C.CONTENTS,
B.BYTES
UNION
SELECT B.FILE_NAME,
C.TABLESPACE_NAME,
C.CONTENTS,
B.BYTES,
NVL (SUM (A.PHYRDS), 0) READS,
NVL (SUM (A.PHYWRTS), 0) WRITES
FROM GV$TEMPSTAT A, DBA_TEMP_FILES B, DBA_TABLESPACES C
WHERE A.FILE# = B.FILE_ID AND B.TABLESPACE_NAME = C.TABLESPACE_NAME
GROUP BY B.FILE_NAME,
C.TABLESPACE_NAME,
C.CONTENTS,
B.BYTES
ORDER BY 5 DESC; --6 for WRITES
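Since GV$FILESTAT and GV$TEMPSTAT are cumulative from startup, check how long each instance has been up, so you know what period these counters actually cover:
SELECT INSTANCE_NAME, STARTUP_TIME FROM GV$INSTANCE;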
First, we create these tables:
CREATE TABLE HELPDESK.DISKS
(
DISK_ID NUMBER,
SIZE_MAX NUMBER,
SIZE_USED NUMBER,
PATH_NAME VARCHAR2 (513 BYTE)
)
TABLESPACE HELPDESK
PCTUSED 0
PCTFREE 10
INITRANS 1
MAXTRANS 255
STORAGE (INITIAL 128K
NEXT 128K
MINEXTENTS 1
MAXEXTENTS UNLIMITED
PCTINCREASE 0
BUFFER_POOL DEFAULT)
LOGGING
NOCOMPRESS
NOCACHE
NOPARALLEL
MONITORING;
CREATE TABLE HELPDESK.DFILES
(
DISK_ID NUMBER,
NAME VARCHAR2 (513 BYTE),
NAME_NEW VARCHAR2 (513 BYTE),
BYTES NUMBER,
TABLESPACE_NAME VARCHAR2 (30 BYTE),
CONTENTS VARCHAR2 (9 BYTE),
READS NUMBER,
WRITES NUMBER,
BACKUP_COM VARCHAR2 (2000 BYTE),
ALTER_COM VARCHAR2 (2000 BYTE)
)
TABLESPACE HELPDESK
PCTUSED 0
PCTFREE 10
INITRANS 1
MAXTRANS 255
STORAGE (INITIAL 128K
NEXT 128K
MINEXTENTS 1
MAXEXTENTS UNLIMITED
PCTINCREASE 0
BUFFER_POOL DEFAULT)
LOGGING
NOCOMPRESS
NOCACHE
NOPARALLEL
MONITORING;
GRANT ALL ON HELPDESK.DISKS TO SYSTEM;
GRANT ALL ON HELPDESK.DFILES TO SYSTEM;
In the DISKS table, we will keep track of how much of each new filesystem will be used (SIZE_USED).
In DFILES, we will store each datafile's new location (NAME_NEW) and automatically generate both the commands NetBackup needs to restore the datafiles to their new locations (BACKUP_COM) and the "ALTER DATABASE RENAME FILE [old datafile location] TO [new datafile location]" commands (ALTER_COM).
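For example, for the busiest datafile in the results shown further down, the two generated commands will be:
change /oracle89/oradata/ar_payment_schedules_all_011.dbf to /oraerp1/oradata/ar_payment_schedules_all_011.dbf
ALTER DATABASE RENAME FILE '/oracle89/oradata/ar_payment_schedules_all_011.dbf' TO '/oraerp1/oradata/ar_payment_schedules_all_011.dbf';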
We create and run the following procedure:
CREATE OR REPLACE PROCEDURE SYSTEM.DISK_GROUPING
IS
--ORDERING IN THESE CURSORS IS NEEDED, IF YOU WANT TO TAKE INTO ACCOUNT THE PHYSICAL READS
CURSOR C1
IS
SELECT NAME, BYTES, TABLESPACE_NAME
FROM HELPDESK.DFILES
ORDER BY READS DESC;
CURSOR C2
IS
SELECT DISK_ID,
SIZE_MAX,
SIZE_USED,
PATH_NAME
FROM HELPDESK.DISKS
ORDER BY SIZE_USED;
BEGIN
EXECUTE IMMEDIATE 'TRUNCATE TABLE HELPDESK.DISKS';
FOR I IN 1 .. 100
LOOP --CREATE 100 DISKS WITH 190GB (204010946560 bytes) USABLE SPACE
INSERT INTO HELPDESK.DISKS (DISK_ID,
SIZE_MAX,
SIZE_USED,
PATH_NAME)
VALUES (I,
204010946560,
0,
'/oraerp' || I || '/oradata/');
END LOOP;
EXECUTE IMMEDIATE 'TRUNCATE TABLE HELPDESK.DFILES';
INSERT INTO HELPDESK.DFILES (NAME,
TABLESPACE_NAME,
CONTENTS,
BYTES,
READS,
WRITES) --PUT DATAFILES IN DFILES TABLE
SELECT B.FILE_NAME,
C.TABLESPACE_NAME,
C.CONTENTS,
B.BYTES,
NVL (SUM (A.PHYRDS), 0) READS,
NVL (SUM (A.PHYWRTS), 0) WRITES
FROM GV$FILESTAT A, DBA_DATA_FILES B, DBA_TABLESPACES C
WHERE A.FILE# = B.FILE_ID AND B.TABLESPACE_NAME = C.TABLESPACE_NAME
GROUP BY B.FILE_NAME,
C.TABLESPACE_NAME,
C.CONTENTS,
B.BYTES;
INSERT INTO HELPDESK.DFILES (NAME,
TABLESPACE_NAME,
CONTENTS,
BYTES,
READS,
WRITES) --PUT TEMPFILES IN DFILES TABLE
SELECT B.FILE_NAME,
C.TABLESPACE_NAME,
C.CONTENTS,
B.BYTES,
NVL (SUM (A.PHYRDS), 0) READS,
NVL (SUM (A.PHYWRTS), 0) WRITES
FROM GV$TEMPSTAT A, DBA_TEMP_FILES B, DBA_TABLESPACES C
WHERE A.FILE# = B.FILE_ID AND B.TABLESPACE_NAME = C.TABLESPACE_NAME
GROUP BY B.FILE_NAME,
C.TABLESPACE_NAME,
C.CONTENTS,
B.BYTES;
COMMIT;
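--GREEDY PLACEMENT: TAKE THE DATAFILES FROM THE BUSIEST DOWN (C1) AND PUT EACH
--ONE ON THE LEAST-USED DISK THAT STILL HAS ENOUGH FREE SPACE. C2 IS REOPENED
--FOR EVERY DATAFILE, SO IT ALWAYS SEES THE UPDATED SIZE_USED VALUES.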
FOR DFILE IN C1
LOOP
FOR DISK IN C2
LOOP
IF DFILE.BYTES < DISK.SIZE_MAX - DISK.SIZE_USED
THEN
UPDATE HELPDESK.DFILES
SET DISK_ID = DISK.DISK_ID,
NAME_NEW =
DISK.PATH_NAME
|| SUBSTR (DFILE.NAME,
INSTR (DFILE.NAME,
'/',
1,
3)
+ 1) --KEEP ONLY THE FILENAME, I.E. EVERYTHING AFTER THE THIRD '/' OF '/oracleXXX/oradata/...'
WHERE NAME = DFILE.NAME;
UPDATE HELPDESK.DFILES
SET BACKUP_COM = 'change ' || NAME || ' to ' || NAME_NEW,
ALTER_COM =
'ALTER DATABASE RENAME FILE '''
|| NAME
|| ''' TO '''
|| NAME_NEW
|| ''';'
WHERE NAME = DFILE.NAME;
UPDATE HELPDESK.DISKS
SET SIZE_USED = SIZE_USED + DFILE.BYTES
WHERE DISK_ID = DISK.DISK_ID;
COMMIT;
EXIT;
END IF;
END LOOP;
END LOOP;
END DISK_GROUPING;
/
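To run the procedure and extract the generated commands (e.g. for spooling to scripts), something like this will do:
EXEC SYSTEM.DISK_GROUPING;
SELECT BACKUP_COM FROM HELPDESK.DFILES ORDER BY DISK_ID; --NetBackup DIRECTIVES
SELECT ALTER_COM FROM HELPDESK.DFILES ORDER BY DISK_ID; --TYPICALLY RUN WITH THE DATABASE MOUNTED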
Now, querying DISKS (SELECT * FROM HELPDESK.DISKS ORDER BY DISK_ID;), we see:
DISK_ID | SIZE_MAX (bytes) | SIZE_USED (bytes) | PATH_NAME |
1 | 204,010,946,560 | 155,508,015,104 | /oraerp1/oradata/ |
2 | 204,010,946,560 | 153,190,662,144 | /oraerp2/oradata/ |
3 | 204,010,946,560 | 151,382,917,120 | /oraerp3/oradata/ |
4 | 204,010,946,560 | 164,322,344,960 | /oraerp4/oradata/ |
... | ... | ... | ... |
98 | 204,010,946,560 | 151,424,925,696 | /oraerp98/oradata/ |
99 | 204,010,946,560 | 151,666,032,640 | /oraerp99/oradata/ |
100 | 204,010,946,560 | 152,240,717,824 | /oraerp100/oradata/ |
So, for instance, on "/oraerp3/oradata/" 141GB of the total 200GB will be used.
Querying DFILES (SELECT DISK_ID, NAME, NAME_NEW, READS FROM HELPDESK.DFILES ORDER BY DISK_ID, READS DESC;), we see:
DISK_ID | NAME | NAME_NEW | READS |
1 | /oracle89/oradata/ar_payment_schedules_all_011.dbf | /oraerp1/oradata/ar_payment_schedules_all_011.dbf | 2,235,050,034 |
1 | /oracle36/oradata/xxra_interfacex05.dbf | /oraerp1/oradata/xxra_interfacex05.dbf | 26,396,759 |
1 | /oracle8/oradata/user_data18.dbf | /oraerp1/oradata/user_data18.dbf | 20,936,923 |
1 | /oracle87/oradata/user_index81.dbf | /oraerp1/oradata/user_index81.dbf | 19,653,078 |
1 | /oracle78/oradata/ar_receivable_applications_all_007.dbf | /oraerp1/oradata/ar_receivable_applications_all_007.dbf | 9,722,306 |
1 | /oracle92/oradata/ax_sle_lines_078.dbf | /oraerp1/oradata/ax_sle_lines_078.dbf | 7,687,651 |
1 | /oracle78/oradata/czx26.dbf | /oraerp1/oradata/czx26.dbf | 2,474,694 |
... | ... | ... | ... |
Observe how the datafiles are spread across the filesystems according to physical reads.
CAUTION: If you have 2 or more datafiles with the same name (e.g. "user_data18.dbf") in different old filesystems, you must check that they will not be placed in the same new filesystem.
This procedure does not guarantee that no such collisions will occur.
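Since NAME_NEW is simply the new path concatenated with the original filename, such collisions show up as duplicate NAME_NEW values and are easy to detect:
SELECT NAME_NEW, COUNT (*)
FROM HELPDESK.DFILES
GROUP BY NAME_NEW
HAVING COUNT (*) > 1; --ANY ROW RETURNED IS A FILENAME COLLISION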