Monday, July 30, 2012

Script to check if AWR snapshot collection has stopped

In one database, snapshot collection stopped after the archive log destination got full.
There was no obvious way to know this had happened. Only in the MMON trace file did I find the following message, for which there is no reference in Metalink:
Unable to schedule a MMON slave at: Auto Flush Main 1
  Slave action has been temporarily suspended
    - Slave action had prior policy violations.
  Unknown return code: 101

So I had two choices: either restart the database or kill the MMON and MMNL processes.
I chose the latter.
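To locate the right processes, you can first get their OS process IDs from the database; a quick sketch (the PNAME column of V$PROCESS exists in 11g, and the actual kill is done at OS level against the SPID values returned):
SELECT PNAME, SPID
  FROM V$PROCESS
 WHERE PNAME IN ('MMON', 'MMNL');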
When they restart, you will get these lines in your alert log:
Restarting dead background process MMON
Restarting dead background process MMNL
Mon Jul 30 08:18:24 2012
MMON started with pid=18, OS id=10223826 
Mon Jul 30 08:18:24 2012
MMNL started with pid=19, OS id=15204612 

Now snapshot collection is restored.

If you compare the time that has passed since the last snapshot was collected with the interval you have set for snapshot collection, you can find out whether collection has halted.
For instance, if your interval is set to 1 hour (the default) and the last snapshot was collected more than 1 hour ago, you may assume something is wrong.
The following script returns 0 if [time since last snapshot] < [snapshot interval], meaning there is no problem, and 1 otherwise.
SELECT (CASE
           WHEN NUMTODSINTERVAL (SYSDATE - MAX (CAST (END_INTERVAL_TIME AS DATE)), 'DAY') < SNAP_INTERVAL
           THEN 0
           ELSE 1
        END)
          "RESULT"
  FROM DBA_HIST_SNAPSHOT, DBA_HIST_WR_CONTROL
GROUP BY SNAP_INTERVAL;
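You can also check the configured interval directly and, if needed, change it with DBMS_WORKLOAD_REPOSITORY; a minimal sketch (the 60-minute interval and 8-day retention below are just example values):
SELECT SNAP_INTERVAL, RETENTION FROM DBA_HIST_WR_CONTROL;

BEGIN
   DBMS_WORKLOAD_REPOSITORY.MODIFY_SNAPSHOT_SETTINGS (interval  => 60,       -- minutes
                                                      retention => 11520);   -- minutes (8 days)
END;
/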

If you want to create a database user-defined metric to monitor this, you should use a function, because the CASE statement confuses OEM and the query gets rejected.
First, connect as SYSDBA and grant SELECT on DBA_HIST_SNAPSHOT and DBA_HIST_WR_CONTROL to SYSTEM.
The function is:
CREATE OR REPLACE FUNCTION SYSTEM.CHECKAWR
   RETURN NUMBER
IS
   V_RESULT   NUMBER;
BEGIN
   SELECT (CASE
              WHEN NUMTODSINTERVAL (SYSDATE - MAX (CAST (END_INTERVAL_TIME AS DATE)), 'DAY') < SNAP_INTERVAL
              THEN 0
              ELSE 1
           END)
     INTO V_RESULT
     FROM DBA_HIST_SNAPSHOT, DBA_HIST_WR_CONTROL
   GROUP BY SNAP_INTERVAL;

   RETURN V_RESULT;
END;
/

Now, in the "SQL Query" box, put:
SELECT SYSTEM.CHECKAWR FROM DUAL; 

Tuesday, July 24, 2012

RAC: Log switch frequency script

The following script reports how much time passed between log switches in a given time period.
Although it is written to provide per-instance information in RAC, it also works on single instances.
SELECT C.INSTANCE,
         C.THREAD#,
         B.SEQUENCE# "START SEQUENCE",
         TO_CHAR (B.FIRST_TIME, 'DD-MM-YYYY HH24:MI:SS') "START TIME",
         A.SEQUENCE# "END SEQUENCE",
         TO_CHAR (A.FIRST_TIME, 'DD-MM-YYYY HH24:MI:SS') "END TIME",
         TO_CHAR (
            TRUNC (SYSDATE)
            + NUMTODSINTERVAL ( (A.FIRST_TIME - B.FIRST_TIME) * 86400,
                               'SECOND'),
            'HH24:MI:SS')
            DURATION
    FROM V$LOG_HISTORY A, V$LOG_HISTORY B, V$THREAD C
   WHERE     A.SEQUENCE# = B.SEQUENCE# + 1
         AND A.THREAD# = C.THREAD#
         AND B.THREAD# = C.THREAD#
         AND A.FIRST_TIME BETWEEN TO_DATE ('23-07-2012 00:00:00',
                                           'DD-MM-YYYY HH24:MI:SS')
                              AND TO_DATE ('24-07-2012 00:00:00',
                                           'DD-MM-YYYY HH24:MI:SS')
ORDER BY 4;

Let's try it for a day, e.g. 23/07/12:   
INSTANCE  THREAD#  START SEQUENCE  START TIME          END SEQUENCE  END TIME            DURATION
OTE1      1        436571          22-07-12 23:59:44   436572        23-07-12 00:08:12   00:08:27
OTE2      2        402951          22-07-12 23:59:46   402952        23-07-12 00:15:31   00:15:45
OTE1      1        436572          23-07-12 00:08:12   436573        23-07-12 00:11:48   00:03:36
.....................
OTE2      2        403315          23-07-12 23:57:51   403316        23-07-12 23:59:42   00:01:51
OTE1      1        436873          23-07-12 23:58:15   436874        23-07-12 23:59:10   00:00:54
OTE1      1        436874          23-07-12 23:59:10   436875        23-07-12 23:59:40   00:00:29

ORA-13767 ORA-13780 creating SQL Tuning Task in 11.2.0.3


Tried to create a SQL Tuning Task from Grid Control 11g in an 11.2.0.3 database and got an ORA-13767 error.
Tried to create it by running DBMS_SQLTUNE.CREATE_TUNING_TASK directly on the target database and got an ORA-13780:
SQL> SET SERVEROUTPUT ON
SQL>
SQL> DECLARE
2 L_SQL_TUNE_TASK_ID VARCHAR2(100);
3 BEGIN
4 L_SQL_TUNE_TASK_ID := DBMS_SQLTUNE.CREATE_TUNING_TASK (
5 SQL_ID => 'fcs0af97qvuft',
6 SCOPE => DBMS_SQLTUNE.SCOPE_COMPREHENSIVE,
7 TIME_LIMIT => 600,
8 TASK_NAME => 'fcs0af97qvuft_task',
9 DESCRIPTION => 'Tuning task for fcs0af97qvuft');
10 DBMS_OUTPUT.PUT_LINE('l_sql_tune_task_id: ' || L_SQL_TUNE_TASK_ID);
11 END;
12 /
DECLARE
*
ERROR AT LINE 1:
ORA-13780: SQL statement does not exist.
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 79
ORA-06512: at "SYS.PRVT_SQLADV_INFRA", line 125
ORA-06512: at "SYS.DBMS_SQLTUNE", line 655
ORA-06512: at line 4


Installed DB Control in the target database and got an ORA-13780 again.

If I specify two snapshots in the task that include an execution of the particular SQL statement, I get no error:
SQL> SET SERVEROUTPUT ON
SQL>
SQL> DECLARE
2 L_SQL_TUNE_TASK_ID VARCHAR2(100);
3 BEGIN
4 L_SQL_TUNE_TASK_ID := DBMS_SQLTUNE.CREATE_TUNING_TASK (
5 BEGIN_SNAP=>3935,
6 END_SNAP=>3937,
7 SQL_ID => 'fcs0af97qvuft',
8 SCOPE => DBMS_SQLTUNE.SCOPE_COMPREHENSIVE,
9 TIME_LIMIT => 600,
10 TASK_NAME => 'fcs0af97qvuft_task',
11 DESCRIPTION => 'Tuning task for fcs0af97qvuft');
12 DBMS_OUTPUT.PUT_LINE('l_sql_tune_task_id: ' || L_SQL_TUNE_TASK_ID);
13 END;
14 /
l_sql_tune_task_id: fcs0af97qvuft_task

PL/SQL procedure successfully completed.
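Once the task is created, you can execute it and read its report with the standard DBMS_SQLTUNE calls; a minimal sketch using the task name created above:
EXEC DBMS_SQLTUNE.EXECUTE_TUNING_TASK (task_name => 'fcs0af97qvuft_task');

SET LONG 1000000 LONGCHUNKSIZE 1000000 LINESIZE 200
SELECT DBMS_SQLTUNE.REPORT_TUNING_TASK ('fcs0af97qvuft_task') FROM DUAL;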

You can find which snapshots contain your statement by running the following query:
SELECT SNAP_ID
FROM DBA_HIST_SQLSTAT
WHERE SQL_ID='fcs0af97qvuft'
ORDER BY SNAP_ID;

The only relevant Oracle Note is 1445188.1, but it states that Bug 9957725 is fixed in DB Control 11.2.0.3 and 12g.

Oracle filed Bug 14342949 for this issue.

20/11/2012: From Oracle Development:
It looks like the error message is expected. Once the SQL is selected and you are on the SQL Details page : If the Statistics tab / panel shows "Snapshot Time : date/time to date/time : No data is available for this statement in this snapshot duration".

Selecting the "Schedule SQL Tuning Advisor" will run the dbms_sqltune.create_tuning_task with that begin_snap and end_snap, which correctly results in the ORA-13780 error. The screen will default to the current date/time, so customers need to select a snapshot on the statistics tab/pane with data prior to calling the "Schedule SQL Tuning Advisor".

It seems to be expected behaviour.

Tuesday, July 17, 2012

Use of alternate archive destination

If you want to specify a location to be an archive destination only in the event of a failure of another destination, you can make it an alternate destination.
In our scenario, we have a rather small archive destination, which we will fill to see how the alternate destination is used.
SQL> SELECT DEST_NAME,STATUS,DESTINATION
FROM V$ARCHIVE_DEST
WHERE DESTINATION IS NOT NULL;

DEST_NAME            STATUS   DESTINATION
----------------------------------------------
LOG_ARCHIVE_DEST_1   VALID    /oratest/archives

According to Oracle Note 369120.1, LOG_ARCHIVE_DEST_1 must have the NOREOPEN attribute set, for this to work. So:
SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_2='LOCATION=/optim2/oratest/TESTDB/arch' SCOPE=BOTH;

System altered.

SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_1='LOCATION=/oratest/archives NOREOPEN ALTERNATE=LOG_ARCHIVE_DEST_2' SCOPE=BOTH;

System altered.

SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2=ALTERNATE SCOPE=BOTH;
ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2=ALTERNATE SCOPE=BOTH
*
ERROR at line 1:
ORA-02097: parameter cannot be modified because specified value is invalid
ORA-16028: new LOG_ARCHIVE_DEST_STATE_2 causes less destinations than
LOG_ARCHIVE_MIN_SUCCEED_DEST requires


SQL> SELECT DEST_NAME,STATUS,DESTINATION
FROM V$ARCHIVE_DEST
WHERE DESTINATION IS NOT NULL;

DEST_NAME           STATUS       DESTINATION
------------------------------------------------------------
LOG_ARCHIVE_DEST_1  ALTERNATE    /oratest/archives
LOG_ARCHIVE_DEST_2  ALTERNATE    /optim2/oratest/TESTDB/arch



SQL> show parameter LOG_ARCHIVE_DEST_STATE_1

NAME                       TYPE     VALUE
-------------------------  -------  -------
log_archive_dest_state_1   string   ENABLE

I cannot understand why I get this error. I haven't changed LOG_ARCHIVE_DEST_1's state to ALTERNATE, and the parameter shows its state is ENABLE, not ALTERNATE.

I connect from another terminal and execute the same query:
SQL> SELECT DEST_NAME,STATUS,DESTINATION
FROM V$ARCHIVE_DEST
WHERE DESTINATION IS NOT NULL;

DEST_NAME           STATUS       DESTINATION
------------------------------------------------------------
LOG_ARCHIVE_DEST_1  VALID        /oratest/archives
LOG_ARCHIVE_DEST_2  ALTERNATE    /optim2/oratest/TESTDB/arch

!?!?!? I don't know why these queries' results are different. If anyone has the time to test this case or knows why this is happening, please let me know.
The database version is 11.2.0.3.

Anyway, now we have an alternate destination, and we will fill the default one.
This is the part of the alert log where the default archive log destination fills up and new archive logs start being placed in the alternate destination:
Thread 1 advanced to log sequence 144 (LGWR switch)
  Current log# 3 seq# 144 mem# 0: /optim2/oratest/TESTDB/redo/redo03.log
Tue Jul 17 12:21:10 2012
ARC3: Error 19502 Closing archive log file '/oratest/archives/1_143_787911367.arc'
ARCH: Archival stopped, error occurred. Will continue retrying
ORACLE Instance TESTDB - Archival Error                  <-- LOG_ARCHIVE_DEST_1 filled
ORA-16038: log 2 sequence# 143 cannot be archived
ORA-19502: write error on file "", block number  (block size=)
ORA-00312: online log 2 thread 1: '/optim2/oratest/TESTDB/redo/redo02.log'
Archived Log entry 139 added for thread 1 sequence 143 ID 0x9941a747 dest 2:
Archiver process freed from errors. No longer stopped    <-- LOG_ARCHIVE_DEST_2 starts being used

The default destination is DISABLED and the ALTERNATE has become VALID:
SQL> SELECT DEST_NAME,STATUS,DESTINATION
FROM V$ARCHIVE_DEST
WHERE DESTINATION IS NOT NULL;

DEST_NAME           STATUS     DESTINATION
----------------------------------------------------------
LOG_ARCHIVE_DEST_1  DISABLED   /oratest/archives
LOG_ARCHIVE_DEST_2  VALID      /optim2/oratest/TESTDB/arch

SQL> show parameter LOG_ARCHIVE_DEST_STATE

NAME                      TYPE    VALUE
------------------------  ------  -------
log_archive_dest_state_1  string  ENABLE
log_archive_dest_state_2  string  ALTERNATE

We back up and delete the archive logs using RMAN to free all space in both destinations.
The destinations' status does not change: all new archive logs are still placed in LOG_ARCHIVE_DEST_2.
So, to go back to previous state, we have to reconfigure the destinations:
SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_1=ENABLE SCOPE=BOTH;

System altered.

SQL> SELECT DEST_NAME,STATUS,DESTINATION
FROM V$ARCHIVE_DEST
WHERE DESTINATION IS NOT NULL;

DEST_NAME           STATUS   DESTINATION
--------------------------------------------------------
LOG_ARCHIVE_DEST_1  VALID    /oratest/archives
LOG_ARCHIVE_DEST_2  VALID    /optim2/oratest/TESTDB/arch

In this state, archive logs will be written to BOTH destinations, so:
SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2=ALTERNATE SCOPE=BOTH;

System altered.

SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_1='LOCATION=/oratest/archives NOREOPEN ALTERNATE=LOG_ARCHIVE_DEST_2' SCOPE=BOTH;

System altered.

SQL> SELECT DEST_NAME,STATUS,DESTINATION
FROM V$ARCHIVE_DEST
WHERE DESTINATION IS NOT NULL;

DEST_NAME           STATUS     DESTINATION
----------------------------------------------------------
LOG_ARCHIVE_DEST_1  ALTERNATE  /oratest/archives
LOG_ARCHIVE_DEST_2  ALTERNATE  /optim2/oratest/TESTDB/arch

AGAIN, the V$ARCHIVE_DEST output is not correct. We can't have two ALTERNATE destinations and no VALID one.
From the second session:
SELECT DEST_NAME,STATUS,DESTINATION
FROM V$ARCHIVE_DEST
WHERE DESTINATION IS NOT NULL;

DEST_NAME           STATUS     DESTINATION
-----------------------------------------------------------
LOG_ARCHIVE_DEST_1  VALID      /oratest/archives
LOG_ARCHIVE_DEST_2  ALTERNATE  /optim2/oratest/TESTDB/arch

This is the correct configuration. Now, new archive logs will be placed in LOG_ARCHIVE_DEST_1 and not in LOG_ARCHIVE_DEST_2.
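As an extra cross-check when V$ARCHIVE_DEST looks suspicious, you can also query V$ARCHIVE_DEST_STATUS (preferably from a fresh session); a small sketch:
SELECT DEST_ID, STATUS, DESTINATION
  FROM V$ARCHIVE_DEST_STATUS
 WHERE STATUS <> 'INACTIVE';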

23/07/12: I opened an SR for this incident and filed Bug 14360414.
25/09/12: Oracle closed my SR, since the bug's impact is not high, but kept the bug open with status 11 - Code Bug (Response/Resolution).

Monday, July 16, 2012

Tablespace growth and usage monitoring

We want to create a mechanism to monitor the growth of our tablespaces and how much space is used in them. We will use the DBA_TABLESPACE_USAGE_METRICS view and store its output in a table called HELPDESK.TABLESPACE_STATS.

If you use autoextensible datafiles, note that, according to Oracle Note 455715.1:
Tablespace_size in DBA_TABLESPACE_USAGE_METRICS takes the maximum file size for autoextensible tablespace which corresponds to maxblocks in dba_data_files.
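If you want to see where that number comes from, you can compare the view with DBA_DATA_FILES yourself; a rough sketch (MAXBLOCKS for autoextensible files, BLOCKS otherwise; UNDO and TEMP tablespaces may not match exactly):
SELECT M.TABLESPACE_NAME,
       M.TABLESPACE_SIZE METRICS_BLOCKS,
       SUM (DECODE (D.AUTOEXTENSIBLE, 'YES', D.MAXBLOCKS, D.BLOCKS)) DATAFILE_BLOCKS
  FROM DBA_TABLESPACE_USAGE_METRICS M, DBA_DATA_FILES D
 WHERE M.TABLESPACE_NAME = D.TABLESPACE_NAME
GROUP BY M.TABLESPACE_NAME, M.TABLESPACE_SIZE;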
We create HELPDESK.TABLESPACE_STATS and its indexes:
CREATE TABLE HELPDESK.TABLESPACE_STATS
(
  TABLESPACE_NAME  VARCHAR2(30 BYTE),
  TABLESPACE_SIZE  NUMBER,
  USED_SPACE       NUMBER,
  USED_PERCENT     NUMBER,
  COLLECTION_DATE  DATE
)
TABLESPACE USERS
PCTUSED    0
PCTFREE    10
INITRANS   1
MAXTRANS   255
STORAGE    (
            INITIAL          1M
            NEXT             1M
            MINEXTENTS       1
            MAXEXTENTS       UNLIMITED
            PCTINCREASE      0
            BUFFER_POOL      DEFAULT
           )
LOGGING 
NOCOMPRESS 
NOCACHE
NOPARALLEL
MONITORING;


CREATE INDEX HELPDESK.DATE_IDX ON HELPDESK.TABLESPACE_STATS
(COLLECTION_DATE)
LOGGING
TABLESPACE USERS
PCTFREE    10
INITRANS   2
MAXTRANS   255
STORAGE    (
            INITIAL          1M
            NEXT             1M
            MINEXTENTS       1
            MAXEXTENTS       UNLIMITED
            PCTINCREASE      0
            BUFFER_POOL      DEFAULT
           )
NOPARALLEL;


CREATE INDEX HELPDESK.NAME_IDX ON HELPDESK.TABLESPACE_STATS
(TABLESPACE_NAME)
LOGGING
TABLESPACE USERS
PCTFREE    10
INITRANS   2
MAXTRANS   255
STORAGE    (
            INITIAL          1M
            NEXT             1M
            MINEXTENTS       1
            MAXEXTENTS       UNLIMITED
            PCTINCREASE      0
            BUFFER_POOL      DEFAULT
           )
NOPARALLEL;

Now, schedule a job to run the following INSERT. I chose to run it once a week.
INSERT INTO HELPDESK.TABLESPACE_STATS (TABLESPACE_NAME,
                                       USED_SPACE,
                                       TABLESPACE_SIZE,
                                       USED_PERCENT,
                                       COLLECTION_DATE)
   SELECT A.TABLESPACE_NAME,
          A.USED_SPACE * B.BLOCK_SIZE,
          A.TABLESPACE_SIZE * B.BLOCK_SIZE,
          A.USED_PERCENT,
          SYSDATE
     FROM DBA_TABLESPACE_USAGE_METRICS A, DBA_TABLESPACES B
    WHERE A.TABLESPACE_NAME = B.TABLESPACE_NAME;
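One way to schedule it is with DBMS_SCHEDULER; a minimal sketch, assuming the job is created in the schema of a user that can read the DBA views and insert into HELPDESK.TABLESPACE_STATS (the job name and the Monday 07:00 schedule are arbitrary):
BEGIN
   DBMS_SCHEDULER.CREATE_JOB (
      job_name        => 'COLLECT_TABLESPACE_STATS',
      job_type        => 'PLSQL_BLOCK',
      job_action      => 'BEGIN
                             INSERT INTO HELPDESK.TABLESPACE_STATS (TABLESPACE_NAME,
                                                                    USED_SPACE,
                                                                    TABLESPACE_SIZE,
                                                                    USED_PERCENT,
                                                                    COLLECTION_DATE)
                                SELECT A.TABLESPACE_NAME,
                                       A.USED_SPACE * B.BLOCK_SIZE,
                                       A.TABLESPACE_SIZE * B.BLOCK_SIZE,
                                       A.USED_PERCENT,
                                       SYSDATE
                                  FROM DBA_TABLESPACE_USAGE_METRICS A, DBA_TABLESPACES B
                                 WHERE A.TABLESPACE_NAME = B.TABLESPACE_NAME;
                             COMMIT;
                          END;',
      start_date      => SYSTIMESTAMP,
      repeat_interval => 'FREQ=WEEKLY; BYDAY=MON; BYHOUR=7',
      enabled         => TRUE);
END;
/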

Let's see a scenario where we have collected 3 weeks of data:
  SELECT TRUNC (COLLECTION_DATE) "DATE",
         TABLESPACE_NAME,
         ROUND (TABLESPACE_SIZE / 1024 / 1024, 2) "SIZE MB",
         ROUND (USED_SPACE / 1024 / 1024, 2) "USED SPACE MB",
         ROUND (USED_PERCENT, 2) "USED %"
    FROM HELPDESK.TABLESPACE_STATS
ORDER BY 1, 2;
   
DATE        TABLESPACE_NAME  SIZE MB  USED SPACE MB  USED %
02/07/2012  IDX                   30          20      66.67
02/07/2012  SYSAUX               600         418.56   69.76
02/07/2012  SYSTEM               700         362.25   51.75
02/07/2012  TBL                   40          22      55
02/07/2012  TEMP                  20           0       0
02/07/2012  UNDOTBS1             245           3.25    1.33
02/07/2012  USERS                  5           4      80
09/07/2012  IDX                   30          25      83.33
09/07/2012  SYSAUX               600         418.56   69.76
09/07/2012  SYSTEM               700         362.25   51.75
09/07/2012  TBL                   45          26      57.78
09/07/2012  TEMP                  20           0       0
09/07/2012  UNDOTBS1             245           3.25    1.33
09/07/2012  USERS                  5           4.8    96
16/07/2012  IDX                   35          28.8    82.29
16/07/2012  SYSAUX               600         418.56   69.76
16/07/2012  SYSTEM               700         362.25   51.75
16/07/2012  TBL                   45          35      77.78
16/07/2012  TEMP                  20           0       0
16/07/2012  UNDOTBS1             245           3.25    1.33
16/07/2012  USERS                  7           5.5    78.57

If we want to see the differences between two dates:
  SELECT LAST.TABLESPACE_NAME,
         ROUND ( (LAST.TABLESPACE_SIZE - FIRST.TABLESPACE_SIZE) / 1024 / 1024,
                2)
            "SIZE GROWTH MB",
         ROUND ( (LAST.USED_SPACE - FIRST.USED_SPACE) / 1024 / 1024, 2)
            "USED SPACE GROWTH MB",
         ROUND (LAST.USED_PERCENT - FIRST.USED_PERCENT, 2) "USED % GROWTH"
    FROM (SELECT TABLESPACE_NAME,
                 TABLESPACE_SIZE,
                 USED_SPACE,
                 USED_PERCENT
            FROM HELPDESK.TABLESPACE_STATS
           WHERE TRUNC (COLLECTION_DATE) = TO_DATE ('16-07-2012', 'DD-MM-YYYY')) LAST,
         (SELECT TABLESPACE_NAME,
                 TABLESPACE_SIZE,
                 USED_SPACE,
                 USED_PERCENT
            FROM HELPDESK.TABLESPACE_STATS
           WHERE TRUNC (COLLECTION_DATE) = TO_DATE ('02-07-2012', 'DD-MM-YYYY')) FIRST
   WHERE LAST.TABLESPACE_NAME = FIRST.TABLESPACE_NAME
ORDER BY 1;

TABLESPACE_NAME  SIZE GROWTH MB  USED SPACE GROWTH MB  USED % GROWTH
IDX                           5                   8.8          15.62
SYSAUX                        0                   0              0
SYSTEM                        0                   0              0
TBL                           5                  13             22.78
TEMP                          0                   0              0
UNDOTBS1                      0                   0              0
USERS                         2                   1.5           -1.43

So, for instance, in 2 weeks, tablespace TBL was given 5 MB of extra space, but 13 MB more were used, thus increasing its used percentage by 22.78%.
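If you want the growth between every pair of consecutive collections rather than two fixed dates, an analytic function can do it; a small sketch using LAG on the same table:
  SELECT TABLESPACE_NAME,
         TRUNC (COLLECTION_DATE) "DATE",
         ROUND ( (USED_SPACE
                  - LAG (USED_SPACE) OVER (PARTITION BY TABLESPACE_NAME
                                           ORDER BY COLLECTION_DATE))
                / 1024 / 1024, 2)
            "USED SPACE GROWTH MB"
    FROM HELPDESK.TABLESPACE_STATS
ORDER BY TABLESPACE_NAME, COLLECTION_DATE;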

Wednesday, July 11, 2012

Concurrent processes information script

The following query will display useful information about your eBS' concurrent processes:
  SELECT P.CONCURRENT_PROCESS_ID,
         QU.USER_CONCURRENT_QUEUE_NAME,
         QU.CONCURRENT_QUEUE_NAME,
         DECODE (P.PROCESS_STATUS_CODE,
                 'A', 'Active',
                 'G', 'Awaiting Discovery',
                 'C', 'Connecting',
                 'S', 'Deactivated',
                 'D', 'Deactivating',
                 'Z', 'Initializing',
                 'M', 'Migrating',
                 'R', 'Running',
                 'P', 'Suspended',
                 'K', 'Terminated',
                 'T', 'Terminating',
                 'Unknown')
            "STATUS",
         P.NODE_NAME,
         P.OS_PROCESS_ID,
         P.DB_NAME,
         P.DB_INSTANCE,
         SS.SID,
         P.ORACLE_PROCESS_ID,
         P.PROCESS_START_DATE,
         P.MANAGER_TYPE,
         P.LOGFILE_NAME
    FROM APPLSYS.FND_CONCURRENT_PROCESSES P,
         APPS.FND_CONCURRENT_QUEUES_VL QU,
         GV$SESSION SS
   WHERE     P.CONCURRENT_QUEUE_ID = QU.CONCURRENT_QUEUE_ID
         AND P.SESSION_ID = SS.AUDSID(+)
         AND P.INSTANCE_NUMBER = SS.INST_ID(+);

Tuesday, July 10, 2012

Concurrent programs maximum, minimum and average execution times report

The following script will give you the maximum, minimum and average execution times of each concurrent program that has been executed in your eBS. It measures only the successfully completed concurrent requests.
SELECT P.CONCURRENT_PROGRAM_NAME,
         PT.USER_CONCURRENT_PROGRAM_NAME,
         COUNT (*),
            TRUNC (MAX (F.ACTUAL_COMPLETION_DATE - F.ACTUAL_START_DATE))
         || ' Days'
         || ' + '
         || TO_CHAR (
               TRUNC (SYSDATE)
               + NUMTODSINTERVAL (
                    MAX ( (F.ACTUAL_COMPLETION_DATE - F.ACTUAL_START_DATE))
                    * 86400,
                    'second'),
               'HH24:MI:SS')
            "MAXIMUM",
            TRUNC (MIN (F.ACTUAL_COMPLETION_DATE - F.ACTUAL_START_DATE))
         || ' Days'
         || ' + '
         || TO_CHAR (
               TRUNC (SYSDATE)
               + NUMTODSINTERVAL (
                    MIN ( (F.ACTUAL_COMPLETION_DATE - F.ACTUAL_START_DATE))
                    * 86400,
                    'second'),
               'HH24:MI:SS')
            "MINIMUM",
            TRUNC (AVG (F.ACTUAL_COMPLETION_DATE - F.ACTUAL_START_DATE))
         || ' Days'
         || ' + '
         || TO_CHAR (
               TRUNC (SYSDATE)
               + NUMTODSINTERVAL (
                    AVG ( (F.ACTUAL_COMPLETION_DATE - F.ACTUAL_START_DATE))
                    * 86400,
                    'second'),
               'HH24:MI:SS')
            "AVERAGE"
    FROM APPLSYS.FND_CONCURRENT_REQUESTS F,
         APPLSYS.FND_CONCURRENT_PROGRAMS P,
         APPLSYS.FND_USER A,
         APPLSYS.FND_CONCURRENT_PROGRAMS_TL PT
   WHERE     PHASE_CODE = 'C'
         AND STATUS_CODE = 'C'
         AND F.CONCURRENT_PROGRAM_ID = P.CONCURRENT_PROGRAM_ID
         AND P.CONCURRENT_PROGRAM_ID = PT.CONCURRENT_PROGRAM_ID
         AND A.USER_ID(+) = F.REQUESTED_BY
GROUP BY P.CONCURRENT_PROGRAM_NAME, PT.USER_CONCURRENT_PROGRAM_NAME;

CONCURRENT_PROGRAM_NAME    USER_CONCURRENT_PROGRAM_NAME                     COUNT (*)  MAXIMUM            MINIMUM            AVERAGE
XXI_UPD_CONF_ORDER_STATUS  XXI : Update Order Status for Confirmed Orders       33076  0 Days + 00:10:51  0 Days + 00:00:04  0 Days + 00:00:11
FNDWFBG                    Workflow Background Process                          48905  8 Days + 13:34:25  0 Days + 00:00:00  0 Days + 00:02:42
FAPROJ                     Depreciation Projection                                 37  0 Days + 03:12:10  0 Days + 00:00:35  0 Days + 00:45:31
XXIEX_RUN_STRATEGIES       Request Set OTE Run Strategies                         108  0 Days + 19:42:55  0 Days + 07:17:49  0 Days + 12:51:35
..................

Database frozen due to full archive log destination

The database froze because the archive log destination was full, and you can only connect as SYSDBA.
You can add one more archive log destination to a filesystem with enough free space:
SQL> alter system set log_archive_dest_1='LOCATION=/oratest/TESTDB/arch' scope=memory;
System altered.

SQL> archive log list;
Database log mode           Archive Mode
Automatic archival           Enabled
Archive destination           USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence     33
Next log sequence to archive   35
Current log sequence           35

The database will try to place logs in both destinations, but if one of them is full, archiving to it will fail and the log will be written only to the other destination. This is not a problem.
This behaviour depends on the LOG_ARCHIVE_MIN_SUCCEED_DEST parameter (default 1), which defines the minimum number of destinations that must succeed for an online logfile to become available for reuse.
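You can check the current value, and how each destination is bound (MANDATORY or OPTIONAL), with a quick query:
SELECT NAME, VALUE
  FROM V$PARAMETER
 WHERE NAME = 'log_archive_min_succeed_dest';

SELECT DEST_NAME, BINDING, STATUS
  FROM V$ARCHIVE_DEST
 WHERE DESTINATION IS NOT NULL;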
Use RMAN to backup and delete your archive logs and when the incident is resolved, you may undo the change:
SQL> alter system set log_archive_dest_1='' scope=memory;
System altered.

SQL> archive log list;
Database log mode           Archive Mode
Automatic archival           Enabled
Archive destination           USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence     33
Next log sequence to archive   35
Current log sequence           35

Another way is to manually move some archive logs to a new destination.
Let's say our archive log destination is full.
Our current archive logs are:
./archivelog/2012_07_10:
total 20
-rw-r----- 1 oratest oinstall 1024 Jul 10 08:34 o1_mf_1_30_7zqhpr63_.arc
-rw-r----- 1 oratest oinstall 1024 Jul 10 08:34 o1_mf_1_31_7zqhps7b_.arc
-rw-r----- 1 oratest oinstall 1024 Jul 10 08:34 o1_mf_1_32_7zqhpt0t_.arc
-rw-r----- 1 oratest oinstall 1024 Jul 10 08:34 o1_mf_1_33_7zqhptqk_.arc
-rw-r----- 1 oratest oinstall 1536 Jul 10 08:34 o1_mf_1_34_7zqhpvkw_.arc

We may query V$ARCHIVED_LOG:
SELECT NAME, COMPLETION_TIME, DELETED, STATUS 
FROM V$ARCHIVED_LOG
ORDER BY COMPLETION_TIME;

NAME                                                                                      COMPLETION_TIME     DELETED  STATUS
............
                                                                                          10/7/2012 08:33:14  YES      D
/oratest/TESTDB/fast_recovery_area/TESTDB/archivelog/2012_07_10/o1_mf_1_30_7zqhpr63_.arc  10/7/2012 08:34:16  NO       A
/oratest/TESTDB/fast_recovery_area/TESTDB/archivelog/2012_07_10/o1_mf_1_31_7zqhps7b_.arc  10/7/2012 08:34:17  NO       A
/oratest/TESTDB/fast_recovery_area/TESTDB/archivelog/2012_07_10/o1_mf_1_32_7zqhpt0t_.arc  10/7/2012 08:34:18  NO       A
/oratest/TESTDB/fast_recovery_area/TESTDB/archivelog/2012_07_10/o1_mf_1_33_7zqhptqk_.arc  10/7/2012 08:34:18  NO       A
/oratest/TESTDB/fast_recovery_area/TESTDB/archivelog/2012_07_10/o1_mf_1_34_7zqhpvkw_.arc  10/7/2012 08:34:19  NO       A

Rows with NAME=NULL, DELETED=YES, STATUS=D [Deleted] are archive logs backed up and deleted by RMAN.
Those with a name, DELETED=NO, STATUS=A [Available] are still in your archive log destination.
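So, a quick way to list the archived logs that should still physically exist in the destination (a small query based on the columns just described):
SELECT NAME, COMPLETION_TIME
  FROM V$ARCHIVED_LOG
 WHERE DELETED = 'NO' AND STATUS = 'A'
ORDER BY COMPLETION_TIME;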

We move one archive log to a new filesystem:
[oratest@testsrv 2012_07_10]$ mv o1_mf_1_30_7zqhpr63_.arc /oratest/TESTDB/arch/
[oratest@testsrv 2012_07_10]$ ll
total 16
-rw-r----- 1 oratest oinstall 1024 Jul 10 08:34 o1_mf_1_31_7zqhps7b_.arc
-rw-r----- 1 oratest oinstall 1024 Jul 10 08:34 o1_mf_1_32_7zqhpt0t_.arc
-rw-r----- 1 oratest oinstall 1024 Jul 10 08:34 o1_mf_1_33_7zqhptqk_.arc
-rw-r----- 1 oratest oinstall 1536 Jul 10 08:34 o1_mf_1_34_7zqhpvkw_.arc
[oratest@testsrv 2012_07_10]$ ll /oratest/TESTDB/arch/
total 4
-rw-r----- 1 oratest oinstall 1024 Jul 10 08:34 o1_mf_1_30_7zqhpr63_.arc

We try to back up our archive logs using RMAN:
RMAN> RUN {
ALLOCATE CHANNEL ch01 TYPE DISK;
ALLOCATE CHANNEL ch02 TYPE DISK;
ALLOCATE CHANNEL ch03 TYPE DISK;
ALLOCATE CHANNEL ch04 TYPE DISK;
BACKUP INCREMENTAL LEVEL=0
    ARCHIVELOG ALL DELETE INPUT;
}

allocated channel: ch01
channel ch01: SID=50 device type=DISK

allocated channel: ch02
channel ch02: SID=69 device type=DISK

allocated channel: ch03
channel ch03: SID=83 device type=DISK

allocated channel: ch04
channel ch04: SID=100 device type=DISK

Starting backup at 10-JUL-12
current log archived
released channel: ch01
released channel: ch02
released channel: ch03
released channel: ch04
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of backup command at 07/10/2012 09:42:30
RMAN-06059: expected archived log not found, loss of archived log compromises recoverability
ORA-19625: error identifying file /oratest/TESTDB/fast_recovery_area/TESTDB/archivelog/2012_07_10/o1_mf_1_30_7zqhpr63_.arc
ORA-27037: unable to obtain file status
Linux-x86_64 Error: 2: No such file or directory
Additional information: 3

We got an error and nothing was backed up or deleted, because the moved file is missing.
We need to use the RMAN CROSSCHECK command:
RMAN> CROSSCHECK ARCHIVELOG ALL;

allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=50 device type=DISK
validation failed for archived log
archived log file name=/oratest/TESTDB/fast_recovery_area/TESTDB/archivelog/2012_07_10/o1_mf_1_30_7zqhpr63_.arc RECID=15 STAMP=788258056
validation succeeded for archived log
archived log file name=/oratest/TESTDB/fast_recovery_area/TESTDB/archivelog/2012_07_10/o1_mf_1_31_7zqhps7b_.arc RECID=16 STAMP=788258057
validation succeeded for archived log
archived log file name=/oratest/TESTDB/fast_recovery_area/TESTDB/archivelog/2012_07_10/o1_mf_1_32_7zqhpt0t_.arc RECID=17 STAMP=788258058
validation succeeded for archived log
archived log file name=/oratest/TESTDB/fast_recovery_area/TESTDB/archivelog/2012_07_10/o1_mf_1_33_7zqhptqk_.arc RECID=18 STAMP=788258058
validation succeeded for archived log
archived log file name=/oratest/TESTDB/fast_recovery_area/TESTDB/archivelog/2012_07_10/o1_mf_1_34_7zqhpvkw_.arc RECID=19 STAMP=788258059
validation succeeded for archived log
archived log file name=/oratest/TESTDB/fast_recovery_area/TESTDB/archivelog/2012_07_10/o1_mf_1_35_7zqmppbw_.arc RECID=20 STAMP=788262150
Crosschecked 6 objects

Querying V$ARCHIVED_LOG, we can see that the moved log's STATUS has changed to X [Expired]:
NAME                                                                                      COMPLETION_TIME     DELETED  STATUS
............
/oratest/TESTDB/fast_recovery_area/TESTDB/archivelog/2012_07_10/o1_mf_1_30_7zqhpr63_.arc  10/7/2012 08:34:16  NO       X
............

We retry the backup of the archive logs:
RMAN> RUN {
ALLOCATE CHANNEL ch01 TYPE DISK;
ALLOCATE CHANNEL ch02 TYPE DISK;
ALLOCATE CHANNEL ch03 TYPE DISK;
ALLOCATE CHANNEL ch04 TYPE DISK;
BACKUP INCREMENTAL LEVEL=0
    ARCHIVELOG ALL DELETE INPUT;
}

released channel: ORA_DISK_1
allocated channel: ch01
channel ch01: SID=50 device type=DISK

allocated channel: ch02
channel ch02: SID=69 device type=DISK

allocated channel: ch03
channel ch03: SID=83 device type=DISK

allocated channel: ch04
channel ch04: SID=100 device type=DISK

Starting backup at 10-JUL-12
current log archived
channel ch01: starting compressed archived log backup set
channel ch01: specifying archived log(s) in backup set
input archived log thread=1 sequence=31 RECID=16 STAMP=788258057
input archived log thread=1 sequence=32 RECID=17 STAMP=788258058
channel ch01: starting piece 1 at 10-JUL-12
channel ch02: starting compressed archived log backup set
channel ch02: specifying archived log(s) in backup set
input archived log thread=1 sequence=33 RECID=18 STAMP=788258058
input archived log thread=1 sequence=34 RECID=19 STAMP=788258059
channel ch02: starting piece 1 at 10-JUL-12
channel ch03: starting compressed archived log backup set
channel ch03: specifying archived log(s) in backup set
input archived log thread=1 sequence=35 RECID=20 STAMP=788262150
channel ch03: starting piece 1 at 10-JUL-12
channel ch04: starting compressed archived log backup set
channel ch04: specifying archived log(s) in backup set
input archived log thread=1 sequence=36 RECID=21 STAMP=788262734
channel ch04: starting piece 1 at 10-JUL-12
channel ch01: finished piece 1 at 10-JUL-12
piece handle=/oratest/TESTDB/fast_recovery_area/TESTDB/backupset/2012_07_10/o1_mf_annnn_TAG20120710T095214_7zqn8ync_.bkp tag=TAG20120710T095214 comment=NONE
channel ch01: backup set complete, elapsed time: 00:00:00
channel ch01: deleting archived log(s)
archived log file name=/oratest/TESTDB/fast_recovery_area/TESTDB/archivelog/2012_07_10/o1_mf_1_31_7zqhps7b_.arc RECID=16 STAMP=788258057
archived log file name=/oratest/TESTDB/fast_recovery_area/TESTDB/archivelog/2012_07_10/o1_mf_1_32_7zqhpt0t_.arc RECID=17 STAMP=788258058
channel ch02: finished piece 1 at 10-JUL-12
piece handle=/oratest/TESTDB/fast_recovery_area/TESTDB/backupset/2012_07_10/o1_mf_annnn_TAG20120710T095214_7zqn8ynp_.bkp tag=TAG20120710T095214 comment=NONE
channel ch02: backup set complete, elapsed time: 00:00:00
channel ch02: deleting archived log(s)
archived log file name=/oratest/TESTDB/fast_recovery_area/TESTDB/archivelog/2012_07_10/o1_mf_1_33_7zqhptqk_.arc RECID=18 STAMP=788258058
archived log file name=/oratest/TESTDB/fast_recovery_area/TESTDB/archivelog/2012_07_10/o1_mf_1_34_7zqhpvkw_.arc RECID=19 STAMP=788258059
channel ch04: finished piece 1 at 10-JUL-12
piece handle=/oratest/TESTDB/fast_recovery_area/TESTDB/backupset/2012_07_10/o1_mf_annnn_TAG20120710T095214_7zqn8yor_.bkp tag=TAG20120710T095214 comment=NONE
channel ch04: backup set complete, elapsed time: 00:00:00
channel ch04: deleting archived log(s)
archived log file name=/oratest/TESTDB/fast_recovery_area/TESTDB/archivelog/2012_07_10/o1_mf_1_36_7zqn8yf3_.arc RECID=21 STAMP=788262734
channel ch03: finished piece 1 at 10-JUL-12
piece handle=/oratest/TESTDB/fast_recovery_area/TESTDB/backupset/2012_07_10/o1_mf_annnn_TAG20120710T095214_7zqn8yof_.bkp tag=TAG20120710T095214 comment=NONE
channel ch03: backup set complete, elapsed time: 00:00:00
channel ch03: deleting archived log(s)
archived log file name=/oratest/TESTDB/fast_recovery_area/TESTDB/archivelog/2012_07_10/o1_mf_1_35_7zqmppbw_.arc RECID=20 STAMP=788262150
Finished backup at 10-JUL-12
released channel: ch01
released channel: ch02
released channel: ch03
released channel: ch04

The archive log destination is now cleared:
/oratest/TESTDB/fast_recovery_area/TESTDB/archivelog/2012_07_10
[oratest@testsrv 2012_07_10]$ ll
total 0

All entries in V$ARCHIVED_LOG now have NAME=NULL, DELETED=YES, STATUS=D, except the log we moved.

Now we return that log to the archive log destination:
[oratest@testsrv 2012_07_10]$ mv /oratest/TESTDB/arch/o1_mf_1_30_7zqhpr63_.arc .
[oratest@testsrv 2012_07_10]$ ll
total 4
-rw-r----- 1 oratest oinstall 1024 Jul 10 08:34 o1_mf_1_30_7zqhpr63_.arc

We CROSSCHECK once more:
RMAN> CROSSCHECK ARCHIVELOG ALL;

allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=50 device type=DISK
validation succeeded for archived log
archived log file name=/oratest/TESTDB/fast_recovery_area/TESTDB/archivelog/2012_07_10/o1_mf_1_30_7zqhpr63_.arc RECID=15 STAMP=788258056
Crosschecked 1 objects

Querying V$ARCHIVED_LOG, we see that the log is available again:
NAME                                                                                      COMPLETION_TIME     DELETED  STATUS
............
/oratest/TESTDB/fast_recovery_area/TESTDB/archivelog/2012_07_10/o1_mf_1_30_7zqhpr63_.arc  10/7/2012 08:34:16  NO       A
............

We take one more backup, so that this log is backed up and deleted too:
RMAN> RUN {
ALLOCATE CHANNEL ch01 TYPE DISK;
ALLOCATE CHANNEL ch02 TYPE DISK;
ALLOCATE CHANNEL ch03 TYPE DISK;
ALLOCATE CHANNEL ch04 TYPE DISK;
BACKUP INCREMENTAL LEVEL=0
    ARCHIVELOG ALL DELETE INPUT;
}

released channel: ORA_DISK_1
allocated channel: ch01
channel ch01: SID=50 device type=DISK

allocated channel: ch02
channel ch02: SID=69 device type=DISK

allocated channel: ch03
channel ch03: SID=83 device type=DISK

allocated channel: ch04
channel ch04: SID=100 device type=DISK

Starting backup at 10-JUL-12
current log archived
channel ch01: starting compressed archived log backup set
channel ch01: specifying archived log(s) in backup set
input archived log thread=1 sequence=30 RECID=15 STAMP=788258056
channel ch01: starting piece 1 at 10-JUL-12
channel ch02: starting compressed archived log backup set
channel ch02: specifying archived log(s) in backup set
input archived log thread=1 sequence=37 RECID=22 STAMP=788263417
channel ch02: starting piece 1 at 10-JUL-12
channel ch01: finished piece 1 at 10-JUL-12
piece handle=/oratest/TESTDB/fast_recovery_area/TESTDB/backupset/2012_07_10/o1_mf_annnn_TAG20120710T100337_7zqny9l8_.bkp tag=TAG20120710T100337 comment=NONE
channel ch01: backup set complete, elapsed time: 00:00:01
channel ch01: deleting archived log(s)
archived log file name=/oratest/TESTDB/fast_recovery_area/TESTDB/archivelog/2012_07_10/o1_mf_1_30_7zqhpr63_.arc RECID=15 STAMP=788258056
channel ch02: finished piece 1 at 10-JUL-12
piece handle=/optim2/oratest/TESTDB/fast_recovery_area/TESTDB/backupset/2012_07_10/o1_mf_annnn_TAG20120710T100337_7zqny9lj_.bkp tag=TAG20120710T100337 comment=NONE
channel ch02: backup set complete, elapsed time: 00:00:01
channel ch02: deleting archived log(s)
archived log file name=/oratest/TESTDB/fast_recovery_area/TESTDB/archivelog/2012_07_10/o1_mf_1_37_7zqny9fo_.arc RECID=22 STAMP=788263417
Finished backup at 10-JUL-12
released channel: ch01
released channel: ch02
released channel: ch03
released channel: ch04

[oratest@testsrv 2012_07_10]$ pwd
/oratest/TESTDB/fast_recovery_area/TESTDB/archivelog/2012_07_10
[oratest@testsrv 2012_07_10]$ ll
total 0

Monday, July 9, 2012

Concurrent requests information scripts

The first script will report information about your concurrent requests: whether each one is scheduled, running or completed, which user submitted it, its request ID and its arguments.
For example, to find all runs of the Workflow Background Process program:
    SELECT F.REQUEST_ID,
         A.USER_NAME,
         DECODE (TO_CHAR (F.ACTUAL_START_DATE, 'DD/MM/YYYY HH24:MI:SS'),
                 NULL, 'PENDING',
                 TO_CHAR (F.ACTUAL_START_DATE, 'DD/MM/YYYY HH24:MI:SS'))
            DATE_STARTED,
         DECODE (
            DECODE (
               TRUNC (F.ACTUAL_COMPLETION_DATE - F.ACTUAL_START_DATE),
               0, NULL,
                  TRUNC (F.ACTUAL_COMPLETION_DATE - F.ACTUAL_START_DATE)
               || ' Days'
               || ' + ')
            || TO_CHAR (
                  TRUNC (SYSDATE)
                  + NUMTODSINTERVAL (
                       (F.ACTUAL_COMPLETION_DATE - F.ACTUAL_START_DATE) * 86400,
                       'second'),
                  'HH24:MI:SS'),
            ' Days + ', 'RUNNING',
            DECODE (
               TRUNC (F.ACTUAL_COMPLETION_DATE - F.ACTUAL_START_DATE),
               0, NULL,
                  TRUNC (F.ACTUAL_COMPLETION_DATE - F.ACTUAL_START_DATE)
               || ' Days'
               || ' + ')
            || TO_CHAR (
                  TRUNC (SYSDATE)
                  + NUMTODSINTERVAL (
                       (F.ACTUAL_COMPLETION_DATE - F.ACTUAL_START_DATE) * 86400,
                       'second'),
                  'HH24:MI:SS'))
            REQ_DURATION,
         F.COMPLETION_TEXT,
         P.CONCURRENT_PROGRAM_NAME CONC_PROGRAM,
         F.ARGUMENT_TEXT
    FROM APPLSYS.FND_CONCURRENT_REQUESTS F,
         APPLSYS.FND_USER A,
         APPLSYS.FND_CONCURRENT_PROGRAMS P,
         APPLSYS.FND_CONCURRENT_PROGRAMS_TL PT
   WHERE     A.USER_ID(+) = F.REQUESTED_BY
         AND F.CONCURRENT_PROGRAM_ID = P.CONCURRENT_PROGRAM_ID
         AND P.CONCURRENT_PROGRAM_ID = PT.CONCURRENT_PROGRAM_ID
         AND PT.USER_CONCURRENT_PROGRAM_NAME = 'Workflow Background Process'
         AND PT.LANGUAGE = 'US'
ORDER BY F.REQUESTED_START_DATE DESC;

REQUEST_ID  USER_NAME   DATE_STARTED         REQ_DURATION       COMPLETION_TEXT    CONC_PROGRAM  ARGUMENT_TEXT
39363160    COLL_SETUP  PENDING              RUNNING                               FNDWFBG       XXSTRYRE, , , N, Y, Y
39363194    COLL_SETUP  09/07/2012 12:00:03  00:00:09           Normal completion  FNDWFBG       XXSTRYRE, , , Y, N, N
39362136    SYSADMIN    09/07/2012 11:36:10  RUNNING                               FNDWFBG       OEOL, , , Y, N, N
.....................
38021590    SYSADMIN    09/05/2012 10:25:03  4 Days + 23:06:07                     FNDWFBG       OEOL, , , Y, N, N

If DATE_STARTED has a value of PENDING, the concurrent request has not started yet; in that case, ignore the RUNNING value of REQ_DURATION.

You may filter the result by using specific columns in the WHERE clause (an example query follows the column list below):

PHASE_CODE and STATUS_CODE of the APPLSYS.FND_CONCURRENT_REQUESTS table.
PHASE_CODE
VALUE  MEANING
C      Completed
I      Inactive
P      Pending
R      Running

STATUS_CODE
VALUE  MEANING
A      Waiting
B      Resuming
C      Normal
D      Cancelled
E      Error
F      Scheduled
G      Warning
H      On hold
I      Normal
M      No manager
Q      Standby
R      Normal
S      Suspended
T      Terminating
U      Disabled
W      Paused
X      Terminated
Z      Waiting

CONCURRENT_PROGRAM_NAME of the APPLSYS.FND_CONCURRENT_PROGRAMS table, if you want to use a program's short name (FNDWFBG in our case).
ACTUAL_START_DATE of APPLSYS.FND_CONCURRENT_REQUESTS, to limit the result by date.
USER_NAME of APPLSYS.FND_USER, to check runs of specific users.
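As an illustration, here is a hypothetical cut-down variation of the first script that keeps only the runs that completed with an error during the last 7 days (the extra predicates on PHASE_CODE, STATUS_CODE and ACTUAL_START_DATE are the only addition):
  SELECT F.REQUEST_ID,
         A.USER_NAME,
         P.CONCURRENT_PROGRAM_NAME,
         F.ACTUAL_START_DATE,
         F.COMPLETION_TEXT
    FROM APPLSYS.FND_CONCURRENT_REQUESTS F,
         APPLSYS.FND_USER A,
         APPLSYS.FND_CONCURRENT_PROGRAMS P
   WHERE     A.USER_ID(+) = F.REQUESTED_BY
         AND F.CONCURRENT_PROGRAM_ID = P.CONCURRENT_PROGRAM_ID
         AND F.PHASE_CODE = 'C'
         AND F.STATUS_CODE = 'E'
         AND F.ACTUAL_START_DATE >= SYSDATE - 7
ORDER BY F.ACTUAL_START_DATE DESC;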

The second script will report the running/pending concurrent requests: the concurrent manager each one is assigned to, the node it is running on, its process IDs, its log and output files, and database session information.
  SELECT
         REQ.REQUEST_ID "REQUEST ID",
         USR.USER_NAME "USER",
         QU.USER_CONCURRENT_QUEUE_NAME "CONC. MANAGER",
         REQ.REQUEST_DESCRIPTION "NAME",
         REQ.CONCURRENT_PROGRAM_NAME "SHORT NAME",
         REQ.ARGUMENT_TEXT "ARGUMENTS",
         DECODE (REQ.PHASE_CODE,
                 'R', 'RUNNING',
                 'P', 'PENDING',
                 'I', 'INACTIVE')
            "PHASE",
         REQ.REQUEST_DATE "DATE SUBMITTED",
         REQ.REQUESTED_START_DATE "REQUESTED START DATE",
         REQ.ACTUAL_START_DATE "DATE STARTED",
         REQ.LOGFILE_NODE_NAME "NODE",
         REQ.LOGFILE_NAME "LOG",
         REQ.OUTFILE_NAME "OUTPUT",
         REQ.OS_PROCESS_ID "NODE PROCESS",
         REQ.ORACLE_PROCESS_ID "DB PROCESS",
         SS.INST_ID "DB INSTANCE",
         SS.SID "DB SID",
         SS.STATUS "DB SESSION STATUS"
    FROM APPS.FND_CONCURRENT_WORKER_REQUESTS REQ,
         APPLSYS.FND_USER USR,
         GV$SESSION SS,
         APPS.FND_CONCURRENT_QUEUES_VL QU
   WHERE     USR.USER_ID = REQ.REQUESTED_BY
         AND REQ.ORACLE_SESSION_ID = SS.AUDSID(+)
         AND REQ.CONCURRENT_QUEUE_ID = QU.CONCURRENT_QUEUE_ID
ORDER BY REQUEST_ID DESC;

Archived log generation volume and count report

V$ARCHIVED_LOG displays information about the archived logs generated by your database that have not yet aged out of your control file.
We are going to use this view to generate a report displaying some useful information.

The BLOCKS column stores the archived log's size in blocks and BLOCK_SIZE stores the block size in bytes, so multiplying them (BLOCKS*BLOCK_SIZE) gives the size in bytes.
ARCHIVED indicates whether the log has been archived and DELETED whether RMAN has deleted it.

The following query displays, per day, the volume in MB of archived logs that were generated, that were deleted, and that have not yet been deleted by RMAN.
  SELECT SUM_ARCH.DAY,
         SUM_ARCH.GENERATED_MB,
         SUM_ARCH_DEL.DELETED_MB,
         SUM_ARCH.GENERATED_MB - SUM_ARCH_DEL.DELETED_MB "REMAINING_MB"
    FROM (  SELECT TO_CHAR (COMPLETION_TIME, 'DD/MM/YYYY') DAY,
                   SUM (ROUND ( (blocks * block_size) / (1024 * 1024), 2))
                      GENERATED_MB
              FROM V$ARCHIVED_LOG
             WHERE ARCHIVED = 'YES'
          GROUP BY TO_CHAR (COMPLETION_TIME, 'DD/MM/YYYY')) SUM_ARCH,
         (  SELECT TO_CHAR (COMPLETION_TIME, 'DD/MM/YYYY') DAY,
                   SUM (ROUND ( (blocks * block_size) / (1024 * 1024), 2))
                      DELETED_MB
              FROM V$ARCHIVED_LOG
             WHERE ARCHIVED = 'YES' AND DELETED = 'YES'
          GROUP BY TO_CHAR (COMPLETION_TIME, 'DD/MM/YYYY')) SUM_ARCH_DEL
   WHERE SUM_ARCH.DAY = SUM_ARCH_DEL.DAY(+)
ORDER BY TO_DATE (DAY, 'DD/MM/YYYY');

DAY         GENERATED_MB  DELETED_MB  REMAINING_MB
29/06/2012        315.77      315.77             0
30/06/2012        158.57      158.57             0
01/07/2012        153.09      153.09             0
............
07/07/2012      1,040.22    1,040.22             0
08/07/2012        637.59      637.59             0
09/07/2012        328.72      174.28        154.44

The following report will display the number of archived logs generated per hour per day:
  SELECT TO_CHAR (COMPLETION_TIME, 'DD/MM/YYYY') DAY,
         SUM (DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '00', 1, NULL))
            "00-01",
         SUM (DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '01', 1, NULL))
            "01-02",
         SUM (DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '02', 1, NULL))
            "02-03",
         SUM (DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '03', 1, NULL))
            "03-04",
         SUM (DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '04', 1, NULL))
            "04-05",
         SUM (DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '05', 1, NULL))
            "05-06",
         SUM (DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '06', 1, NULL))
            "06-07",
         SUM (DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '07', 1, NULL))
            "07-08",
         SUM (DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '08', 1, NULL))
            "08-09",
         SUM (DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '09', 1, NULL))
            "09-10",
         SUM (DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '10', 1, NULL))
            "10-11",
         SUM (DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '11', 1, NULL))
            "11-12",
         SUM (DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '12', 1, NULL))
            "12-13",
         SUM (DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '13', 1, NULL))
            "13-14",
         SUM (DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '14', 1, NULL))
            "14-15",
         SUM (DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '15', 1, NULL))
            "15-16",
         SUM (DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '16', 1, NULL))
            "16-17",
         SUM (DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '17', 1, NULL))
            "17-18",
         SUM (DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '18', 1, NULL))
            "18-19",
         SUM (DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '19', 1, NULL))
            "19-20",
         SUM (DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '20', 1, NULL))
            "20-21",
         SUM (DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '21', 1, NULL))
            "21-22",
         SUM (DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '22', 1, NULL))
            "22-23",
         SUM (DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '23', 1, NULL))
            "23-00",
         COUNT (*) TOTAL
    FROM V$ARCHIVED_LOG
WHERE ARCHIVED='YES'
GROUP BY TO_CHAR (COMPLETION_TIME, 'DD/MM/YYYY')
ORDER BY TO_DATE (DAY, 'DD/MM/YYYY');
   
DAY         00-01  01-02  02-03  03-04  ...  20-21  21-22  22-23  23-00  TOTAL
29/06/2012                               ...                  1      2      9
30/06/2012      1                        ...                         1      6
01/07/2012      1                        ...                         1      6
.................................
07/07/2012      1      1      1          ...      2      1      2      2     29
08/07/2012      1      1      1          ...             1      1      1     19
09/07/2012      2      1                 ...                                  9

So, for instance, on 30/06/12 between 03:00 and 04:00, no log was archived.
On 07/07/12 between 20:00 and 21:00, 2 logs were archived and, over that whole day, 29 logs were archived in total.

The combination of these two scripts is:
  SELECT LOG_HISTORY.*,
         SUM_ARCH.GENERATED_MB,
         SUM_ARCH_DEL.DELETED_MB,
         SUM_ARCH.GENERATED_MB - SUM_ARCH_DEL.DELETED_MB "REMAINING_MB"
    FROM (  SELECT TO_CHAR (COMPLETION_TIME, 'DD/MM/YYYY') DAY,
                   SUM (
                      DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '00', 1, NULL))
                      "00-01",
                   SUM (
                      DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '01', 1, NULL))
                      "01-02",
                   SUM (
                      DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '02', 1, NULL))
                      "02-03",
                   SUM (
                      DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '03', 1, NULL))
                      "03-04",
                   SUM (
                      DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '04', 1, NULL))
                      "04-05",
                   SUM (
                      DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '05', 1, NULL))
                      "05-06",
                   SUM (
                      DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '06', 1, NULL))
                      "06-07",
                   SUM (
                      DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '07', 1, NULL))
                      "07-08",
                   SUM (
                      DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '08', 1, NULL))
                      "08-09",
                   SUM (
                      DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '09', 1, NULL))
                      "09-10",
                   SUM (
                      DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '10', 1, NULL))
                      "10-11",
                   SUM (
                      DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '11', 1, NULL))
                      "11-12",
                   SUM (
                      DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '12', 1, NULL))
                      "12-13",
                   SUM (
                      DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '13', 1, NULL))
                      "13-14",
                   SUM (
                      DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '14', 1, NULL))
                      "14-15",
                   SUM (
                      DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '15', 1, NULL))
                      "15-16",
                   SUM (
                      DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '16', 1, NULL))
                      "16-17",
                   SUM (
                      DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '17', 1, NULL))
                      "17-18",
                   SUM (
                      DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '18', 1, NULL))
                      "18-19",
                   SUM (
                      DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '19', 1, NULL))
                      "19-20",
                   SUM (
                      DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '20', 1, NULL))
                      "20-21",
                   SUM (
                      DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '21', 1, NULL))
                      "21-22",
                   SUM (
                      DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '22', 1, NULL))
                      "22-23",
                   SUM (
                      DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '23', 1, NULL))
                      "23-00",
                   COUNT (*) TOTAL
              FROM V$ARCHIVED_LOG
             WHERE ARCHIVED = 'YES'
          GROUP BY TO_CHAR (COMPLETION_TIME, 'DD/MM/YYYY')) LOG_HISTORY,
         (  SELECT TO_CHAR (COMPLETION_TIME, 'DD/MM/YYYY') DAY,
                   SUM (ROUND ( (blocks * block_size) / (1024 * 1024), 2))
                      GENERATED_MB
              FROM V$ARCHIVED_LOG
             WHERE ARCHIVED = 'YES'
          GROUP BY TO_CHAR (COMPLETION_TIME, 'DD/MM/YYYY')) SUM_ARCH,
         (  SELECT TO_CHAR (COMPLETION_TIME, 'DD/MM/YYYY') DAY,
                   SUM (ROUND ( (blocks * block_size) / (1024 * 1024), 2))
                      DELETED_MB
              FROM V$ARCHIVED_LOG
             WHERE ARCHIVED = 'YES' AND DELETED = 'YES'
          GROUP BY TO_CHAR (COMPLETION_TIME, 'DD/MM/YYYY')) SUM_ARCH_DEL
   WHERE LOG_HISTORY.DAY = SUM_ARCH.DAY AND SUM_ARCH.DAY = SUM_ARCH_DEL.DAY(+)
ORDER BY TO_DATE (LOG_HISTORY.DAY, 'DD/MM/YYYY');