Thursday, December 1, 2011

Get email alerts when someone accesses specific tables

I wanted a procedure, run by a job every few seconds, that sends me an email alert every time someone accesses specific tables.
I want to know the username, terminal name, some additional session info and the SQL statement executed.
I used the UTL_MAIL package, which is not installed by default, so I installed it first.
Let's say I want to monitor a table called AR_PAYMENT_SCHEDULES_ALL.
The procedure is:
CREATE OR REPLACE PROCEDURE SYSTEM.CHECKSQL
IS
   TYPE T_CURSOR IS REF CURSOR;

   V_CURSOR      T_CURSOR;
   V_INST        NUMBER;
   V_SID         NUMBER;
   V_SERIAL      NUMBER;
   V_USERNAME    VARCHAR2 (30);
   V_OSUSER      VARCHAR2 (30);
   V_MACHINE     VARCHAR2 (64);
   V_TERMINAL    VARCHAR2 (30);
   V_PROGRAM     VARCHAR2 (48);
   V_MODULE      VARCHAR2 (48);
   V_LOGONTIME   DATE;
   V_SQLTEXT     VARCHAR2 (1000);
   V_OUTPUT      VARCHAR2 (32767);
   V_ALTER       VARCHAR2 (500);
BEGIN
   V_OUTPUT := NULL;

   OPEN V_CURSOR FOR
      SELECT B.INST_ID,
             B.SID,
             B.SERIAL#,
             B.USERNAME,
             B.OSUSER,
             B.MACHINE,
             B.TERMINAL,
             B.PROGRAM,
             B.MODULE,
             B.LOGON_TIME,
             A.SQL_TEXT
        FROM GV$SQL A, GV$SESSION B
       WHERE     A.SQL_ID = B.SQL_ID
             AND A.INST_ID = B.INST_ID
             AND UPPER (A.SQL_TEXT) LIKE '%AR_PAYMENT_SCHEDULES_ALL%'
             AND UPPER (A.SQL_TEXT) NOT LIKE '%V$SESSION%';

   LOOP
      FETCH V_CURSOR
      INTO V_INST,
           V_SID,
           V_SERIAL,
           V_USERNAME,
           V_OSUSER,
           V_MACHINE,
           V_TERMINAL,
           V_PROGRAM,
           V_MODULE,
           V_LOGONTIME,
           V_SQLTEXT;

      EXIT WHEN V_CURSOR%NOTFOUND;
      V_OUTPUT :=
            V_OUTPUT
         || CHR (13)
         || CHR (10)
         || 'INSTANCE: '
         || V_INST
         || CHR (13)
         || CHR (10)
         || 'SID: '
         || V_SID
         || CHR (13)
         || CHR (10)
         || 'SERIAL#: '
         || V_SERIAL
         || CHR (13)
         || CHR (10)
         || 'USERNAME: '
         || V_USERNAME
         || CHR (13)
         || CHR (10)
         || 'OSUSER: '
         || V_OSUSER
         || CHR (13)
         || CHR (10)
         || 'MACHINE: '
         || V_MACHINE
         || CHR (13)
         || CHR (10)
         || 'TERMINAL: '
         || V_TERMINAL
         || CHR (13)
         || CHR (10)
         || 'PROGRAM: '
         || V_PROGRAM
         || CHR (13)
         || CHR (10)
         || 'MODULE: '
         || V_MODULE
         || CHR (13)
         || CHR (10)
         || 'LOGON TIME: '
         || TO_CHAR (V_LOGONTIME, 'DD/MM/RR HH:MI:SS')
         || CHR (13)
         || CHR (10)
         || 'SQL: '
         || V_SQLTEXT
         || CHR (13)
         || CHR (10)
         || '**************************************************';
   END LOOP;

   CLOSE V_CURSOR;

   IF V_OUTPUT IS NOT NULL
   THEN
      V_ALTER := 'ALTER SESSION SET SMTP_OUT_SERVER = ''[YourSMTPserver.YourDomain.com]''';

      EXECUTE IMMEDIATE V_ALTER;

   UTL_MAIL.SEND_ATTACH_VARCHAR2 (
      SENDER         => '[sender email address, can be fake]',
      RECIPIENTS     => '[recipient email addresses, separated by commas]',
      SUBJECT        => 'SQL CAUGHT',
      ATT_FILENAME   => 'sessions.txt',
      ATTACHMENT     => V_OUTPUT);
   END IF;
EXCEPTION
   WHEN OTHERS
   THEN
      -- Errors (e.g. a mail failure) are deliberately swallowed, so the job keeps running
      NULL;
END CHECKSQL;
/
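To have the procedure run every few seconds, a scheduler job can be created along these lines (a sketch; the job name and the 30-second interval are my own example values, not the ones from the original setup):

```sql
BEGIN
   DBMS_SCHEDULER.CREATE_JOB (
      job_name        => 'SYSTEM.CHECKSQL_JOB',         -- hypothetical name
      job_type        => 'STORED_PROCEDURE',
      job_action      => 'SYSTEM.CHECKSQL',
      repeat_interval => 'FREQ=SECONDLY;INTERVAL=30',   -- example interval
      enabled         => TRUE);
END;
/
```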

Whenever a user accesses this table, I get an email with the subject 'SQL CAUGHT' and an attached text file named 'sessions.txt'.
A sample of this file is:
INSTANCE: 2
SID: 8125
SERIAL#: 7646
USERNAME: APPS
OSUSER: oracle
MACHINE: ecapps2
TERMINAL:
PROGRAM:
MODULE: XXIEX_GRP_DATA_PARALLEL
LOGON TIME: 01/12/11 12:15:18
SQL: SELECT /*+ use_nl(ps x) index_ffs(x XXIEX_SCORING_INVOICES_U1) */ :B1 ,X.PARTY_ID ,PS.CUSTOMER_TRX_ID ,PS.TRX_DATE ,PS.TRX_NUMBER ,ROUND(PS.AMOUNT_DUE_ORIGINAL * NVL(PS.EXCHANGE_RATE, 1), 2) AMOUNT_DUE_ORIGINAL ,PS.ACCTD_AMOUNT_DUE_REMAINING AMOUNT_DUE_REMAINING ,PS.DUE_DATE ,PS.ORG_ID ,XXIEX_IEX_CUSTOMER_SCORING_PKG.FUN_GET_APPL_AMOUNT(PS.CUSTOMER_TRX_ID, PS.DUE_DATE, :B2 ) FROM AR_PAYMENT_SCHEDULES_ALL PS ,XXIEX_SCORING_INVOICES X WHERE X.REQUEST_ID = :B1 AND PS.CUSTOMER_TRX_ID = X.CUSTOMER_TRX_ID AND PS.CLASS IN ('INV', 'DM', 'CM')
**************************************************
INSTANCE: 1
SID: 8111
SERIAL#: 53187
USERNAME: AFRAGOU
OSUSER: afragou
MACHINE: CENTRAL-DOMAIN\OTE-8B2-015
TERMINAL: OTE-8B2-015
PROGRAM: oracle@ecdb1 (P005)
MODULE: TOAD 9.7.2.5
LOGON TIME: 01/12/11 12:33:30
SQL: CREATE TABLE  AR_PS_WRONG_DIAKAN_PART012 AS (SELECT /*+  PARALLEL(A,6)    */        *   FROM apps.ar_payment_schedules_all partition(AR_PAYMENT_SCHEDULES_PART012)a WHERE        a.CLASS = 'DM'    AND a.status = 'OP'    AND EXISTS (           SELECT   b.customer_id, COUNT(*),b.trx_date,                    b.amount_due_original               FROM apps.ar_payment_schedules_all b              WHERE b.CLASS = 'DM'                AND b.customer_id = a.customer_id                AND b.trx_date = a.trx_date                AND b.amount_due_original = a.amount_due_original           GROUP BY b.customer_id, b.trx_date, b.amount_due_original             HAVING COUNT (b.customer_id) > 1                AND COUNT (TRUNC (b.trx_date)) > 1                AND COUNT (b.amount_due_original) > 1)                      )
**************************************************

Wednesday, November 16, 2011

OEM: "Tablespace Used (%)" alert is not triggered

There was an incident, where a tablespace filled completely, but I did not get any alert from OEM.

There is Oracle Note 403264.1: "Troubleshooting a Database Tablespace Used(%) Alert problem", where several issues with this alert are discussed.
In my case, there was no entry in DBA_OUTSTANDING_ALERTS,
although in DBA_TABLESPACE_USAGE_METRICS the particular tablespace had exceeded the thresholds I had set.

So, according to Section 4 of this note, this is probably caused by database bugs 7462407 and 5245039.
Additionally, I have a "gap" in my v$tablespace.

Unfortunately, there are no patches for these bugs on Windows platforms, so I created a user-defined metric to get these alerts.
The new metric was based on DBA_TABLESPACE_USAGE_METRICS, which is very convenient, because it queries v$filespace_usage, which keeps track of any MAXSIZE value you may have set on your autoextend datafiles.
If MAXSIZE is set to UNLIMITED, this value equals 32 GB for an 8 KB block size ((2^22 - 1) blocks * 8 KB).

Keep in mind that the size columns in these two views are expressed in blocks.
So, if your tablespace has a block size of 8 KB, multiply these values by 8 to get sizes in KB.

Of course, there is no way for your database to know whether your filesystems will fill up: e.g. if a filesystem has 10 GB of free space and holds an autoextend datafile with MAXSIZE 15 GB, you will not get any alert, even from the default OEM metric, when the filesystem's free space is depleted.
Use the "Filesystem Space Available (%)" alert at host level to check your filesystems' free space.
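The user-defined metric itself can be based on a query along these lines (a sketch; the 85% threshold is an example, set it to whatever warning level you need):

```sql
-- USED_PERCENT already accounts for any MAXSIZE set on autoextend datafiles
SELECT TABLESPACE_NAME,
       ROUND (USED_PERCENT, 2) PCT_USED
  FROM DBA_TABLESPACE_USAGE_METRICS
 WHERE USED_PERCENT > 85;   -- example threshold
```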

Tuesday, November 8, 2011

Direct Path INSERT requires exclusive lock

There is a table where we insert rows using serial direct-path INSERT [INSERT /*+ APPEND */].
The INSERT command is executed periodically by a job.
Manually, I run a DELETE command on this table to purge rows no longer needed.
At some point, the job executes the direct-path INSERT and gets blocked by the DELETE with an "enq: TM - contention" wait.

INST_ID  SID        ID1     ID2  LMODE  BLOCK  REQUEST  OBJECT_NAME   LOCK_TYPE  PROGRAM
2        8016       760488  0    3      1      0        AUD$_BU_IDX2  DML        sqlplus@ecdb2 (TNS V1-V3)
2        ---> 8296  760488  0    0      0      6                      DML        oracle@ecdb2 (J001)
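A listing like the one above can be produced with a query along these lines (a sketch; the "---> " marker for the requesting session and the join to V$LOCK_TYPE, which reports TM locks as "DML", are my own formatting choices):

```sql
SELECT L.INST_ID,
       DECODE (L.REQUEST, 0, TO_CHAR (L.SID), '---> ' || L.SID) SID,
       L.ID1,
       L.ID2,
       L.LMODE,
       L.BLOCK,
       L.REQUEST,
       O.OBJECT_NAME,
       T.NAME LOCK_TYPE,
       S.PROGRAM
  FROM GV$LOCK L,
       GV$SESSION S,
       V$LOCK_TYPE T,
       DBA_OBJECTS O
 WHERE L.TYPE = 'TM'                    -- DML enqueues only
   AND L.INST_ID = S.INST_ID
   AND L.SID = S.SID
   AND L.TYPE = T.TYPE
   AND L.ID1 = O.OBJECT_ID(+)           -- for TM locks, ID1 is the object id
 ORDER BY L.ID1, L.REQUEST;
```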

According to Oracle Database Administrator's Guide 10g Release 2 (10.2), this is normal behavior:

    During direct-path INSERT, the database obtains exclusive locks on the table (or on all partitions of a partitioned table). As a result, users cannot perform any concurrent insert, update, or delete operations on the table, and concurrent index creation and build operations are not permitted. Concurrent queries, however, are supported, but the query will return only the information before the insert operation.

Monday, November 7, 2011

ORA-27102, ORA-27100 on startup in a Windows 32bit platform

We changed the parameter SGA_MAX_SIZE to 2 GB in the spfile and shut down the database.
When we tried to start it up, we got the following errors:
SQL*Plus: Release 10.2.0.1.0 - Production on Sun Nov 6 15:24:45 2011
Copyright (c) 1982, 2005, Oracle.  All rights reserved.
Connected to an idle instance.
SQL> startup
ORA-27102: out of memory
OSD-00022: additional error information
O/S-Error: (OS 8) Not enough storage is available to process this command.
SQL> startup
ORA-27100: shared memory realm already exists

According to Oracle Note 1080317.1 "Summary of Causes for ORA-27102 on Startup":

    If oracle version is 32bit, the maximum SGA_TARGET / SGA_MAX_SIZE is 1.75gb

To start the database, we had to:
1. Stop and start the Oracle service.
2. Use an older pfile, where SGA_MAX_SIZE had a valid value.

SQL*Plus: Release 10.2.0.1.0 - Production on Sun Nov 6 15:53:03 2011
Copyright (c) 1982, 2005, Oracle.  All rights reserved.
Connected to an idle instance.
SQL> startup pfile='E:\ORACLE\product\10.2.0\db_1\database\INIToblicore.ORA'
ORACLE instance started.
Total System Global Area 1258291200 bytes
Fixed Size                  1250572 bytes
Variable Size             419433204 bytes
Database Buffers          830472192 bytes
Redo Buffers                7135232 bytes
Database mounted.
Database opened.

SQL> create spfile='E:\ORACLE\product\10.2.0\db_1\database\SPFILEOBLICORE.ORA' from pfile;
File created.

SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.

SQL> startup
ORACLE instance started.
Total System Global Area 1258291200 bytes
Fixed Size                  1250572 bytes
Variable Size             419433204 bytes
Database Buffers          830472192 bytes
Redo Buffers                7135232 bytes
Database mounted.
Database opened. 

Wednesday, September 7, 2011

Find which SQL operations cause one-pass and multi-pass work area executions

The following query joins V$SQL_WORKAREA to V$SQL to find cursors whose work areas (sorts, hash joins, etc.) did not fit in memory and spilled to temp in one-pass or multi-pass mode:

SELECT S.SQL_FULLTEXT,
       W.OPERATION_TYPE,
       W.ESTIMATED_OPTIMAL_SIZE,
       W.ESTIMATED_ONEPASS_SIZE,
       W.LAST_MEMORY_USED,
       W.LAST_EXECUTION,
       W.OPTIMAL_EXECUTIONS,
       W.ONEPASS_EXECUTIONS,
       W.MULTIPASSES_EXECUTIONS,
       W.LAST_TEMPSEG_SIZE,
       W.MAX_TEMPSEG_SIZE
  FROM V$SQL_WORKAREA W, V$SQL S
 WHERE     W.ADDRESS = S.ADDRESS
       AND W.HASH_VALUE = S.HASH_VALUE
       AND W.CHILD_NUMBER = S.CHILD_NUMBER
       AND (W.ONEPASS_EXECUTIONS > 0 OR W.MULTIPASSES_EXECUTIONS > 0);

Tuesday, July 19, 2011

Oracle Enterprise Manager 11g (11.1.0.1.0) agent installation using response file

Download and extract the agent's binaries into a directory on your server, e.g. /var/opt/oragrid/binaries.
Open the file /var/opt/oragrid/binaries/hpi/response/additional_agent.rsp and make the following changes:
SECURITY_UPDATES_VIA_MYORACLESUPPORT=FALSE
DECLINE_SECURITY_UPDATES=TRUE
#MYORACLESUPPORT_USERNAME=
#MYORACLESUPPORT_PASSWORD=
#COLLECTOR_SUPPORTHUB_URL=
ORACLE_AGENT_HOME_LOCATION=[directory where agent software will be installed, e.g. /var/opt/oragrid]
OMS_HOST=[server's name where OMS is installed]
OMS_PORT=[server's port, default 4889]
AGENT_REGISTRATION_PASSWORD=[sysman's password]

If you are installing the agent on a RAC, also change:
CLUSTER_NODES="[node1's name],[node2's name],..."

If you don't want to mess with the server's central Oracle inventory, use a custom inventory by creating an oraInst.loc file, e.g. /var/opt/oragrid/oraInst.loc.
Open it and add:
inventory_loc=[path where this inventory will be placed, e.g. /var/opt/oragrid/oraInventory]
inst_group=[agent's user group, dba or oinstall]

Go to /var/opt/oragrid/binaries/hpi/agent/install and run:
runInstaller -invPtrLoc /var/opt/oragrid/oraInst.loc -silent -responseFile /var/opt/oragrid/binaries/hpi/response/additional_agent.rsp

If you are installing on a RAC, find the cluster name:
cd $CRS_HOME/bin
./cemutlo -n

Now, run:
runInstaller -invPtrLoc /var/opt/oragrid/oraInst.loc -silent -responseFile /var/opt/oragrid/binaries/hpi/response/additional_agent.rsp -CLUSTER_NAME=[cluster name] CLUSTER_NODES="{[node1's name],[node2's name],...}"

There is a bug that fills emagent.trc pretty fast.
So, at the end, install patch 9756966.

Finally, run [agent's home]/root.sh as root.

In the RAC installation, all one-off patches that were supposed to be installed after the agent's installation failed: although I had specified my custom oraInst.loc in the runInstaller command, OPatch was using the oraInst.loc under the agent's home directory.
You will probably not face this issue if you don't specify a custom inventory.
If it happens to you as well, go to [agent's home]/install/oneoffs/111010 and install the patches manually.

If you try to monitor a 9i database that is not on the 9.2.0.8.0 patchset, you will probably get many ORA-07445 errors in the alert log and core dumps in the database's cdump directory.
To fix this, edit [agent's home]/sysman/config/emd.properties and add the parameter AvoidOCIPing=1.
Then, restart the agent.

Wednesday, July 6, 2011

Oracle Enterprise Manager 11g: Install & configure EM CLI in agents for failovers in Cold Failover Cluster

First of all, install agents on all cluster nodes, then discover and configure the targets running on them.
If a node fails and you move your Oracle service to a surviving node, you will want to inform the OMS of this relocation.
This is accomplished using the Enterprise Manager Command Line Interface (EM CLI).

Download emclikit.jar from your grid installation:

http(s)://[host]:[port]/em/console/emcli/download

The following must be executed for every agent deployed in cluster nodes.

Set your JAVA_HOME environment (Java 1.6.0 or greater is needed):
export JAVA_HOME=[java directory]
export PATH=$JAVA_HOME/bin:$PATH

Under the agent's home, create an "emcli" directory and put emclikit.jar in it.
Run:
java -jar emclikit.jar client -install_dir=[directory where emclikit.jar is placed]

Setup EM CLI:
emcli setup -url="http(s)://[host]:[port]/em" -username=sysman -password=[sysman password] -licans=YES -dir=[directory where emclikit.jar is placed] -trustall -localdirans=YES

You can check targets discovered by your OMS, running the following query:
SELECT * FROM SYSMAN.MGMT_TARGETS ORDER BY TARGET_TYPE;

Or by giving the following command:
emctl config agent listtargets

Now, let's say you relocate a target, e.g. a database called TESTDB, from node1 to node2.
If node1's agent is still running, you can leave everything as it is; an agent can monitor targets running on other nodes.
If node1 fails completely and its agent stops running, you should instruct the OMS to monitor this target with another agent, so execute:
emcli relocate_targets -src_agent=node1:[agent's port, default 3872] -dest_agent=node2:[agent's port, default 3872] -target_name=TESTDB -target_type=oracle_database -copy_from_src -force=YES

If you get a "target [database instance, oracle_database] is broken: cannot compute dynamic properties in time" error:
emctl stop agent

Edit [agent's home]/sysman/config/emd.properties and change value of dynamicPropsComputeTimeout to 120.

emctl start agent
emctl upload agent

Find which queries use a specific segment

If you want to find out which of the cursors currently loaded in the library cache use a specific table or index:
SELECT
   P.INST_ID,
   P.OBJECT_NAME,
   P.OPERATION,
   P.OPTIONS,
   S.SQL_ID,
   S.MODULE,
   S.ACTION,
   S.PARSING_SCHEMA_NAME,
   S.SQL_TEXT
FROM
   GV$SQL_PLAN P,
   GV$SQL S
WHERE P.OBJECT_OWNER = 'AX'
AND P.OBJECT_NAME = 'AX_EVENTS_N1'
AND P.SQL_ID = S.SQL_ID
AND P.INST_ID = S.INST_ID;

You may also use the following query to get the result based on AWR data:
SELECT
   P.OBJECT_NAME,
   P.OPERATION,
   P.OPTIONS,
   S.SQL_ID,
   S.MODULE,
   S.ACTION,
   S.PARSING_SCHEMA_NAME,
   T.SQL_TEXT
FROM
   DBA_HIST_SQL_PLAN P,
   DBA_HIST_SQLSTAT S,
   DBA_HIST_SQLTEXT T
WHERE P.OBJECT_OWNER='AX'
AND P.OBJECT_NAME='AX_EVENTS_N1'
AND P.SQL_ID = S.SQL_ID
AND S.SQL_ID = T.SQL_ID;

Remember, by default AWR collects only the top-N (default N=30) heaviest SQL statements in the library cache at each snapshot.
So, neither query gives you a 100% complete way to monitor the usage of segments.

Wednesday, February 16, 2011

Use COMMENT command to invalidate cursors

I create a table whose LOCATION column contains the value 'ATHENS' 99,998 times and the values 'PATRAS' and 'TRIPOLI' once each.
I also create an index on column LOCATION.
CREATE TABLE EMPLOYEES
(
   EMPID      NUMBER (6),
   LOCATION   VARCHAR2 (10)
);

CREATE INDEX EMPLOYEES_IND
   ON EMPLOYEES (LOCATION);

BEGIN
   FOR I IN 1 .. 99998
   LOOP
      INSERT INTO EMPLOYEES
           VALUES (I, 'ATHENS');
   END LOOP;

   INSERT INTO EMPLOYEES
        VALUES (99999, 'PATRAS');

   INSERT INTO EMPLOYEES
        VALUES (100000, 'TRIPOLI');

   COMMIT;
END;
/

EXEC DBMS_STATS.GATHER_TABLE_STATS(OWNNAME => 'HELPDESK', TABNAME => 'EMPLOYEES', METHOD_OPT => 'For All Indexed Columns Size Auto', CASCADE => TRUE);

I run the following query.
SQL> CONN HELPDESK@ECDBTST
Enter password: ********
Connected.

SQL> VAR B1 VARCHAR2(10)
SQL> EXEC :B1 := 'ATHENS'
PL/SQL procedure successfully completed.

SQL> SELECT *
2 FROM EMPLOYEES
3 WHERE LOCATION=:B1;

EMPID LOCATION
--------------
9087 ATHENS
9088 ATHENS
9089 ATHENS
...
89414 ATHENS
89415 ATHENS
89416 ATHENS

99998 rows selected.

B1 has value 'ATHENS', so CBO should have favored a Full Table Scan (FTS), since I request 99,998 of 100,000 rows to be returned.

SELECT SQL_TEXT, SQL_ID
  FROM V$SQL
 WHERE SQL_TEXT LIKE 'SELECT * FROM EMPLOYEES%';
 
SQL_TEXT                                    SQL_ID
SELECT * FROM EMPLOYEES WHERE LOCATION=:B1  ffyj5j3gjjcpj

SELECT SQL_ID,
       NAME,
       DATATYPE_STRING,
       VALUE_STRING
  FROM V$SQL_BIND_CAPTURE
 WHERE SQL_ID = 'ffyj5j3gjjcpj';

SQL_ID         NAME  DATATYPE_STRING  VALUE_STRING
ffyj5j3gjjcpj  :B1   VARCHAR(32)      ATHENS

SELECT *
  FROM TABLE (DBMS_XPLAN.DISPLAY_CURSOR ('ffyj5j3gjjcpj', NULL, 'ADVANCED'));

SQL_ID ffyj5j3gjjcpj, child number 0
-------------------------------------
SELECT * FROM EMPLOYEES WHERE LOCATION=:B1

Plan hash value: 1342275408

---------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost |
---------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | 7 |
|* 1 | TABLE ACCESS FULL| EMPLOYEES | 98418 | 1057K| 7 |
---------------------------------------------------------------

Query Block Name / Object Alias (identified by operation id):
-------------------------------------------------------------

1 - SEL$1 / EMPLOYEES@SEL$1

Outline Data
-------------

/*+
BEGIN_OUTLINE_DATA
IGNORE_OPTIM_EMBEDDED_HINTS
OPTIMIZER_FEATURES_ENABLE('10.2.0.4')
OPT_PARAM('_b_tree_bitmap_plans' 'false')
OPT_PARAM('_optimizer_cost_model' 'io')
OPT_PARAM('_fast_full_scan_enabled' 'false')
OPT_PARAM('_optimizer_skip_scan_enabled' 'false')
ALL_ROWS
OUTLINE_LEAF(@"SEL$1")
FULL(@"SEL$1" "EMPLOYEES"@"SEL$1")
END_OUTLINE_DATA
*/

Peeked Binds (identified by position):
--------------------------------------

1 - :B1 (VARCHAR2(30), CSID=37): 'ATHENS'

Predicate Information (identified by operation id):
---------------------------------------------------

1 - filter("LOCATION"=:B1)

Column Projection Information (identified by operation id):
-----------------------------------------------------------

1 - "EMPLOYEES"."EMPID"[NUMBER,22], "LOCATION"[VARCHAR2,10]

Note
-----
- cpu costing is off (consider enabling it)

The CBO was correct and chose a FTS.
Now I run the same query, first setting B1 to 'PATRAS':
SQL> EXEC :B1 := 'PATRAS'
PL/SQL procedure successfully completed.

SQL> SELECT *
2 FROM EMPLOYEES
3 WHERE LOCATION=:B1;

EMPID LOCATION
--------------
99999 PATRAS

We would expect CBO to choose an Index Range Scan, since I request only 1 of 100,000 rows to be returned.

SELECT SQL_ID,
       NAME,
       DATATYPE_STRING,
       VALUE_STRING
  FROM V$SQL_BIND_CAPTURE
 WHERE SQL_ID = 'ffyj5j3gjjcpj';

SQL_ID         NAME  DATATYPE_STRING  VALUE_STRING
ffyj5j3gjjcpj  :B1   VARCHAR(32)      ATHENS

SELECT *
  FROM TABLE (DBMS_XPLAN.DISPLAY_CURSOR ('ffyj5j3gjjcpj', NULL, 'ADVANCED'));

SQL_ID ffyj5j3gjjcpj, child number 0
-------------------------------------
SELECT * FROM EMPLOYEES WHERE LOCATION=:B1

Plan hash value: 1342275408

---------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost |
---------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | 7 |
|* 1 | TABLE ACCESS FULL| EMPLOYEES | 98418 | 1057K| 7 |
---------------------------------------------------------------

Query Block Name / Object Alias (identified by operation id):
-------------------------------------------------------------

1 - SEL$1 / EMPLOYEES@SEL$1

Outline Data
-------------

/*+
BEGIN_OUTLINE_DATA
IGNORE_OPTIM_EMBEDDED_HINTS
OPTIMIZER_FEATURES_ENABLE('10.2.0.4')
OPT_PARAM('_b_tree_bitmap_plans' 'false')
OPT_PARAM('_optimizer_cost_model' 'io')
OPT_PARAM('_fast_full_scan_enabled' 'false')
OPT_PARAM('_optimizer_skip_scan_enabled' 'false')
ALL_ROWS
OUTLINE_LEAF(@"SEL$1")
FULL(@"SEL$1" "EMPLOYEES"@"SEL$1")
END_OUTLINE_DATA
*/

Peeked Binds (identified by position):
--------------------------------------

1 - :B1 (VARCHAR2(30), CSID=37): 'ATHENS'

Predicate Information (identified by operation id):
---------------------------------------------------

1 - filter("LOCATION"=:B1)

Column Projection Information (identified by operation id):
-----------------------------------------------------------

1 - "EMPLOYEES"."EMPID"[NUMBER,22], "LOCATION"[VARCHAR2,10]

Note
-----
- cpu costing is off (consider enabling it)

Unfortunately, even now a FTS is chosen.
You may also have noticed that the peeked bind for B1 remains 'ATHENS'.
According to Oracle Note 430208.1: "Bind Peeking By Example":

    The query optimizer peeks at the values of user-defined bind variables on the first invocation of a cursor. This feature lets the optimizer determine the selectivity of any WHERE clause condition, as well as if literals have been used instead of bind variables.
    On subsequent invocations of the cursor, no peeking takes place, and the cursor is shared, based on the standard cursor-sharing criteria, even if subsequent invocations use different bind values.

So, the cursor must be invalidated, if you want a new execution plan to be chosen.
According to the same note:

    To invalidate any cursor, one of the followings can be done:

    1. Issue the command Alter System Flush Shared_Pool;
    2. Delete or change the cursor underlying objects statistics
    3. Grant and/or revoke privileges on any of the cursor underlying objects
    4. Alter cursor underlying objects
    5. Bounce the instance

All these options are usually very difficult to use in a 24/7 production system.
A much easier way to invalidate cursors is to add a simple comment on one of the tables used by your query.

SQL> COMMENT ON TABLE EMPLOYEES IS 'TEST';
Comment created.

Now, the cursor is invalidated:
SELECT SQL_TEXT, SQL_ID
  FROM V$SQL
 WHERE SQL_TEXT LIKE 'SELECT * FROM EMPLOYEES%';
no rows selected

SELECT *
  FROM TABLE (DBMS_XPLAN.DISPLAY_CURSOR ('ffyj5j3gjjcpj', NULL, 'ADVANCED'));
SQL_ID: ffyj5j3gjjcpj cannot be found

I run the query again with B1='PATRAS':
SQL> EXEC :B1 := 'PATRAS'
PL/SQL procedure successfully completed.

SQL> SELECT *
2 FROM EMPLOYEES
3 WHERE LOCATION=:B1;

EMPID LOCATION
--------------
99999 PATRAS

Let's see the execution plan:
SELECT *
  FROM TABLE (DBMS_XPLAN.DISPLAY_CURSOR ('ffyj5j3gjjcpj', NULL, 'ADVANCED'));

SQL_ID ffyj5j3gjjcpj, child number 0
-------------------------------------
SELECT * FROM EMPLOYEES WHERE LOCATION=:B1

Plan hash value: 211055403

-----------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost |
-----------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | 2 |
| 1 | TABLE ACCESS BY INDEX ROWID| EMPLOYEES | 18 | 198 | 2 |
|* 2 | INDEX RANGE SCAN | EMPLOYEES_IND | 18 | | 1 |
-----------------------------------------------------------------------------

Query Block Name / Object Alias (identified by operation id):
-------------------------------------------------------------

1 - SEL$1 / EMPLOYEES@SEL$1
2 - SEL$1 / EMPLOYEES@SEL$1

Outline Data
-------------

/*+
BEGIN_OUTLINE_DATA
IGNORE_OPTIM_EMBEDDED_HINTS
OPTIMIZER_FEATURES_ENABLE('10.2.0.4')
OPT_PARAM('_b_tree_bitmap_plans' 'false')
OPT_PARAM('_optimizer_cost_model' 'io')
OPT_PARAM('_fast_full_scan_enabled' 'false')
OPT_PARAM('_optimizer_skip_scan_enabled' 'false')
ALL_ROWS
OUTLINE_LEAF(@"SEL$1")
INDEX_RS_ASC(@"SEL$1" "EMPLOYEES"@"SEL$1" ("EMPLOYEES"."LOCATION"))
END_OUTLINE_DATA
*/

Peeked Binds (identified by position):
--------------------------------------

1 - :B1 (VARCHAR2(30), CSID=37): 'PATRAS'

Predicate Information (identified by operation id):
---------------------------------------------------

2 - access("LOCATION"=:B1)

Column Projection Information (identified by operation id):
-----------------------------------------------------------

1 - "EMPLOYEES"."EMPID"[NUMBER,22], "LOCATION"[VARCHAR2,10]
2 - "EMPLOYEES".ROWID[ROWID,10], "LOCATION"[VARCHAR2,10]

Note
-----
- cpu costing is off (consider enabling it)

We can see the CBO used an Index Range Scan this time.
The good thing is that, although COMMENT is a DDL statement, objects dependent on the table you commented on will not be invalidated.
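You can verify this by checking the status of the table's dependents before and after the COMMENT (a sketch; substitute your own owner and table name):

```sql
SELECT D.OWNER, D.NAME, D.TYPE, O.STATUS
  FROM DBA_DEPENDENCIES D, DBA_OBJECTS O
 WHERE D.REFERENCED_OWNER = 'HELPDESK'
   AND D.REFERENCED_NAME = 'EMPLOYEES'
   AND O.OWNER = D.OWNER
   AND O.OBJECT_NAME = D.NAME
   AND O.OBJECT_TYPE = D.TYPE;
```

Dependent objects should still show STATUS = 'VALID' after the comment; only the cursors are aged out of the library cache.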

Friday, February 11, 2011

10g & 11g: In new patchsets auditing by session is "obsolete"

We create a table:
CREATE TABLE HELPDESK.AUD_TEST(TEST NUMBER);

We start auditing user helpdesk "insert table" statements by session:
AUDIT INSERT TABLE BY HELPDESK BY SESSION;

We verify it:
SELECT *
  FROM DBA_STMT_AUDIT_OPTS A
 WHERE A.USER_NAME = 'HELPDESK';

  
USER_NAME  PROXY_NAME  AUDIT_OPTION  SUCCESS     FAILURE
HELPDESK               INSERT TABLE  BY SESSION  BY SESSION

We open SQL*Plus, connect as helpdesk and insert 6 rows into the aud_test table:
SQL> CONN HELPDESK@ECDB1
Enter password: ********
Connected.

SQL> INSERT INTO AUD_TEST VALUES (1);
1 row created.

SQL> COMMIT;
Commit complete.

SQL> INSERT INTO AUD_TEST VALUES (1);
1 row created.

SQL> INSERT INTO AUD_TEST VALUES (1);
1 row created.

SQL> INSERT INTO AUD_TEST VALUES (1);
1 row created.

SQL> INSERT INTO AUD_TEST VALUES (1);
1 row created.

SQL> INSERT INTO AUD_TEST VALUES (1);
1 row created.

SQL> COMMIT;
Commit complete.

We query aud$:
SELECT SESSIONID,
       USERID,
       ACTION#,
       OBJ$CREATOR,
       OBJ$NAME,
       SES$ACTIONS,
       NTIMESTAMP#
  FROM SYS.AUD$
 WHERE USERID = 'HELPDESK';

SESSIONID  USERID    ACTION#  OBJ$CREATOR  OBJ$NAME  SES$ACTIONS       NTIMESTAMP#
290823972  HELPDESK  103      HELPDESK     AUD_TEST  ------S---------  11/2/2011 08:48:17
290823972  HELPDESK  103      HELPDESK     AUD_TEST  ------S---------  11/2/2011 08:48:19
290823972  HELPDESK  103      HELPDESK     AUD_TEST  ------S---------  11/2/2011 08:48:20
290823972  HELPDESK  103      HELPDESK     AUD_TEST  ------S---------  11/2/2011 08:48:21
290823972  HELPDESK  103      HELPDESK     AUD_TEST  ------S---------  11/2/2011 08:48:21
290823972  HELPDESK  103      HELPDESK     AUD_TEST  ------S---------  11/2/2011 08:47:49

You would expect to see only one row for the 6 insert statements executed by one session against the same table.
Instead, we get 6 rows in AUD$, as if we had chosen the "by access" auditing method.
According to Oracle Note 6023472.8:

    Bug 6023472 Collection of changes to auditing

    This issue is fixed in:
    10.2.0.5 (Server Patch Set)
    11.1.0.7 (Server Patch Set)
    11.2.0.1 (Base Release)

    This fix covers several changes to audit behaviour:

    1. The LOGON (action# = 100) audit records are no longer updated. So, when a session ends and if sessions are audited, a LOGOFF (action# = 101) audit record will be written as a separate audit record.
    2. Auditing configured as 'By Session' will no longer produce one audit record per session. Each occurrence of the audit event will result in newer audit records since no audit record is updated.
    3. The index SYS.I_AUD1 on the AUD$ table is dropped since there are no longer any updates to AUD$ with this fix.
    4. FGA_LOG$ will not have PLHOL column.
    5. OS/XML Audit filenames will have a filename format of the form:
    ___.
    6. The default value for Max Audit FileSize is 1000KB and Max Audit FileAge is 1 Day. This means that once one of these max limits are reached, the audit records will be written in a new audit file with same "___" part, but with a new "".

According to (2), auditing by session is, in essence, no longer supported from these patchsets onwards.

Wednesday, January 19, 2011

RAC: Library Cache locks and Pins

Scripts presented here and here are not very helpful in RAC environments, because x$ tables are local to each instance; if the blocker is on a different instance than the blocked session, you will get no results.
In RAC, query GV$GES_BLOCKING_ENQUEUE instead:
SELECT INST_ID,
       HANDLE,
       GRANT_LEVEL,
       REQUEST_LEVEL,
       RESOURCE_NAME1,
       RESOURCE_NAME2,
       PID,
       TRANSACTION_ID0,
       TRANSACTION_ID1,
       OWNER_NODE,
       BLOCKED,
       BLOCKER,
       STATE
  FROM GV$GES_BLOCKING_ENQUEUE;

Querying GV$GES_BLOCKING_ENQUEUE, you will see in the RESOURCE_NAME1 column entries like "[0x1b370d][0x81b0000],[LA]" for library cache locks and "[0x1b370d][0x81b0000],[NA]" for library cache pins.
The range for library cache locks is LA to LZ and for library cache pins NA to NZ.

The blocker is the session with value "1" in the BLOCKER column.
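To map such an entry back to a session, the PID column can be joined to GV$PROCESS and GV$SESSION (a sketch; I am assuming here that PID holds the OS process id matching GV$PROCESS.SPID):

```sql
SELECT E.INST_ID,
       S.SID,
       S.SERIAL#,
       S.USERNAME,
       E.RESOURCE_NAME1,
       E.BLOCKED,
       E.BLOCKER
  FROM GV$GES_BLOCKING_ENQUEUE E,
       GV$PROCESS P,
       GV$SESSION S
 WHERE P.INST_ID = E.INST_ID
   AND P.SPID = TO_CHAR (E.PID)   -- assumed mapping of PID to OS process id
   AND S.INST_ID = P.INST_ID
   AND S.PADDR = P.ADDR;
```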

Tuesday, January 18, 2011

Move huge database's datafiles to new filesystems

Our eBS database is 14+ TB across 2100+ datafiles, and we have to move all of them to new filesystems, taking their physical reads into account, so that no more than one heavily-read datafile ends up on each filesystem.
We will be given 100 new 200 GB filesystems and we will leave 10 GB empty in each, so only 190 GB per filesystem will be used.
The old naming convention is "/oracleXXX/oradata/" and the new one will be "/oraerpXXX/oradata".

A manual approach is almost impossible to implement for a database of this size, even if we ignore the "physical reads" requirement.

To order our datafiles by physical reads (or writes) we can use the following script, which uses AWR snapshots (10g+):
SELECT A.FILENAME,
         D.TABLESPACE_NAME,
         D.CONTENTS,
         E.BYTES,
         NVL (SUM (A.PHYRDS), 0) READS,
         NVL (SUM (A.PHYWRTS), 0) WRITES
    FROM DBA_HIST_FILESTATXS A,
         V$TABLESPACE C,
         DBA_TABLESPACES D,
         DBA_DATA_FILES E
   WHERE     A.SNAP_ID = 15400   --LAST SNAPSHOT
         AND A.TS# = C.TS#
         AND C.NAME = D.TABLESPACE_NAME
         AND A.FILENAME = E.FILE_NAME
GROUP BY A.FILENAME,
         D.TABLESPACE_NAME,
         D.CONTENTS,
         E.BYTES
UNION
  SELECT A.FILENAME,
         D.TABLESPACE_NAME,
         D.CONTENTS,
         E.BYTES,
         NVL (SUM (A.PHYRDS), 0) READS,
         NVL (SUM (A.PHYWRTS), 0) WRITES
    FROM DBA_HIST_TEMPSTATXS  A,
         V$TABLESPACE C,
         DBA_TABLESPACES D,
         DBA_TEMP_FILES E
   WHERE     A.SNAP_ID = 15400   --LAST SNAPSHOT
         AND A.TS# = C.TS#
         AND C.NAME = D.TABLESPACE_NAME
         AND A.FILENAME = E.FILE_NAME
GROUP BY A.FILENAME,
         D.TABLESPACE_NAME,
         D.CONTENTS,
         E.BYTES
ORDER BY 5 DESC;

Or we may use a script reading GV$FILESTAT, GV$TEMPSTAT, which keep statistics since instance startup:
SELECT B.FILE_NAME,
         C.TABLESPACE_NAME,
         C.CONTENTS,
         B.BYTES,
         NVL (SUM (A.PHYRDS), 0) READS,
         NVL (SUM (A.PHYWRTS), 0) WRITES
    FROM GV$FILESTAT A, DBA_DATA_FILES B, DBA_TABLESPACES C
   WHERE A.FILE# = B.FILE_ID AND B.TABLESPACE_NAME = C.TABLESPACE_NAME
GROUP BY B.FILE_NAME,
         C.TABLESPACE_NAME,
         C.CONTENTS,
         B.BYTES
UNION
  SELECT B.FILE_NAME,
         C.TABLESPACE_NAME,
         C.CONTENTS,
         B.BYTES,
         NVL (SUM (A.PHYRDS), 0) READS,
         NVL (SUM (A.PHYWRTS), 0) WRITES
    FROM GV$TEMPSTAT A, DBA_TEMP_FILES B, DBA_TABLESPACES C
   WHERE A.FILE# = B.FILE_ID AND B.TABLESPACE_NAME = C.TABLESPACE_NAME
GROUP BY B.FILE_NAME,
         C.TABLESPACE_NAME,
         C.CONTENTS,
         B.BYTES
ORDER BY 5 DESC;                 --6 for WRITES

First, we create these tables:
CREATE TABLE HELPDESK.DISKS
(
   DISK_ID     NUMBER,
   SIZE_MAX    NUMBER,
   SIZE_USED   NUMBER,
   PATH_NAME   VARCHAR2 (513 BYTE)
)
TABLESPACE HELPDESK
PCTUSED 0
PCTFREE 10
INITRANS 1
MAXTRANS 255
STORAGE (INITIAL 128 K
         NEXT 128 K
         MINEXTENTS 1
         MAXEXTENTS UNLIMITED
         PCTINCREASE 0
         BUFFER_POOL DEFAULT)
LOGGING
NOCOMPRESS
NOCACHE
NOPARALLEL
MONITORING;

CREATE TABLE HELPDESK.DFILES
(
   DISK_ID           NUMBER,
   NAME              VARCHAR2 (513 BYTE),
   NAME_NEW          VARCHAR2 (513 BYTE),
   BYTES             NUMBER,
   TABLESPACE_NAME   VARCHAR2 (30 BYTE),
   CONTENTS          VARCHAR2 (9 BYTE),
   READS             NUMBER,
   WRITES            NUMBER,
   BACKUP_COM        VARCHAR2 (2000 BYTE),
   ALTER_COM         VARCHAR2 (2000 BYTE)
)
TABLESPACE HELPDESK
PCTUSED 0
PCTFREE 10
INITRANS 1
MAXTRANS 255
STORAGE (INITIAL 128 K
         NEXT 128 K
         MINEXTENTS 1
         MAXEXTENTS UNLIMITED
         PCTINCREASE 0
         BUFFER_POOL DEFAULT)
LOGGING
NOCOMPRESS
NOCACHE
NOPARALLEL
MONITORING;

GRANT ALL ON HELPDESK.DISKS TO SYSTEM;

GRANT ALL ON HELPDESK.DFILES TO SYSTEM;

In the DISKS table, we will keep track of how much space of each new filesystem has been allocated (SIZE_USED).
In DFILES, we will store each datafile's new location (NAME_NEW) and automatically generate both the commands needed for NetBackup to restore the files to their new locations (BACKUP_COM) and the "ALTER DATABASE RENAME FILE [old datafile location] TO [new datafile location]" commands (ALTER_COM).

We create and run the following procedure:
CREATE OR REPLACE PROCEDURE SYSTEM.DISK_GROUPING
IS
   --ORDERING IN THESE CURSORS IS NEEDED, IF YOU WANT TO TAKE INTO ACCOUNT THE PHYSICAL READS
   CURSOR C1
   IS
        SELECT NAME, BYTES, TABLESPACE_NAME
          FROM HELPDESK.DFILES
      ORDER BY READS DESC;

   CURSOR C2
   IS
        SELECT DISK_ID,
               SIZE_MAX,
               SIZE_USED,
               PATH_NAME
          FROM HELPDESK.DISKS
      ORDER BY SIZE_USED;
BEGIN
   EXECUTE IMMEDIATE 'TRUNCATE TABLE HELPDESK.DISKS';

   FOR I IN 1 .. 100
   LOOP        --CREATE 100 DISKS WITH 190GB (204010946560 bytes) USABLE SPACE
      INSERT INTO HELPDESK.DISKS (DISK_ID,
                                  SIZE_MAX,
                                  SIZE_USED,
                                  PATH_NAME)
           VALUES (I,
                   204010946560,
                   0,
                   '/oraerp' || I || '/oradata/');
   END LOOP;

   EXECUTE IMMEDIATE 'TRUNCATE TABLE HELPDESK.DFILES';

   INSERT INTO HELPDESK.DFILES (NAME,
                                TABLESPACE_NAME,
                                CONTENTS,
                                BYTES,
                                READS,
                                WRITES)        --PUT DATAFILES IN DFILES TABLE
        SELECT B.FILE_NAME,
               C.TABLESPACE_NAME,
               C.CONTENTS,
               B.BYTES,
               NVL (SUM (A.PHYRDS), 0) READS,
               NVL (SUM (A.PHYWRTS), 0) WRITES
          FROM GV$FILESTAT A, DBA_DATA_FILES B, DBA_TABLESPACES C
         WHERE A.FILE# = B.FILE_ID AND B.TABLESPACE_NAME = C.TABLESPACE_NAME
      GROUP BY B.FILE_NAME,
               C.TABLESPACE_NAME,
               C.CONTENTS,
               B.BYTES;

   INSERT INTO HELPDESK.DFILES (NAME,
                                TABLESPACE_NAME,
                                CONTENTS,
                                BYTES,
                                READS,
                                WRITES)        --PUT TEMPFILES IN DFILES TABLE
        SELECT B.FILE_NAME,
               C.TABLESPACE_NAME,
               C.CONTENTS,
               B.BYTES,
               NVL (SUM (A.PHYRDS), 0) READS,
               NVL (SUM (A.PHYWRTS), 0) WRITES
          FROM GV$TEMPSTAT A, DBA_TEMP_FILES B, DBA_TABLESPACES C
         WHERE A.FILE# = B.FILE_ID AND B.TABLESPACE_NAME = C.TABLESPACE_NAME
      GROUP BY B.FILE_NAME,
               C.TABLESPACE_NAME,
               C.CONTENTS,
               B.BYTES;

   COMMIT;

   FOR DFILE IN C1
   LOOP
      FOR DISK IN C2
      LOOP
         IF DFILE.BYTES < DISK.SIZE_MAX - DISK.SIZE_USED
         THEN
            UPDATE HELPDESK.DFILES
               SET DISK_ID = DISK.DISK_ID,
                   NAME_NEW =
                      DISK.PATH_NAME
                       || SUBSTR (DFILE.NAME,
                                  --KEEP ONLY THE FILENAME: TEXT AFTER THE LAST '/'
                                  INSTR (DFILE.NAME, '/', -1) + 1)
             WHERE NAME = DFILE.NAME;

            UPDATE HELPDESK.DFILES
               SET BACKUP_COM = 'change ' || NAME || ' to ' || NAME_NEW,
                   ALTER_COM =
                         'ALTER DATABASE RENAME FILE '''
                      || NAME
                      || ''' TO '''
                      || NAME_NEW
                      || ''';'
             WHERE NAME = DFILE.NAME;

            UPDATE HELPDESK.DISKS
               SET SIZE_USED = SIZE_USED + DFILE.BYTES
             WHERE DISK_ID = DISK.DISK_ID;

            COMMIT;
            EXIT;
         END IF;
      END LOOP;
   END LOOP;
END DISK_GROUPING;
/
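Once DISK_GROUPING has run, the generated commands can be spooled to flat files straight from SQL*Plus. A minimal sketch (the output file names below are just examples):

```sql
-- Spool the generated NetBackup change commands and the
-- ALTER DATABASE RENAME FILE statements to flat files.
SET HEADING OFF PAGESIZE 0 FEEDBACK OFF LINESIZE 2000 TRIMSPOOL ON

SPOOL /tmp/netbackup_changes.txt
SELECT BACKUP_COM FROM HELPDESK.DFILES ORDER BY DISK_ID;
SPOOL OFF

SPOOL /tmp/rename_datafiles.sql
SELECT ALTER_COM FROM HELPDESK.DFILES ORDER BY DISK_ID;
SPOOL OFF
```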

Now, querying DISKS, we see:
DISK_ID  SIZE_MAX         SIZE_USED        PATH_NAME
      1  204,010,946,560  155,508,015,104  /oraerp1/oradata/
      2  204,010,946,560  153,190,662,144  /oraerp2/oradata/
      3  204,010,946,560  151,382,917,120  /oraerp3/oradata/
      4  204,010,946,560  164,322,344,960  /oraerp4/oradata/
    ...  ...              ...              ...
     98  204,010,946,560  151,424,925,696  /oraerp98/oradata/
     99  204,010,946,560  151,666,032,640  /oraerp99/oradata/
    100  204,010,946,560  152,240,717,824  /oraerp100/oradata/

So, for instance, on "/oraerp3/oradata/" about 141GB (151,382,917,120 bytes) of the 200GB total will be used.

Querying DFILES (SELECT DISK_ID, NAME, NAME_NEW, READS FROM HELPDESK.DFILES ORDER BY DISK_ID, READS DESC;), we see:
DISK_ID  NAME                                                      NAME_NEW                                                  READS
      1  /oracle89/oradata/ar_payment_schedules_all_011.dbf        /oraerp1/oradata/ar_payment_schedules_all_011.dbf        2,235,050,034
      1  /oracle36/oradata/xxra_interfacex05.dbf                   /oraerp1/oradata/xxra_interfacex05.dbf                      26,396,759
      1  /oracle8/oradata/user_data18.dbf                          /oraerp1/oradata/user_data18.dbf                            20,936,923
      1  /oracle87/oradata/user_index81.dbf                        /oraerp1/oradata/user_index81.dbf                           19,653,078
      1  /oracle78/oradata/ar_receivable_applications_all_007.dbf  /oraerp1/oradata/ar_receivable_applications_all_007.dbf     9,722,306
      1  /oracle92/oradata/ax_sle_lines_078.dbf                    /oraerp1/oradata/ax_sle_lines_078.dbf                        7,687,651
      1  /oracle78/oradata/czx26.dbf                               /oraerp1/oradata/czx26.dbf                                   2,474,694
    ...  ...                                                       ...                                                       ...

Observe how the datafiles are spread across the filesystems according to their physical reads.

CAUTION: If two or more datafiles in different old filesystems share the same name (e.g. "user_data18.dbf"), you must verify they have not been placed in the same new filesystem, since two files cannot share the same path.
This procedure does not guarantee that no such collisions will occur.
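Before running the restore, a check like the following can flag such collisions, since two old files with the same name assigned to the same new filesystem will produce the same NAME_NEW:

```sql
-- Flag new locations assigned to more than one old datafile,
-- i.e. identically named files mapped to the same new filesystem.
  SELECT NAME_NEW, COUNT (*) AS DUPLICATES
    FROM HELPDESK.DFILES
GROUP BY NAME_NEW
  HAVING COUNT (*) > 1;
```

If this query returns any rows, manually reassign one of the conflicting datafiles to a different disk before generating the restore commands.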