As instances allocate temp space, they soft-reserve it in their private SGA. When a process runs out of allocatable space in the temp tablespace, it asks the other instances in the cluster to free up any unused space. This is done through a Cross Instance Call (CIC). The CIC is protected by the SS enqueue, which is taken on the tablespace (p1) and instance (p2). The request for free space proceeds as follows:
1. Take the SS enqueue on the tablespace/instance. Any process trying to steal space on behalf of this tablespace will then block on this enqueue.
2. Issue the CIC. The space-release CIC serializes on the CIC type, i.e., on the entire node, any process trying to steal space on behalf of ANY tablespace must wait for the CIC to complete. Note that even if you create 10 temp tablespaces, only one CIC can be outstanding at any time.
a. Under the CIC, the SMON processes in the other instances release any free space and acknowledge the completion of the CIC.
b. When all SMONs have acknowledged, the CIC completes.
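The two-level serialization above can be sketched as a toy model. This is illustrative Python only, not Oracle's actual implementation: each tablespace gets its own SS enqueue, while the whole node shares a single CIC slot, so even requests against different tablespaces serialize on the CIC.

```python
import threading

# Toy model of the two-level serialization described above (a sketch,
# not Oracle's real implementation): each temp tablespace has its own
# SS enqueue, but the whole node shares ONE cross-instance-call slot.
ss_enqueues = {"TEMP1": threading.Lock(), "TEMP2": threading.Lock()}
cic_lock = threading.Lock()      # only one CIC can be outstanding per node

active_cics = 0
max_concurrent_cics = 0
counter_lock = threading.Lock()

def steal_space(tablespace):
    """Request free space on behalf of one tablespace."""
    global active_cics, max_concurrent_cics
    with ss_enqueues[tablespace]:    # step 1: SS enqueue (tablespace/instance)
        with cic_lock:               # step 2: node-wide CIC serialization
            with counter_lock:
                active_cics += 1
                max_concurrent_cics = max(max_concurrent_cics, active_cics)
            # ... here the SMONs on the other instances would release
            # free space and acknowledge the CIC ...
            with counter_lock:
                active_cics -= 1

# Requests for DIFFERENT tablespaces still serialize on the single CIC slot.
threads = [threading.Thread(target=steal_space, args=(ts,))
           for ts in ("TEMP1", "TEMP2", "TEMP1", "TEMP2")]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(max_concurrent_cics)  # 1: never more than one CIC in flight
```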
This process may hang, resulting in a temp space allocation imbalance.
A big batch process running on Node 2 can consume a large portion of the temporary tablespace.
Sessions on Node 1 that need temp/sort segments cannot allocate them, since no space is available for that node,
so they request it from Node 2 [SS enqueue], which cannot give it back while the batch process is still running.
Querying GV$SORT_SEGMENT shows FREE_BLOCKS=0.
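A query along these lines shows the per-instance picture (GV$SORT_SEGMENT exposes each instance's cached sort extents):

```sql
SELECT INST_ID, TABLESPACE_NAME, TOTAL_BLOCKS, USED_BLOCKS, FREE_BLOCKS
  FROM GV$SORT_SEGMENT
 ORDER BY INST_ID;
```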
The enqueue shows up in GV$SESSION_WAIT:
SELECT A.INST_ID, A.SID, B.MODULE, A.SEQ#, A.EVENT,
       A.P1TEXT, A.P1, A.P1RAW, A.P2TEXT, A.P2, A.P2RAW,
       A.P3TEXT, A.P3, A.P3RAW,
       A.WAIT_TIME, A.SECONDS_IN_WAIT, A.STATE,
       B.SERIAL#, B.USERNAME, B.OSUSER, B.PADDR, B.LOGON_TIME,
       B.PROCESS, B.SQL_HASH_VALUE, B.SADDR,
       B.ROW_WAIT_OBJ#, B.ROW_WAIT_FILE#, B.ROW_WAIT_BLOCK#, B.ROW_WAIT_ROW#
  FROM GV$SESSION_WAIT A, GV$SESSION B
 WHERE A.INST_ID = B.INST_ID
   AND A.SID = B.SID
   AND B.USERNAME IS NOT NULL
   AND B.TYPE <> 'BACKGROUND'
   AND A.EVENT IN ('db file sequential read', 'db file scattered read',
                   'latch free', 'direct path read', 'direct path write',
                   'enqueue', 'library cache pin', 'library cache load lock',
                   'buffer busy waits', 'free buffer waits',
                   'library cache lock', 'latch: cache buffers chains')
 ORDER BY A.SECONDS_IN_WAIT DESC;
INST_ID | SID | MODULE                 | SEQ#  | EVENT   | P1TEXT    | P1         | P1RAW    | P2TEXT | P2 | P2RAW | P3TEXT | P3 | P3RAW | WAIT_TIME | SECONDS_IN_WAIT | STATE   | SERIAL# | USERNAME
1       | 81  | ? @ecapps1 (TNS V1-V3) | 13883 | enqueue | name|mode | 1397948422 | 53530006 | id1    | 2  | 2     | id2    | 2  | 2     | 0         | 2124            | WAITING | 18872   | APPS
1       | 94  | JDBC Thin Client       | 13883 | enqueue | name|mode | 1397948422 | 53530006 | id1    | 2  | 2     | id2    | 2  | 2     | 0         | 2124            | WAITING | 13167   | APPS
P1 can be decoded to reveal the enqueue type and mode:
SELECT CHR(BITAND(1397948422, -16777216) / 16777215) ||  -- 1397948422 is P1
       CHR(BITAND(1397948422, 16711680) / 65535) LOCK_TYPE,
       MOD(1397948422, 16) LOCK_MODE
  FROM DUAL;
-- returns LOCK_TYPE = 'SS', LOCK_MODE = 6
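The same decode can be reproduced outside the database. A minimal Python sketch, using the P1 value 1397948422 from the wait output above (P1RAW 53530006): the two high bytes are the ASCII lock name, the low bits the mode.

```python
def decode_enqueue_p1(p1):
    """Decode an Oracle enqueue wait's P1 into (lock type, lock mode)."""
    # Top two bytes hold the two ASCII characters of the enqueue name.
    lock_type = chr((p1 >> 24) & 0xFF) + chr((p1 >> 16) & 0xFF)
    # The requested lock mode sits in the low bits.
    lock_mode = p1 % 16
    return lock_type, lock_mode

# P1 from the GV$SESSION_WAIT output above: 1397948422 = 0x53530006
print(decode_enqueue_p1(1397948422))  # ('SS', 6)
```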
Even after the batch process on Node 2 has finished and the blocks in the temporary tablespace have been freed, they are not reallocated to Node 1, so the blocked sessions are never served.
These sessions, and every new session requesting space in the temporary tablespace on Node 1, remain blocked.
Bouncing the database will not resolve the issue, since RAC “remembers” how the temporary tablespace was shared between the nodes at the point of shutdown, so within a few minutes you will face the same problem.
You should create a new temporary tablespace (e.g. TEMP2) and set it as “default temporary” for the database:
ALTER DATABASE DEFAULT TEMPORARY TABLESPACE TEMP2;
After that, kill all the sessions blocked on the SS enqueue.
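The workaround, sketched end to end (the tablespace name TEMP2 is from the text above; the tempfile path and sizes are illustrative assumptions, adjust them to your environment):

```sql
-- Create a new temporary tablespace (file name and sizes are examples)
CREATE TEMPORARY TABLESPACE TEMP2
  TEMPFILE '/u01/oradata/PROD/temp2_01.dbf' SIZE 4G
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M;

-- Make it the database default
ALTER DATABASE DEFAULT TEMPORARY TABLESPACE TEMP2;
```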
If your database is 10gR2, another way to solve this may be at your disposal.
From 10.2.0.3 onwards (a one-off patch on top of 10.1.0.5 is available as part of Bug 4882834), a manual method is provided to free up temp space from the local instance's cache. Each invocation of the command will free up a maximum of 100 extents or 0.1% of cached free space.
The syntax is:
ALTER SESSION SET EVENTS 'immediate trace name drop_segments level tablespace_number+1';
If tablespace_number+1 refers to a tablespace group, space will be freed from all the tablespaces in the group. The command affects only the local instance's cache.
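As a worked example (the tablespace name TEMP and the TS# value are assumptions for illustration; look up the real number in V$TABLESPACE):

```sql
-- Find the tablespace number (TS#) of the temporary tablespace
SELECT TS# FROM V$TABLESPACE WHERE NAME = 'TEMP';

-- If TS# comes back as 3, the event level is TS# + 1 = 4:
ALTER SESSION SET EVENTS 'immediate trace name drop_segments level 4';
```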