Thursday 30 October 2014

ORA-01552: cannot use system rollback segment for non-system tablespace 'USERS'


1) Find the status of your rollback segments using the following query. You will find that the rollback segment's status is OFFLINE; alter it online.

SQL> select segment_name, tablespace_name, initial_extent,status
2 from dba_rollback_segs;

SEGMENT_NAME    TABLESPACE_NAME   INITIAL_EXTENT   STATUS
--------------  ---------------   --------------   -------
SYSTEM          SYSTEM            57344            ONLINE
ASHOKAN         SYSTEM            16384            OFFLINE


2) Bring the rollback segment online.

SQL> alter rollback segment ASHOKAN online;
Rollback segment altered.

Now you should be able to create and manipulate your tables. Run the following to see whether your undo management is MANUAL or AUTO.

SQL> show parameter undo

NAME                                 TYPE        VALUE
------------------------------------ ----------- --------------------
undo_management                      string      MANUAL

To switch to automatic undo management (the change takes effect after an instance restart):

SQL> ALTER SYSTEM SET UNDO_MANAGEMENT=AUTO scope=spfile;

3) If the value of undo_management is AUTO, then do the following.

Create an UNDO tablespace; its undo segments are created automatically and brought online immediately.

SQL> create undo tablespace UNDOTBS2
2 datafile '/oracle/ora11/oradata/orcl/undotbs01.dbf' size 10M;
Tablespace created.

Then remove your manual undo rollback segments and verify that DML now works:

SQL> create table mytable (mycolumns number) tablespace users;
Table created.

You should no longer hit the error.

Oracle recommends using Automatic Undo Management (AUM) for your rollback segments. If the database is running in AUM mode, it must have at least one UNDO tablespace.

Note that this error can also appear when the database is using Automatic Undo Management but no UNDO tablespace is available at the time.

You can find out whether the database is using Automatic Undo Management by checking the following parameters:

SQL> show parameter undo

NAME                                 TYPE        VALUE
------------------------------------ ----------- --------------------
undo_management                      string      AUTO
undo_retention                       integer     900
undo_suppress_errors                 boolean     TRUE
undo_tablespace                      string

Wednesday 30 July 2014

ORA-00445: background process "m001" did not start after 120 seconds ---ORA-00445: background process "J003" did not start after 120 seconds

ORA-00445: background process "m001" did not start after 120 seconds 
    Incident details in: /app/u01/app/oracle/diag/rdbms/incident/incdir_3721/db1_mmon_7417_i3721.trc 
    ERROR: Unable to normalize symbol name for the following short stack (at offset 2): 
    Tue Jun 21 03:03:06 2011 
    ORA-00445: background process "J003" did not start after 120 seconds 
    or 
    Waited for process W002 to initialize for 60 seconds 
    The system appears to be running very slowly and defunct processes can appear. 
    Changes 
    REDHAT 5 kernel 2.6.18-194.el5 #1 SMP Tue Mar 16 
    Oracle 11.2.0.2 Single Instance 
    IBM: Linux on System z 
    Cause 
    Recent linux kernels have a feature called Address Space Layout Randomization (ASLR). 
    ASLR  is a feature that is activated by default on some of the newer linux distributions. 
    It is designed to load shared memory objects in random addresses. 
    In Oracle, multiple processes map a shared memory object at the same address across the processes. 
     
    With ASLR turned on Oracle cannot guarantee the availability of this shared memory address. 
    This conflict in the address space means that a process trying to attach a shared memory object to a specific address may not be able to do so, resulting in a failure in shmat subroutine. 
     
    However, on a subsequent retry (using a new process) the shared memory attachment may work. 
    The result is a random, intermittent set of failures. 
    Solution 
    It should be noted that this problem has only been positively diagnosed in Redhat 5 and Oracle 11.2.0.2. 
    It is also likely, as per unpublished BUG:8527473, that this issue will reproduce on generic Linux platforms running any Oracle 11.2.0.x on Redhat/OEL kernels which have ASLR.  
     
    ASLR also exists in the SLES 10 and SLES 11 kernels, where it is also turned on by default; however, to date no problem has been seen on SuSE. 
     
    You can verify whether ASLR is being used as follows: 
     
     # /sbin/sysctl -a | grep randomize 
    kernel.randomize_va_space = 1 
     
    If the parameter is set to any value other than 0 then ASLR is in use. 
    To permanently disable ASLR on Redhat 5, add or modify these parameters in /etc/sysctl.conf: 
    kernel.randomize_va_space=0 
    kernel.exec-shield=0 
     
    You need to reboot for kernel.exec-shield parameter to take effect.  
    Note that both kernel parameters are required for ASLR to be switched off. 
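As a quick scripted check, here is a minimal Python sketch (the function name and sample strings are my own) that applies the rule above to the output of /sbin/sysctl: any value other than 0 means ASLR is in use.

```python
def aslr_enabled(sysctl_output: str) -> bool:
    """Decide whether ASLR is on from sysctl output such as
    'kernel.randomize_va_space = 1'. Any value other than 0
    means ASLR is in use."""
    for line in sysctl_output.splitlines():
        if "randomize_va_space" in line:
            # take everything after '=' and compare against "0"
            return line.split("=", 1)[1].strip() != "0"
    raise ValueError("kernel.randomize_va_space not found")

print(aslr_enabled("kernel.randomize_va_space = 1"))  # True  (ASLR on)
print(aslr_enabled("kernel.randomize_va_space = 0"))  # False (ASLR off)
```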

Friday 30 May 2014

ORA-00600: internal error code, arguments: [kcratr_scan_lastbwr], [], [], []



The Problem:

Oracle Database remains in mounted state and can't be opened. If an open is forced, it throws the error (ORA-00600) shown below:

SQL> alter database open;
alter database open
*
ERROR at line 1:
ORA-00600: internal error code, arguments: [kcratr_scan_lastbwr], [], [], [],
[], [], [], [], [], [], [], []

SOLUTION:

Use the 'recover database' command as shown below:

SQL> shutdown immediate
ORA-01109: database not open


Database dismounted.
ORACLE instance shut down.
SQL> startup mount
ORACLE instance started.

Total System Global Area 2.3517E+10 bytes
Fixed Size                  2213456 bytes
Variable Size            1.4496E+10 bytes
Database Buffers         8858370048 bytes
Redo Buffers              160751616 bytes
Database mounted.
SQL> recover database;
Media recovery complete.

SQL> alter database open;

Database altered.

Monday 7 April 2014

Bulk collect is the bulk binding syntax for select statements.(Performance tuning - PL/SQL)


Bulk Collect


Executing SQL statements in PL/SQL programs causes a context switch between the PL/SQL engine and the SQL engine.
Too many context switches may degrade performance dramatically. To reduce the number of these context
switches we can use a feature named bulk binding. Bulk binding lets us transfer rows between the SQL engine and the PL/SQL engine as collections.
Bulk binding is available for SELECT, INSERT, DELETE and UPDATE statements.

Bulk collect is the bulk binding syntax for SELECT statements. All examples below are simplified versions of live cases I have seen.

One thing I regularly come across is that developers tend to use cursor FOR loops to process data. They declare a cursor, open it, fetch from it row by row in a loop and process each row as they fetch it.


SQL> create table a_all_objects as select * from all_objects;

Table created.

SQL> select count(*) from a_all_objects;

  COUNT(*)
----------
     90154

SQL> insert into a_all_objects select * from a_all_objects;

90154 rows created.

SQL> /

180308 rows created.

SQL> commit;

Commit complete.

SQL> set timing on
SQL> declare
cursor c1 is select OBJECT_NAME from a_all_objects;
      rec1 c1%rowtype;
  begin
  open c1;
  loop
     fetch c1 into rec1;
     exit when c1%notfound;

     null;

  end loop;
  end;
  /

PL/SQL procedure successfully completed.
Elapsed: 00:00:06.17
SQL>
SQL> declare
 cursor c1 is select OBJECT_NAME from a_all_objects;
  type c1_type is table of c1%rowtype;
  rec1 c1_type;
  begin
  open c1;

     fetch c1 bulk collect into rec1;


  end;
  /

PL/SQL procedure successfully completed.
Elapsed: 00:00:00.37
SQL>
SQL>
SQL> alter system flush buffer_cache;

System altered.

As can be clearly seen, bulk collecting the rows shows a huge performance improvement over fetching row by row.

The above method (which fetches all the rows at once) may not be suitable in every case. When there are many rows to process,
we can limit the number of rows per fetch, process that batch and fetch again; otherwise process memory grows with every row fetched.


SQL> declare
cursor c1 is select object_name from a_all_objects;
  type c1_type is table of c1%rowtype;
  rec1 c1_type;
  begin
  open c1;
  loop
     fetch c1 bulk collect into rec1 limit 200;
     for i in 1..rec1.count loop
             null;
     end loop;
     exit when c1%notfound;
  end loop;
  end;
  /
PL/SQL procedure successfully completed.

Elapsed: 00:00:00.35
SQL>
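As an aside, the same row-at-a-time versus batch trade-off exists in client APIs outside PL/SQL. Here is a minimal sketch using Python's built-in sqlite3 module (the table name and row count are made up), where cursor.fetchmany(200) plays the role of FETCH ... BULK COLLECT INTO ... LIMIT 200:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE a_all_objects (object_name TEXT)")
cur.executemany("INSERT INTO a_all_objects VALUES (?)",
                [("OBJ_%d" % i,) for i in range(1000)])

cur.execute("SELECT object_name FROM a_all_objects")
total = 0
while True:
    batch = cur.fetchmany(200)   # at most 200 rows per call, like LIMIT 200
    if not batch:                # an empty batch means we are done
        break
    total += len(batch)          # process the batch here
print(total)  # 1000
```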

Monday 24 March 2014

ORACLE: sql - JOIN vs IN vs EXISTS and JOIN vs NOT IN vs NOT EXISTS the logical difference


There is a common misconception that IN behaves identically to EXISTS or JOIN in terms of returned results.

This is simply not true. To see why not, let's review what each statement does.



IN:

Returns true if a specified value matches any value in a subquery or a list.

Exists:

Returns true if a subquery contains any rows.

Join:

Joins 2 resultsets on the joining column.



Read quickly, these descriptions look pretty much the same so far, don't they?

The difference comes when you take into account the "dreaded" THREE-VALUED LOGIC.

Let's review that also:


-- this is true
SELECT 1 from dual  WHERE 1 = 1;

-- this is false
SELECT 1 from dual  WHERE 1 = 0;

-- this is unknown - it is usually expected to be false, but that only shows
-- a misunderstanding of nulls. It's not false, it's only treated as false in the filter
SELECT 1 from dual  WHERE 1 = NULL;


-- this is also unknown - logically it would seem to be true,
-- but unknown compared to unknown equals unknown, and it is treated as false in the filter
SELECT 1 from dual  WHERE NULL = NULL;

The WHERE clause returns a row only if the condition evaluates to TRUE, which UNKNOWN is not.
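This three-valued behaviour is not Oracle-specific. If you want to try it without a database instance, a quick sketch with Python's built-in sqlite3 module shows the same thing (SQLite lets you omit FROM dual):

```python
import sqlite3

cur = sqlite3.connect(":memory:").cursor()

# TRUE: the row comes back
print(cur.execute("SELECT 1 WHERE 1 = 1").fetchall())        # [(1,)]
# FALSE: no rows
print(cur.execute("SELECT 1 WHERE 1 = 0").fetchall())        # []
# UNKNOWN: a comparison with NULL is not TRUE, so no rows
print(cur.execute("SELECT 1 WHERE 1 = NULL").fetchall())     # []
print(cur.execute("SELECT 1 WHERE NULL = NULL").fetchall())  # []
```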



So let's demonstrate this with some code. I've commented it heavily so it can be self-explanatory.

I've also shown a surprising parser behaviour in the IN query. Well... it's a bug if you ask me. It shouldn't behave like that.



------------------------------------------------------------------
-- Prepare tables and data
------------------------------------------------------------------
CREATE TABLE t1 (id number, title VARCHAR(20), somenumCol number);

CREATE TABLE t2 (id number, t1Id number, someData VARCHAR(20));


INSERT INTO t1
SELECT 1, 'title 1', 5 from dual UNION ALL
SELECT 2, 'title 2', 5 from dual UNION ALL
SELECT 3, 'title 3', 5 from dual UNION ALL
SELECT 4, 'title 4', 5 from dual UNION ALL
SELECT null, 'title 5', 5 from dual UNION ALL
SELECT null, 'title 6', 5 from dual;

INSERT INTO t2
SELECT 1, 1, 'data 1' from dual UNION ALL
SELECT 2, 1, 'data 2' from dual UNION ALL
SELECT 3, 2, 'data 3' from dual UNION ALL
SELECT 4, 3, 'data 4' from dual UNION ALL
SELECT 5, 3, 'data 5' from dual UNION ALL
SELECT 6, 3, 'data 6' from dual UNION ALL
SELECT 7, 4, 'data 7' from dual UNION ALL
SELECT 8, null, 'data 8' from dual UNION ALL
SELECT 9, 6, 'data 9' from dual UNION ALL
SELECT 10, 6, 'data 10' from dual UNION ALL
SELECT 11, 8, 'data 11' from dual ;

------------------------------------------------------------------
-- we want to get all data in t1 that has a child row in t2
------------------------------------------------------------------

-- join gives us more rows than we need, because it joins to every child row

SELECT    t1.*
FROM    t1
        JOIN t2 ON t1.id = t2.t1Id  ;


1    title 1    5
1    title 1    5
2    title 2    5
3    title 3    5
3    title 3    5
3    title 3    5
4    title 4    5  

 
-- distinct would solve that but it's not pretty nor efficient
SELECT    DISTINCT t1.*
FROM    t1
        JOIN t2 ON t1.id = t2.t1Id  ;


4    title 4    5
2    title 2    5
3    title 3    5
1    title 1    5


-- now this is a weird part where somenumCol is a column in t1, not t2,
-- but the parser doesn't mind: the subquery resolves it as a correlated
-- reference to the outer t1, so the IN list is just each row's own somenumCol (5)

 
SELECT    t1.*
FROM    t1
WHERE    t1.id IN (SELECT somenumCol FROM t2) ;


NO Rows Returned

-- here in and exists both get correct results
SELECT    t1.*
FROM    t1
WHERE    t1.id IN (SELECT t1id FROM t2);


1    title 1    5
2    title 2    5
3    title 3    5
4    title 4    5


SELECT    t1.*
FROM    t1
WHERE    exists (SELECT * FROM t2 WHERE t1.id = t2.t1id) ;

1    title 1    5
2    title 2    5
3    title 3    5
4    title 4    5
------------------------------------------------------------------
-- we want to get all data in t1 that doesn't have a child row in t2
------------------------------------------------------------------

-- join gives us the correct result

SELECT    t1.*
FROM    t1
        LEFT JOIN t2 ON t1.id = t2.t1Id
WHERE    t2.id IS NULL  ;


    title 6    5
    title 5    5


-- NOT IN doesn't get correct results.
-- That's because of how IN treats NULLs and the three-valued logic:
-- NULL is treated as unknown, so if there's a NULL in t2.t1id,
-- NOT IN will return either NOT TRUE or NOT UNKNOWN, and neither can be TRUE.
-- When there's a NULL in the t1id column of the t2 table, the NOT IN query will always return an empty set.


SELECT    t1.*
FROM    t1
WHERE    t1.id NOT IN (SELECT t1id FROM t2)  ;


NO Rows Returned 
 
-- NOT EXISTS gets correct results

SELECT    t1.*
FROM    t1
WHERE    NOT EXISTS (SELECT * FROM t2 WHERE t1.id = t2.t1id);


    title 6    5
    title 5    5

We can see that it's best to use EXISTS, because it always behaves the way the user expects it to.
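To experiment with the NOT IN trap without an Oracle instance, here is a cut-down sketch using Python's built-in sqlite3 module (the tables are simplified versions of t1 and t2 above):

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE t1 (id INTEGER, title TEXT)")
cur.execute("CREATE TABLE t2 (id INTEGER, t1Id INTEGER)")
cur.executemany("INSERT INTO t1 VALUES (?, ?)",
                [(1, "title 1"), (2, "title 2"), (None, "title 5")])
cur.executemany("INSERT INTO t2 VALUES (?, ?)",
                [(1, 1), (2, 2), (3, None)])  # note the NULL in t1Id

# NOT IN: the NULL in t2.t1Id makes every comparison UNKNOWN -> empty set
print(cur.execute(
    "SELECT title FROM t1 WHERE id NOT IN (SELECT t1Id FROM t2)").fetchall())
# []

# NOT EXISTS: the NULL never satisfies the correlated equality -> correct answer
print(cur.execute(
    "SELECT title FROM t1 WHERE NOT EXISTS "
    "(SELECT * FROM t2 WHERE t1.id = t2.t1Id)").fetchall())
# [('title 5',)]
```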





Tuesday 18 February 2014

solution :- constraint (SYS.I_SOURCE1) violated


I tried to drop an object from Oracle using SQL*Plus and cancelled the operation midway.
When I then dropped the same object again, it raised an error saying the object does not exist.


When I afterwards created the same object with some modifications, I got the error
constraint (SYS.I_SOURCE1) violated.

solution

Find any objects that no longer exist in the user schema but still have entries in SYS.SOURCE$:

select * from SYS.SOURCE$ where SOURCE like '%OBJECTNAME%';

select OBJECT_NAME, OBJECT_ID from all_objects where owner='USER' and object_type='TYPE';

Once you confirm which objects are missing from the user schema but still present in SYS, you can delete the orphaned entries from the SYS tables.



    error : unique constraint (SYS.I_SOURCE1) violated

    delete from SYS.SOURCE$ where OBJ#=74615 and line=1;

    error : unique constraint (SYS.I_IDL_SB41) violated

    delete from SYS.IDL_SB4$ where OBJ#=74615;

    error : unique constraint (SYS.I_IDL_UB11) violated

    delete from SYS.IDL_UB1$ where OBJ#=74615;

    error : unique constraint (SYS.I_IDL_CHAR1) violated

    delete from SYS.IDL_CHAR$ where OBJ#=74615;

    error : unique constraint (SYS.I_IDL_UB21) violated

    delete from SYS.IDL_UB2$ where OBJ#=74615;





ORA-48223: Interrupt Requested - Fetch Aborted - Return Code [51111] [HM_INFO]


ORA-51108: unable to access diagnostic repository - retry command
ORA-48223: Interrupt Requested - Fetch Aborted - Return Code [51111] [HM_INFO]
ORA-06512: at "SYS.DBMS_IR", line 522

ORA-12012: error on auto execute of job "SYS"."DRA_REEVALUATE_OPEN_FAILURES"

ORA-51108: unable to access diagnostic repository - retry command




If you face these errors, you can proceed with the following steps:

SQL> exec dbms_hm.drop_schema;

PL/SQL procedure successfully completed.

SQL> exec dbms_hm.create_schema;

PL/SQL procedure successfully completed.

SQL> commit;

Commit complete.

Tuesday 21 January 2014

Purging trace and dump files with 11g ADRCI



In versions of Oracle prior to 11g, we had to use our own housekeeping scripts to purge the udump, cdump and bdump directories.

In Oracle 11g, we now have the ADR (Automatic Diagnostic Repository) which is defined by the diagnostic_dest parameter.

So how are unwanted trace and core dump files cleaned out in 11g automatically?

This is done by the MMON background process.

There are two time attributes which are used to manage the retention of information in ADR. Both attributes correspond to a number of hours after which the MMON background process purges the expired ADR data.

LONGP_POLICY (long term) defaults to 365 days and relates to things like incidents and Health Monitor warnings.

SHORTP_POLICY (short term) defaults to 30 days and relates to things like trace and core dump files.

The ADRCI command show control will show us what the current purge settings are as shown below.

adrci> show control

ADR Home = /u01/app/oracle/diag/rdbms/orcl11/orcl11:
*************************************************************************
ADRID                SHORTP_POLICY        LONGP_POLICY         LAST_MOD_TIME                            LAST_AUTOPRG_TIME                        LAST_MANUPRG_TIME                        ADRDIR_VERSION       ADRSCHM_VERSION      ADRSCHMV_SUMMARY     ADRALERT_VERSION     CREATE_TIME
-------------------- -------------------- -------------------- ---------------------------------------- ---------------------------------------- ---------------------------------------- -------------------- -------------------- -------------------- -------------------- ----------------------------------------
1095473802           720                  8760                 2010-07-07 08:46:56.405618 +08:00        2010-08-22 22:14:11.443356 +08:00                                                 1                    2                    76                   1                    2010-07-07 08:46:56.405618 +08:00

In this case it is set to the defaults of 720 hours (30 days) for the Short Term and 8760 hours (One year) for the long term category.

We can change this by using the ADRCI command ‘set control’

In this example we are changing the retention to 15 days for the Short Term policy attribute (note it is defined in Hours)

adrci> set control (SHORTP_POLICY =360)

adrci> show control

ADR Home = /u01/app/oracle/diag/rdbms/orcl11/orcl11:
*************************************************************************
ADRID                SHORTP_POLICY        LONGP_POLICY         LAST_MOD_TIME                            LAST_AUTOPRG_TIME                        LAST_MANUPRG_TIME                        ADRDIR_VERSION       ADRSCHM_VERSION      ADRSCHMV_SUMMARY     ADRALERT_VERSION     CREATE_TIME
-------------------- -------------------- -------------------- ---------------------------------------- ---------------------------------------- ---------------------------------------- -------------------- -------------------- -------------------- -------------------- ----------------------------------------
1095473802           360                  8760                 2010-08-27 09:36:09.385370 +08:00        2010-08-22 22:14:11.443356 +08:00                                                 1                    2                    76                   1                    2010-07-07 08:46:56.405618 +08:00

We can also manually purge information from the ADR using the ‘purge’ command from ADRCI (note this is defined in minutes and not hours!).

In this example we are purging all trace files older than 6 days. We see that the LAST_MANUPRG_TIME column is now populated.

adrci> purge -age 8640 -type TRACE 

adrci> show control

ADR Home = /u01/app/oracle/diag/rdbms/orcl11/orcl11:
*************************************************************************
ADRID                SHORTP_POLICY        LONGP_POLICY         LAST_MOD_TIME                            LAST_AUTOPRG_TIME                        LAST_MANUPRG_TIME                        ADRDIR_VERSION       ADRSCHM_VERSION      ADRSCHMV_SUMMARY     ADRALERT_VERSION     CREATE_TIME
-------------------- -------------------- -------------------- ---------------------------------------- ---------------------------------------- ---------------------------------------- -------------------- -------------------- -------------------- -------------------- ----------------------------------------
1095473802           360                  8760                 2010-08-27 09:36:09.385370 +08:00        2010-08-22 22:14:11.443356 +08:00        2010-08-27 09:50:07.399853 +08:00        1                    2                    76                   1                    2010-07-07 08:46:56.405618 +08:00
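A unit slip is easy here: the retention policies are set in hours, while purge -age takes minutes. A trivial Python sketch (the helper names are my own) for deriving the numbers used above:

```python
def retention_hours(days: int) -> int:
    # SHORTP_POLICY and LONGP_POLICY are expressed in hours
    return days * 24

def purge_age_minutes(days: int) -> int:
    # adrci 'purge -age' is expressed in minutes
    return days * 24 * 60

print(retention_hours(30))   # 720   -> default SHORTP_POLICY
print(retention_hours(365))  # 8760  -> default LONGP_POLICY
print(retention_hours(15))   # 360   -> set control (SHORTP_POLICY = 360)
print(purge_age_minutes(6))  # 8640  -> purge -age 8640 -type TRACE
```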


Collecting various diagnostic files, such as the database alert log and trace files, using ADRCI

Oracle's Automatic Diagnostic Repository (ADR) is a location on the file system used to collect various diagnostic files, such as the database alert log, trace files, and the new 11g Health Monitor report files. The ADR location is not limited just to the Oracle Database; it can be used as the diagnostic repository for other Oracle products. The command line tool used to 'manage' the diagnostic data is ADRCI.

Two terms that help 'understand' the adrci tool are Problem and Incident.

Problem – The critical error.

Incident – A single occurrence of the error. Diagnostic data (e.g. trace files) is collected and related to an incident.


To Start the ADRCI command line tool:
adrci

To end an adrci session use either exit or quit:
adrci>exit
adrci>quit

To see the list of available commands:
adrci>help
adrci>help <command name>
eg
adrci>help set home


Setting the Home
The structure of the ADR allows it to be the repository for many database instances, each instance having its own sub-directories within the ADR base directory. The default behaviour is for the adrci tool to 'act' on all the 'homes' within the ADR. You can also 'set' the home to limit which diagnostic data you manage.

View the current homes
adrci> show homes;

adrci> show homes;
ADR Homes:
diag/rdbms/orcl2/orcl2
diag/rdbms/orcl/orcl
diag/tnslsnr/oln/listener


Notice the multiple homes (orcl2, orcl, ...) listed within the ADR, and that the path information is displayed as 'relative' paths. You can use the relative path when setting the adrci home.

- To set a single home path:
adrci>set homepath diag/rdbms/orcl2/orcl2

or, in a shortened form, dropping the diag/rdbms prefix:

adrci>set homepath orcl2

The adrci> show homes command now indicates the current home is orcl2:

adrci> show homes
ADR Homes:
diag/rdbms/orcl2/orcl2



Viewing the Alert Log.
Use the adrci>show alert command to view the contents of the alert log.

adrci>show alert
or
adrci>show alert -tail
or
adrci>show alert -tail -f


Viewing Problems.
The adrci>show problem command will by default list the last 50 problems. There are various filter and sort options to control the output. The basic adrci>show problem syntax is

adrci>show problem [-p "predicate_string"]

where "predicate_string" is one of many predicate expressions. Use
adrci>help show problem for the full list. Below are some examples.

adrci>show problem
or
adrci> show problem -p "problem_id=2"
or
adrci>show problem -orderby lastinc_time
or
adrci>show problem -p "problem_id > 500"


Here is a sample of the adrci>show problem command:

adrci> show problem -p "problem_id=2"
ADR Home = /u01/app/diag/rdbms/orcl/orcl:
*************************************************************************
PROBLEM_ID      PROBLEM_KEY                  LAST_INCIDENT    
--------------- ---------------------------  --------------------
2                ORA 600 [kebm_mmon_main_1]  7489  
1 rows fetched


Viewing Incidents.
To view the incidents of the current ADR home use the adrci>show incident command. Two arguments that control the output are the -p “predicate string” and the -mode {BASIC | BRIEF | DETAIL} indicator. For example:

adrci>show incident
or
adrci>show incident -p “incident_id=7489”
or
adrci>show incident -p “incident_id=7489” -mode detail

The show incident -p “incident_id=7489” -mode detail will include in the output the location of the related trace files for the selected incident.


Viewing Trace Files
To display a list of trace files within the currently set home use adrci> show tracefile. Two useful methods to control the tracefile output are -i <incident_id> and the use of the % wildcard. Use -i <incident_id> to list tracefiles related to an incident. Use the % wildcard to limit the tracefiles to those that match a particular filename. For example:

adrci> show tracefile
or
adrci> show tracefile %dw00%
or
adrci> show tracefile -i 7489


Packing it (For Oracle Support or for yourself)
There are multiple ways to achieve this; here I am creating an empty 'package' and adding incident data to it. The steps are:
1. Create a package (A logical container for the diagnostic data)
2. Add diagnostic data into the package (from an incident or manually by adding trace files)
3. Generate a 'physical' package (file on the filesystem)
4. Finalize the package


Step 1.
adrci> ips create package
Created package 3 without any contents, correlation level typical

Step 2.
adrci>ips add incident 32578 package 3;
Added incident 32578 to package 3

To manually add a tracefile into the package
adrci> ips add file <path to tracefile> package 3

Step 3.
adrci>ips generate package 3 in /u01/support
Generated package 3 in file /u01/support/IPSPKG_<datestamp>_COM_1.zip, mode complete

Step 4.
adrci> ips finalize package 3


When reviewing a database incident, even if you may not forward it on to Oracle Support, it is helpful to generate the package zip file and then unzip it for your own review.


Oracle Docs: http://docs.oracle.com/cd/B28359_01/server.111/b28319/adrci.htm

Friday 10 January 2014

Optimizer statistics do not get automatically purged (object lock of WRI$_OPTSTAT_HISTHEAD_HISTORY) /// SYSAUX and SYSTEM tablespaces have been continually growing



object lock of WRI$_OPTSTAT_HISTHEAD_HISTORY



The old statistics are purged automatically at regular intervals based on the statistics history retention setting and the time of recent statistics gathering performed in the system. Retention is configurable using the ALTER_STATS_HISTORY_RETENTION procedure. The default value is 31 days.

The SYSAUX and SYSTEM tablespaces had been continually growing on databases where a high number of imports/exports and RMAN operations take place.

SELECT  occupant_name "Item",
    space_usage_kbytes/1048576 "Space Used (GB)",
    schema_name "Schema",
    move_procedure "Move Procedure"
FROM v$sysaux_occupants
ORDER BY 1

/

Item                      Space Used (GB)  Schema                    Move Procedure
------------------------  ---------------  ------------------------  --------------
SM/OPTSTAT                12.45            SYS


To resolve this I set the stats retention period to 5 days.

SQL> exec dbms_stats.alter_stats_history_retention(5);


To find out the oldest available stats you can issue the following:

SQL> select dbms_stats.get_stats_history_availability from dual;
GET_STATS_HISTORY_AVAILABILITY
---------------------------------------------------------------------------
28-SEP-13 00.00.00.000000000 +04:00

To find out a list of how many stats are gathered for each day between the current date and the oldest stats history, issue the following:

SQL> select trunc(SAVTIME),count(1) from WRI$_OPTSTAT_HISTHEAD_HISTORY group by  trunc(SAVTIME) order by 1;
TRUNC(SAV COUNT(1)
--------- ----------
28-SEP-13 2920140
29-SEP-13 843683
30-SEP-13 519834
01-OCT-13 958836
02-OCT-13 3158052
03-OCT-13 287
04-OCT-13 1253952
05-OCT-13 732361
06-OCT-13 507186
07-OCT-13 189416
08-OCT-13 2619
09-OCT-13 1491
10-OCT-13 287
11-OCT-13 126324
12-OCT-13 139556
13-OCT-13 181068
14-OCT-13 4832
15-OCT-13 258027
16-OCT-13 1152
17-OCT-13 287
18-OCT-13 27839
21 rows selected.

What has happened here is that the job run by MMON every 24 hours has checked the retention period and tried to purge all stats older than it. As the job has not completed within 5 minutes, because of the high number of stats collected on each day, it has given up and rolled back. Therefore the stats are not being purged.

As each day continues the SYSAUX table is continuing to fill up because the job fails each night and cannot purge old stats.

To resolve this we have to issue a manual purge to clear down the old statistics. This can be UNDO-tablespace intensive, so it's best to keep an eye on the amount of UNDO being generated. I suggest starting with the oldest and working forwards.

To manually purge the stats issue the following:

SQL> exec dbms_stats.purge_stats(to_date('28-SEP-13','DD-MON-YY'));

PL/SQL procedure successfully completed.

OR

SQL> exec DBMS_STATS.PURGE_STATS(SYSDATE-5);

This purges stats older than 5 days. It is best to do this in stages if there is a lot of data (sysdate-30, sysdate-25, etc.).
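If you want to script the staged purge, something along these lines can generate the statements to run (a Python sketch; the helper name and the 5-day step are my own choices):

```python
def staged_purge_statements(oldest_days: int, target_days: int, step: int = 5):
    """Work forwards from the oldest stats towards the target retention,
    one purge per stage, so each call generates a manageable amount of UNDO."""
    return ["exec DBMS_STATS.PURGE_STATS(SYSDATE-%d);" % age
            for age in range(oldest_days, target_days - 1, -step)]

for stmt in staged_purge_statements(30, 5):
    print(stmt)
# exec DBMS_STATS.PURGE_STATS(SYSDATE-30);
# exec DBMS_STATS.PURGE_STATS(SYSDATE-25);
# ...
# exec DBMS_STATS.PURGE_STATS(SYSDATE-5);
```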


Then I tried rebuilding the stats indexes and tables as they would now be fragmented.

If you are only running standard edition then you can only rebuild indexes offline. Online index rebuild is a feature of Enterprise Edition.


SELECT
sum(bytes/1024/1024) Mb,
segment_name,
segment_type
FROM
dba_segments
WHERE
tablespace_name = 'SYSAUX'
AND
segment_type in ('INDEX','TABLE')
GROUP BY
segment_name,
segment_type
ORDER BY Mb;
MB  SEGMENT_NAME                             SEGMENT_TYPE
--  ---------------------------------------  ----------------
2   WRH$_SQLTEXT                             TABLE
2   WRH$_ENQUEUE_STAT_PK                     INDEX
2   WRI$_ADV_PARAMETERS                      TABLE
2   WRH$_SEG_STAT_OBJ_PK                     INDEX
3   WRI$_ADV_PARAMETERS_PK                   INDEX
3   WRH$_SQL_PLAN_PK                         INDEX
3   WRH$_SEG_STAT_OBJ                        TABLE
3   WRH$_ENQUEUE_STAT                        TABLE
3   WRH$_SYSMETRIC_SUMMARY_INDEX             INDEX
4   WRH$_SQL_BIND_METADATA_PK                INDEX
4   WRH$_SQL_BIND_METADATA                   TABLE
6   WRH$_SYSMETRIC_SUMMARY                   TABLE
7   WRH$_SQL_PLAN                            TABLE
8   WRI$_OPTSTAT_TAB_HISTORY                 TABLE
8   I_WRI$_OPTSTAT_TAB_ST                    INDEX
9   I_WRI$_OPTSTAT_H_ST                      INDEX
9   I_WRI$_OPTSTAT_TAB_OBJ#_ST               INDEX
12  I_WRI$_OPTSTAT_H_OBJ#_ICOL#_ST           INDEX
12  I_WRI$_OPTSTAT_IND_ST                    INDEX
12  WRI$_OPTSTAT_HISTGRM_HISTORY             TABLE
14  I_WRI$_OPTSTAT_IND_OBJ#_ST               INDEX
20  WRI$_OPTSTAT_IND_HISTORY                 TABLE
360 I_WRI$_OPTSTAT_HH_ST                     INDEX
376 WRI$_OPTSTAT_HISTHEAD_HISTORY            TABLE
488 I_WRI$_OPTSTAT_HH_OBJ_ICOL_ST            INDEX

To reduce these tables and indexes you can issue the following:

Note that you cannot enable row movement and shrink the tables, as the indexes are function-based:

alter table WRI$_OPTSTAT_IND_HISTORY enable row movement;
alter table WRI$_OPTSTAT_IND_HISTORY shrink space;
*
ERROR at line 1:
ORA-10631: SHRINK clause should not be specified for this object

select 'alter table '||segment_name||'  move tablespace SYSAUX;' from dba_segments where tablespace_name = 'SYSAUX'
and segment_name like '%OPT%' and segment_type='TABLE';

Run the rebuild table commands. Note that this does cause any gather_stats jobs running at the time to fail.

alter table WRI$_OPTSTAT_TAB_HISTORY  move tablespace sysaux;
alter table WRI$_OPTSTAT_IND_HISTORY  move tablespace sysaux;
alter table WRI$_OPTSTAT_HISTHEAD_HISTORY  move tablespace sysaux;
alter table WRI$_OPTSTAT_HISTGRM_HISTORY  move tablespace sysaux;
alter table WRI$_OPTSTAT_AUX_HISTORY  move tablespace sysaux;
alter table WRI$_OPTSTAT_OPR  move tablespace sysaux;
alter table WRH$_OPTIMIZER_ENV  move tablespace sysaux;

Script to generate the index rebuild statements:

select 'alter index '||segment_name||'  rebuild online parallel (degree 14);' from dba_segments where tablespace_name = 'SYSAUX'
and segment_name like '%OPT%' and segment_type='INDEX';

Once completed, it is best to check that the indexes are usable:


select  di.index_name,di.index_type,di.status  from  dba_indexes di , dba_tables dt
where  di.tablespace_name = 'SYSAUX'
and dt.table_name = di.table_name
and di.table_name like '%OPT%'
order by 1 asc

/


Show how big the tables are, ready for a rebuild after stats have been purged:

select sum(bytes/1024/1024) Mb, segment_name,segment_type from dba_segments
where  tablespace_name = 'SYSAUX'
and segment_name like 'WRI$_OPTSTAT%'
and segment_type='TABLE'
group by segment_name,segment_type order by 1 asc

Show how big the indexes are, ready for a rebuild after stats have been purged:

select sum(bytes/1024/1024) Mb, segment_name,segment_type from dba_segments
where  tablespace_name = 'SYSAUX'
and segment_name like '%OPT%'
and segment_type='INDEX'

group by segment_name,segment_type order by 1 asc









Wednesday 1 January 2014

[SOLVED] Crontab does not run crontab scripts





[oracle@111 db_back]$ ps aux | grep cron
oracle   14930  0.0  0.0 103232   860 pts/0    S+   23:21   0:00 grep cron
[oracle@111 db_back]$ service crond restart
User has insufficient privilege.
[root@111 ~]# service crond restart
Stopping crond:                                            [FAILED]
Starting crond:                                            [  OK  ]


SHELL=/bin/bash
PATH=/sbin:/bin:/usr/sbin:/usr/bin
MAILTO=root
HOME=/
# For details see man 4 crontabs

# Example of job definition:
# .---------------- minute (0 - 59)
# |  .------------- hour (0 - 23)
# |  |  .---------- day of month (1 - 31)
# |  |  |  .------- month (1 - 12) OR jan,feb,mar,apr ...
# |  |  |  |  .---- day of week (0 - 6) (Sunday=0 or 7) OR sun,mon,tue,wed,thu,fri,sat
# |  |  |  |  |
# *  *  *  *  * user-name command to be executed
* * * * * oracle run-parts /scripts/expback
* * * * * oracle run-parts /scripts/rmanback
15 * * * * oracle run-parts /etc/cron.hourly



First, the correct place for this is probably in /etc/cron.d, not in /etc/crontab. If you really want to keep it where it is now, I'd suggest looking in /var/log/cron and making sure that it is executing at all. I'd look at `aureport -a` and see if anything is logged as an selinux denial around the time you expect this to execute, and then use `ausearch -a nnn`, where nnn is the number from the far right hand end of the aureport output line(s). Trying it in permissive mode by running `setenforce 0` would be a good test of this.