Wednesday, 30 October 2013

NOCOPY Hint to Improve Performance of OUT and IN OUT Parameters in PL/SQL Code


Oracle has two methods of passing OUT and IN OUT parameters in PL/SQL code:
  • Pass By Value : The default action is to create a temporary buffer (formal parameter), copy the data from the parameter variable (actual parameter) to that buffer and work on the temporary buffer during the lifetime of the procedure. On successful completion of the procedure, the contents of the temporary buffer are copied back into the parameter variable. In the event of an exception occurring, the copy back operation does not happen.
  • Pass By Reference : The NOCOPY hint tells the compiler to use pass by reference, so no temporary buffer is needed and no copy forward or copy back operations happen. Instead, any modifications to the parameter value are written directly to the parameter variable (actual parameter).

    CONN / AS SYSDBA

    GRANT SELECT ON v_$statname TO Ashokan;
    GRANT SELECT ON v_$mystat TO Ashokan;
    GRANT CREATE PROCEDURE TO Ashokan;



    CONN Ashokan/Ashokan

    CREATE OR REPLACE PACKAGE Ashokan_nocopy AS

    PROCEDURE in_out_time;
    PROCEDURE in_out_memory;
    PROCEDURE in_out_nocopy_time;
    PROCEDURE in_out_nocopy_memory;

    END;
    /

    CREATE OR REPLACE PACKAGE BODY Ashokan_nocopy AS

    TYPE     t_tab IS TABLE OF VARCHAR2(32767);
    g_tab    t_tab := t_tab();
    g_start  NUMBER;

    FUNCTION get_stat (p_stat IN VARCHAR2) RETURN NUMBER;
    PROCEDURE in_out (p_tab  IN OUT  t_tab);
    PROCEDURE in_out_nocopy (p_tab  IN OUT NOCOPY  t_tab);

    -- Function to return the specified statistics value.
    FUNCTION get_stat (p_stat IN VARCHAR2) RETURN NUMBER AS
      l_return  NUMBER;
    BEGIN
      SELECT ms.value
      INTO   l_return
      FROM   v$mystat ms,
             v$statname sn
      WHERE  ms.statistic# = sn.statistic#
      AND    sn.name = p_stat;
      RETURN l_return;
    END get_stat;


    -- Basic test  procedures.
    PROCEDURE in_out (p_tab  IN OUT  t_tab) IS
      l_count NUMBER;
    BEGIN
      l_count := p_tab.count;
    END in_out;

    PROCEDURE in_out_nocopy (p_tab  IN OUT NOCOPY  t_tab) IS
      l_count NUMBER;
    BEGIN
      l_count := p_tab.count;
    END in_out_nocopy;


    -- Time a single call using IN OUT.
    PROCEDURE in_out_time IS
    BEGIN
       g_start := DBMS_UTILITY.get_time;

       in_out(g_tab);

       DBMS_OUTPUT.put_line('IN OUT Time         : ' ||
                            (DBMS_UTILITY.get_time - g_start) || ' hsecs');
    END in_out_time;


    -- Check the memory used by a single call using IN OUT.
    PROCEDURE in_out_memory IS
    BEGIN
       g_start := get_stat('session pga memory');

       in_out(g_tab);

       DBMS_OUTPUT.put_line('IN OUT Memory       : ' ||
                            (get_stat('session pga memory') - g_start) || ' bytes');
    END in_out_memory;


    -- Time a single call using IN OUT NOCOPY.
    PROCEDURE in_out_nocopy_time IS
    BEGIN
       g_start := DBMS_UTILITY.get_time;

       in_out_nocopy(g_tab);

       DBMS_OUTPUT.put_line('IN OUT NOCOPY Time  : ' ||
                            (DBMS_UTILITY.get_time - g_start) || ' hsecs');
    END in_out_nocopy_time;


    -- Check the memory used by a single call using IN OUT NOCOPY.
    PROCEDURE in_out_nocopy_memory IS
    BEGIN
       g_start := get_stat('session pga memory');

       in_out_nocopy(g_tab);

       DBMS_OUTPUT.put_line('IN OUT NOCOPY Memory: ' ||
                            (get_stat('session pga memory') - g_start) || ' bytes');
    END in_out_nocopy_memory;


    -- Initialization block to populate test collection.
   
    BEGIN
      g_tab.extend;
      g_tab(1) := '1234567890123456789012345678901234567890';
      g_tab.extend(999999, 1);  -- Copy element 1 into 2..1000000
    END;
    /
   
When running the test procedures, it makes sense to reconnect every time to make sure you get a new session with a clean PGA allocation.


    CONN Ashokan/Ashokan

    SET SERVEROUTPUT ON
    EXEC Ashokan_nocopy.in_out_time;

    CONN Ashokan/Ashokan

    SET SERVEROUTPUT ON
    EXEC Ashokan_nocopy.in_out_nocopy_time;

    CONN Ashokan/Ashokan

    SET SERVEROUTPUT ON
    EXEC Ashokan_nocopy.in_out_memory;

    CONN Ashokan/Ashokan

    SET SERVEROUTPUT ON
    EXEC Ashokan_nocopy.in_out_nocopy_memory;


When we run these tests, the output looks something like this:

    Connected.
    IN OUT Time         : 126 hsecs

    PL/SQL procedure successfully completed.

    Connected.
    IN OUT NOCOPY Time  : 0 hsecs

    PL/SQL procedure successfully completed.

    Connected.
    IN OUT Memory       : 99549184 bytes

    PL/SQL procedure successfully completed.

    Connected.
    IN OUT NOCOPY Memory: 0 bytes

    PL/SQL procedure successfully completed.

    SQL>

From this we can make the following conclusions:

  • Elapsed Time : When we use an IN OUT parameter to pass this large collection, it takes over a second to perform the memory allocation, copy forward and copy back for a single call. In comparison, the time taken to make the call using the IN OUT NOCOPY parameter is not measurable in hundredths of a second, because there is no temporary buffer to manage. So using pass by reference for large parameters gives us a considerable performance boost.
  • Memory Usage : As expected, when passing a large IN OUT parameter by value, the session requires extra memory for the temporary buffer. When the parameter is defined as IN OUT NOCOPY, no extra memory is required because there is no temporary buffer. So using pass by reference for large parameters reduces the memory required by the session.

Issues

There are a number of issues associated with using the NOCOPY hint that you should be aware of before adding it to all your OUT and IN OUT parameters.

  • NOCOPY is a hint. There are a number of circumstances in which the compiler can ignore the hint, as described in the PL/SQL documentation.
  • If you are testing the contents of a parameter as a measure of successful completion of a procedure, adding NOCOPY may give unexpected results. For example, suppose I pass a NULL value and assume that if the parameter returns with a NOT NULL value the procedure has worked. This works without NOCOPY, since the copy back operation does not happen when an exception is raised. If I add NOCOPY, all changes are written directly to the actual parameter, so an exception will not prevent a NOT NULL value being returned. This may seem like a problem, but in my opinion if this affects you it is an indication of bad coding practice on your part. Failure should be indicated by raising an exception, or at worst by using a status flag, rather than by testing for values.
  • Parameter Aliasing. If you use a single variable as the actual parameter for multiple OUT and/or IN OUT parameters of a procedure, using a mix of pass by value and pass by reference, you may get unexpected results. This is because the final copy back from the pass by value parameters will wipe out any changes made through the pass by reference parameters. The situation can be compounded further if the actual parameter is a global variable that can be referenced directly from within the procedure. Although the manual describes the possible issues, once again this is an indication that you are writing terrible code, rather than a limitation of pass by reference. A small sketch of the aliasing effect follows this list.
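To make the aliasing issue concrete, here is a minimal sketch (the names are hypothetical and not part of the test package above) that passes the same variable as both a pass by value and a pass by reference parameter. The result with aliased parameters is formally undefined, which is exactly why the practice should be avoided.

    SET SERVEROUTPUT ON

    DECLARE
      g_val  NUMBER := 0;

      -- p_copy uses the default pass by value; p_nocopy asks for pass by reference.
      PROCEDURE set_both (p_copy    IN OUT         NUMBER,
                          p_nocopy  IN OUT NOCOPY  NUMBER) IS
      BEGIN
        p_nocopy := 10;  -- written straight to the actual parameter (if the hint is honoured)
        p_copy   := 20;  -- written only to the temporary buffer
      END set_both;
    BEGIN
      set_both(g_val, g_val);  -- the same variable is aliased by both parameters

      -- The copy back of p_copy overwrites whatever p_nocopy wrote,
      -- so this typically prints 20 and the NOCOPY change is silently lost.
      DBMS_OUTPUT.put_line('g_val = ' || g_val);
    END;
    /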

Friday, 25 October 2013

SYS.ANYDATA (Generic Data Type) is an object type that can store values of different data types in a single table column, or be used to declare a variable that can be set to any data type.


For example, I will create a new table:

CREATE TABLE ANYDATA_TABLE
(
   ID           NUMBER NOT NULL,
   ADT_COLUMN   SYS.ANYDATA
);


Now I will insert values of different data types into ADT_COLUMN:

insert into anydata_table values(1,sys.anydata.convertnumber(1))
/
insert into anydata_table values(2,sys.anydata.convertdate(sysdate))
/
insert into anydata_table values(3,sys.anydata.convertvarchar2('AShokan B'))
/


Note that I use anydata.convertnumber, anydata.convertdate and anydata.convertvarchar2 to specify the data type of the inserted data.

Let's now query the data from the table:

SELECT * FROM ANYDATA_TABLE;
The data in ADT_COLUMN is not directly readable in the result set; the column comes back as an opaque ANYDATA instance.

So I need to retrieve a readable value of ADT_COLUMN from the query.
We can do this with the following steps:
1- Determine the data type of the data stored in ADT_COLUMN
2- Create functions that return the equivalent data type for every value stored in ADT_COLUMN
    (I will create these functions in a package called ANYDATA_PKG)
3- Use the ANYDATA built-in ACCESS* functions to do the equivalent of ANYDATA_PKG in #2



1- Determine the data type of the data stored in ADT_COLUMN
We can use SYS.ANYDATA.GETTYPENAME (ADT_COLUMN) to return the data type of the stored data:

SELECT ANYDATA_TABLE.*, SYS.ANYDATA.GETTYPENAME (ADT_COLUMN) TYPE_NAME
  FROM ANYDATA_TABLE;


2- Create functions that return the equivalent data type for every value stored in ADT_COLUMN
I will create ANYDATA_PKG, which contains three functions:
a- GET_NUMBER to return a NUMBER value
b- GET_VARCHAR2 to return a VARCHAR2 value
c- GET_DATE to return a DATE value

 CREATE OR REPLACE PACKAGE ANYDATA_PKG 
 AS 
   GN$TMP  NUMBER; 
   FUNCTION GET_NUMBER (IN_ANYDATA IN SYS.ANYDATA) 
    RETURN NUMBER; 

   FUNCTION GET_VARCHAR2 (IN_ANYDATA IN SYS.ANYDATA) 
    RETURN VARCHAR2;

   FUNCTION GET_DATE (IN_ANYDATA IN SYS.ANYDATA) 
    RETURN DATE; 
 END;
 /


 CREATE OR REPLACE PACKAGE BODY ANYDATA_PKG 
 AS 
   FUNCTION GET_NUMBER (IN_ANYDATA IN SYS.ANYDATA)
    RETURN NUMBER
   IS
    -- Work on a local copy: the piecewise GET* member functions expect a
    -- modifiable ANYDATA instance, so avoid calling them on the IN parameter.
    L_ANYDATA        SYS.ANYDATA := IN_ANYDATA;
    LN$NUMBER_VALUE  NUMBER;
   BEGIN
    GN$TMP := L_ANYDATA.GETNUMBER (LN$NUMBER_VALUE);
    RETURN (LN$NUMBER_VALUE);
   END;

   FUNCTION GET_VARCHAR2 (IN_ANYDATA IN SYS.ANYDATA)
    RETURN VARCHAR2
   IS
    L_ANYDATA          SYS.ANYDATA := IN_ANYDATA;
    LC$VARCHAR2_VALUE  VARCHAR2 (4000);
   BEGIN
    GN$TMP := L_ANYDATA.GETVARCHAR2 (LC$VARCHAR2_VALUE);
    RETURN (LC$VARCHAR2_VALUE);
   END;

   FUNCTION GET_DATE (IN_ANYDATA IN SYS.ANYDATA)
    RETURN DATE
   IS
    L_ANYDATA      SYS.ANYDATA := IN_ANYDATA;
    LD$DATE_VALUE  DATE;
   BEGIN
    GN$TMP := L_ANYDATA.GETDATE (LD$DATE_VALUE);
    RETURN (LD$DATE_VALUE);
   END;

 END;
 /


Let's now write the query again, adding our new functions to retrieve the data in a readable form.
SELECT ANYDATA_TABLE.*,
       SYS.ANYDATA.GETTYPENAME (ADT_COLUMN) TYPE_NAME,
       CASE
          WHEN SYS.ANYDATA.GETTYPENAME (ADT_COLUMN) = 'SYS.NUMBER'
          THEN
             TO_CHAR (ANYDATA_PKG.GET_NUMBER (ADT_COLUMN))
          WHEN SYS.ANYDATA.GETTYPENAME (ADT_COLUMN) = 'SYS.VARCHAR2'
          THEN
             ANYDATA_PKG.GET_VARCHAR2 (ADT_COLUMN)
          WHEN SYS.ANYDATA.GETTYPENAME (ADT_COLUMN) = 'SYS.DATE'
          THEN
             TO_CHAR (ANYDATA_PKG.GET_DATE (ADT_COLUMN), 'DD-MM-RRRR')
       END
          READABLE_VALUE
  FROM ANYDATA_TABLE;

3- Use the ANYDATA built-in ACCESS* functions to do the equivalent of ANYDATA_PKG in #2
ANYDATA has built-in ACCESS* functions that can do what we did in ANYDATA_PKG:
a- ANYDATA.ACCESSNUMBER to return a NUMBER value, equivalent to ANYDATA_PKG.GET_NUMBER
b- ANYDATA.ACCESSVARCHAR2 to return a VARCHAR2 value, equivalent to ANYDATA_PKG.GET_VARCHAR2
c- ANYDATA.ACCESSDATE to return a DATE value, equivalent to ANYDATA_PKG.GET_DATE

Let's now use these built-in functions in our query:
SELECT ANYDATA_TABLE.*,
       SYS.ANYDATA.GETTYPENAME (ADT_COLUMN) TYPE_NAME,
       CASE
          WHEN SYS.ANYDATA.GETTYPENAME (ADT_COLUMN) = 'SYS.NUMBER'
          THEN
             TO_CHAR (ANYDATA_PKG.GET_NUMBER (ADT_COLUMN))
          WHEN SYS.ANYDATA.GETTYPENAME (ADT_COLUMN) = 'SYS.VARCHAR2'
          THEN
             ANYDATA_PKG.GET_VARCHAR2 (ADT_COLUMN)
          WHEN SYS.ANYDATA.GETTYPENAME (ADT_COLUMN) = 'SYS.DATE'
          THEN
             TO_CHAR (ANYDATA_PKG.GET_DATE (ADT_COLUMN), 'DD-MM-RRRR')
       END
          READABLE_VALUE,
       CASE
          WHEN SYS.ANYDATA.GETTYPENAME (ADT_COLUMN) = 'SYS.NUMBER'
          THEN
             TO_CHAR (SYS.ANYDATA.ACCESSNUMBER (ADT_COLUMN))
          WHEN SYS.ANYDATA.GETTYPENAME (ADT_COLUMN) = 'SYS.VARCHAR2'
          THEN
             SYS.ANYDATA.ACCESSVARCHAR2 (ADT_COLUMN)
          WHEN SYS.ANYDATA.GETTYPENAME (ADT_COLUMN) = 'SYS.DATE'
          THEN
             TO_CHAR (SYS.ANYDATA.ACCESSDATE (ADT_COLUMN), 'DD-MM-RRRR')
       END
          READABLE_VALUE2
  FROM ANYDATA_TABLE;

Conclusion

I have illustrated how to use ANYDATA with scalar data types.
You can extend the same approach to collection and object types.
If you decide to do so, use sys.anydata.convertcollection and sys.anydata.convertobject to insert collection and object values, then create your own functions to return the stored collection or object in a readable form.
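As a starting point, here is a minimal sketch of the object-type variant. The type T_PERSON and the row with ID 4 are hypothetical and not part of the example above; GETOBJECT fills an OUT parameter and returns a status code, in the same way GETNUMBER does.

CREATE TYPE T_PERSON AS OBJECT (ID NUMBER, NAME VARCHAR2(100));
/

INSERT INTO ANYDATA_TABLE VALUES (4, SYS.ANYDATA.CONVERTOBJECT(T_PERSON(1, 'Ashokan B')));

SET SERVEROUTPUT ON

DECLARE
   L_ANYDATA  SYS.ANYDATA;
   L_PERSON   T_PERSON;
   L_RESULT   PLS_INTEGER;
BEGIN
   SELECT ADT_COLUMN INTO L_ANYDATA FROM ANYDATA_TABLE WHERE ID = 4;

   -- GETOBJECT populates L_PERSON and returns a status code.
   L_RESULT := L_ANYDATA.GETOBJECT (L_PERSON);

   DBMS_OUTPUT.PUT_LINE (L_PERSON.ID || ' - ' || L_PERSON.NAME);
END;
/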

Thursday, 24 October 2013

Data Pump vs EXP/IMP: Difference or Comparison Between Data Pump Expdp/impdp and Conventional EXP/IMP



Data Pump was introduced in Oracle 10g, whereas conventional exp/imp was used for logical backups in releases prior to 10g. exp/imp still works in all versions of Oracle.

Conventional exp/imp can use the client machine's resources to take the backup, but Data Pump runs only on the server and writes its dump files to server-side directories.

XML schemas and XMLType data are supported by expdp/impdp but not by exp/imp.

Parallel execution is possible in Data Pump but is not supported in conventional exp/imp. Using the PARALLEL option, Data Pump generates multiple dump files simultaneously.
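For illustration, a parallel schema export might look something like this (a sketch only; it assumes a directory object named DP_DIR and a schema called ASHOKAN, neither of which is set up in this post):

expdp ashokan/ashokan schemas=ASHOKAN directory=DP_DIR dumpfile=ashokan_%U.dmp logfile=ashokan_exp.log parallel=4

The %U in the dump file name lets Data Pump generate multiple numbered dump files as the parallel workers need them.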

Data Pump cannot export data to sequential media such as tape.

Data Pump gives better control over the backup job than exp/imp, with its start, stop and restart capabilities.

Data Pump typically gives a 15-50% performance improvement over exp/imp.

In conventional exp, the COMPRESS option controls table extent compression, whereas in Data Pump the COMPRESSION parameter compresses the contents of the dump file.

Friday, 11 October 2013

Linux shell scripting: bad interpreter: No such file or directory



This error pops up for a couple of reasons. At the top of the script there will probably be a line that looks like this:

#!/bin/sh


This is telling Linux that this script should be interpreted using the /bin/sh program. So your first step is to verify that program exists. I tend to use:

which sh


This will typically come back with a response like this:

/bin/sh


This is telling us that the path to the sh program is in fact /bin/sh, matching the path specified at the top of the script. OK, so what gives? Well, it's possible that this script was created on an operating system that uses different line ending characters than Linux. It could have come from a Mac or PC, or the file could have been converted when it was packaged. The shebang line then effectively reads /bin/sh followed by an invisible carriage return, and no interpreter with that name exists. In this case you get the relatively misleading bad interpreter: No such file or directory message, which is really complaining about the interpreter path, although you don't get any indication of that.
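One quick way to confirm the diagnosis (substitute your own script name for myscript.sh, which is just a placeholder here) is to ask file what it thinks of the script, or to dump the first line and look for a trailing \r:

file myscript.sh

head -n 1 myscript.sh | od -c

If file reports "with CRLF line terminators", or od shows a \r before the \n, you have found the culprit.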

So, how to fix? Read on.
There are various ways to fix the problem, but I find one of the simplest is to use vi, which is standard on most Unix systems and on Linux comes in the form of the vim package. Load the script up in vim by typing vi filename

vi is a text based dinosaur in the day of wysiwyg editors, so if you don't know your way around, make sure you follow these steps carefully.

Once the file is loaded type:

:set fileformat=unix


And hit Enter/Return.

You won't notice anything, but the file has already been fixed. Now all you need to do is save and exit.

:wq!


Again Return, and you should be back in your shell. Run the shell script, and if all goes well, it should now execute properly, and without the dreaded bad interpreter: No such file or directory message.

Thursday, 3 October 2013

Run statspack snapshot every 30 minutes



For example, to run a Statspack snapshot every 30 minutes, submit a database job with DBMS_JOB:

DECLARE
    jobno number;
    instno number;
BEGIN
    SELECT instance_number INTO instno FROM v$instance;

    -- Schedule statspack.snap (snapshot level 7) to run on every half-hour boundary.
    -- The NEXT_DATE expression rounds the current time up to the next 30-minute mark,
    -- and the INTERVAL string re-evaluates the same expression after each run.
    DBMS_JOB.SUBMIT(jobno, 'statspack.snap(i_snap_level => 7);',
    trunc(sysdate,'HH24')+((floor(to_number(to_char(sysdate,'MI'))/30)+1)*30)/(24*60),
    'trunc(sysdate,''HH24'')+((floor(to_number(to_char(sysdate,''MI''))/30)+1)*30)/(24*60)', TRUE, instno);
    COMMIT;
END;
/
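Afterwards you can confirm that the job was registered and, when it is no longer needed, remove it again. The job number 42 below is just a placeholder for whatever number user_jobs reports:

SELECT job, what, next_date, interval FROM user_jobs;

EXEC DBMS_JOB.REMOVE(42);
COMMIT;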

Tuesday, 3 September 2013

ORA-00060: deadlock detected while waiting for resource / Performance issue

ORA-00060: deadlock detected while waiting for resource
Deadlocks in Oracle between logically disjoint transactions usually involve unindexed foreign keys.
There are two issues associated with unindexed foreign keys.
The first is that a lock on the whole child table will result if you update the parent record's primary key (very unusual) or if you delete the parent record and the child's foreign key is not indexed.
Locks in Oracle are managed at the row level, so concurrent disjoint transactions should not normally interfere with one another. Unindexed foreign keys are an exception, since they can result in a complete TABLE LOCK.
I was testing how many ORA-00060 errors it takes before a recursive transaction aborts: a transaction that calls itself recursively when it fails, while it is deadlocked with another transaction that also calls itself recursively when it fails. In other words, is there a limit at which two such deadlocked procedures give up, and is that limit driven by the recursive calls or by the deadlocks?

Below is a script that finds unindexed foreign keys, across all users, that could cause blocking locks.
select owner, table_name, constraint_name,
       cname1 || nvl2(cname2,','||cname2,null) ||
       nvl2(cname3,','||cname3,null) || nvl2(cname4,','||cname4,null) ||
       nvl2(cname5,','||cname5,null) || nvl2(cname6,','||cname6,null) ||
       nvl2(cname7,','||cname7,null) || nvl2(cname8,','||cname8,null)
              columns
    from ( select b.owner,
                  b.table_name,
                  b.constraint_name,
                  max(decode( position, 1, column_name, null )) cname1,
                  max(decode( position, 2, column_name, null )) cname2,
                  max(decode( position, 3, column_name, null )) cname3,
                  max(decode( position, 4, column_name, null )) cname4,
                  max(decode( position, 5, column_name, null )) cname5,
                  max(decode( position, 6, column_name, null )) cname6,
                  max(decode( position, 7, column_name, null )) cname7,
                  max(decode( position, 8, column_name, null )) cname8,
                  count(*) col_cnt
             from (select substr(table_name,1,30) table_name,
                          substr(constraint_name,1,30) constraint_name,
                          substr(column_name,1,30) column_name,
                          position,
                          owner
                     from all_cons_columns ) a,
                  all_constraints b
            where a.constraint_name = b.constraint_name
              and b.owner = a.owner
              and b.constraint_type = 'R'
              and b.status = 'ENABLED'
            group by b.owner, b.table_name, b.constraint_name
         ) cons
   where col_cnt > ALL
           ( select count(*)
               from all_ind_columns i
              where i.table_owner = cons.owner
                and i.table_name = cons.table_name
                and i.column_name in (cname1, cname2, cname3, cname4,
                                      cname5, cname6, cname7, cname8 )
                and i.column_position <= cons.col_cnt
              group by i.index_name
           )
   order by 1, 2, 3;

If the query returns any rows, creating indexes on those foreign key columns should fix the deadlock issue, and it can resolve related performance problems as well; a sketch of the fix follows.
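For example, if the query reported a child table EMP with an unindexed foreign key on DEPTNO (hypothetical names, not output from a real run), the fix is simply to index that column:

CREATE INDEX emp_deptno_fk_i ON emp (deptno);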

Friday, 23 August 2013

Oracle 11G : Audit Commit Delay exceeded

AUD: Audit Commit Delay exceeded, written a copy to OS Audit Trail




You see the following messages appear in your alert.log:
AUD: Audit Commit Delay exceeded, written a copy to OS Audit Trail



You have applied the Audit Cleanup Patch, or any superseding patch, as referenced in note 731908.1.


This is a change that was introduced in the audit functionality to support Audit Vault. These messages can appear in your alert.log occasionally even if the database is not an Audit Vault source. The reason is as follows:

The database guarantees that the transaction writing the audit record will commit within a pre-defined maximum interval, called the Audit Commit Delay interval. If the transaction takes longer than the Audit Commit Delay to commit the audit record, the database writes the same record to the OS audit trail. This is a fallback mechanism to make sure there is always written evidence of an audited action within the defined timeframe; as such, it is a feature that enhances audit security. The commit delay is fixed at 5 seconds and cannot be tuned.

The problem is happening because the audit functionality was not able to commit an audit record within 5 seconds, which means that at the time the message was written to the alert.log your database was under stress. The cause of the problem is not the auditing layer; the messages in the alert.log only show that auditing is suffering from generic performance problems in the environment, which may affect other components as well.

These messages are purely informational and no direct action can or should be taken to avoid them. They are most likely caused by a resource problem on your database. If they are seen only occasionally you can ignore them, but if they appear regularly you most likely have a resource problem and will be seeing other symptoms of it as well. Analyse and solve the generic performance problem first, and these messages will go away with it.

The fix for unpublished bug 8642202 changes the behaviour as follows:

The Audit Commit Delay is increased to 15 seconds and is enforced only when AUD$ is initialized for cleanup.
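If you want to check whether your standard audit trail (AUD$) has been initialized for cleanup, which is what the changed behaviour keys on, a check along these lines with DBMS_AUDIT_MGMT should work on 11g; treat it as a sketch rather than an official diagnostic:

SET SERVEROUTPUT ON
BEGIN
  IF DBMS_AUDIT_MGMT.IS_CLEANUP_INITIALIZED(DBMS_AUDIT_MGMT.AUDIT_TRAIL_AUD_STD) THEN
    DBMS_OUTPUT.PUT_LINE('AUD$ is initialized for cleanup');
  ELSE
    DBMS_OUTPUT.PUT_LINE('AUD$ is not initialized for cleanup');
  END IF;
END;
/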


Bug 8642202 is fixed in the 10.2.0.5 patch set, PSU 11.2.0.1.1, the 11.2.0.2 patch set and later releases.

Merge patches that include this fix:

11.1.0.7: 
Patch 9821987