Sunday, 17 July 2016

High CPU Utilization from Scheduled ORA$AT_SA_SPC_SY Jobs (12.1.0.2)

Oracle has raised an alert in the alert.log and created a trace file as well, for a failed DBMS_SCHEDULER job with a strange name which doesn’t appear in DBA_SCHEDULER_JOBS or DBA_SCHEDULER_PROGRAMS – what’s going on?
An extract from the alert log and/or the trace file mentioned in the alert log shows something like
*** SERVICE NAME:(SYS.USERS) ...
*** MODULE NAME:(DBMS_SCHEDULER) ...
*** ACTION NAME:(ORA$AT_SA_SPC_SY_nnn) ...
Where ‘nnn’ in the action name is a number.
No matter how hard you scan the DBA_SCHEDULER_% views, you will not find anything with this name. What is actually failing?
From 11.1.0.6 onwards, Oracle stopped listing these internal jobs in DBA_SCHEDULER_JOBS, as it did in 10g, and instead lists them in the DBA_AUTOTASK_% views – but not by their actual names, so don’t go looking for a TASK_NAME that matches the above action name; you won’t find one.
There are three different autotask types:
  • Space advisor
  • Optimiser stats collection
  • SQL tuning advisor
The tasks that run for these autotask ‘clients’ are named as follows:
  • ORA$AT_SA_SPC_SY_nnn for Space advisor tasks
  • ORA$AT_OS_OPT_SY_nnn for Optimiser stats collection tasks
  • ORA$AT_SQ_SQL_SW_nnn for SQL tuning advisor tasks
See MOS notes 756734.1, 755838.1, 466920.1 and Bug 12343947 for details. The first of these has the most relevant and useful information.
UPDATE: My original failing autotask has been diagnosed by Oracle Support as bug 13840704, for which a patch exists for 11.2.0.2 and 11.2.0.3.
Oracle document id 13840704.8 has the details; the bug involves LOBs based on a user-defined type – in this case, Spatial data in an MDSYS.SDO_GEOMETRY column.
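If you want to check whether your own database contains candidate objects for this bug, a query along the following lines (not from the MOS notes, just a convenience sketch) lists Spatial columns; a similar query against DBA_TAB_COLUMNS will find other columns based on user-defined types:
select owner, table_name, column_name, data_type
from dba_tab_columns
where data_type_owner = 'MDSYS'
and data_type = 'SDO_GEOMETRY'
order by owner, table_name;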
The DBA_AUTOTASK_CLIENT view won’t show you anything about a specific task with the above names, but it will show you details of the overall ‘clients’. There are three:
select client_name, status
from dba_autotask_client;

CLIENT_NAME                     STATUS
------------------------------- --------
auto optimizer stats collection ENABLED
auto space advisor              ENABLED
sql tuning advisor              DISABLED
I can see from the task name in the alert log and trace file that my failing task is a space advisor one, so, by looking into the DBA_AUTOTASK_JOB_HISTORY view, I can see what has been happening:
select distinct client_name, window_name, job_status, job_info
from dba_autotask_job_history
where job_status <> 'SUCCEEDED'
order by 1,2;

CLIENT_NAME        WINDOW_NAME     JOB_STATUS JOB_INFO
------------------ --------------- ---------- -------------------------------------------
auto space advisor SATURDAY WINDOW FAILED     ORA-6502: PL/SQL: numeric or value error...
auto space advisor SUNDAY WINDOW   FAILED     ORA-6502: PL/SQL: numeric or value error...
So, in my own example, the auto space advisor appears to have failed on Saturday and Sunday. Given that this is an internal task, and there is nothing I can do locally to get to the bottom of the ORA-06502 error, I need to log an SR with Oracle on the matter. However, as I don’t want my fellow DBAs to be paged in the wee small hours for a known problem, I have disabled the space advisor task as follows:
BEGIN
  dbms_auto_task_admin.disable(
    client_name => 'auto space advisor',
    operation   => NULL,
    window_name => NULL);
END;
/

PL/SQL procedure successfully completed
Checking DBA_AUTOTASK_CLIENT again, shows that it is indeed disabled:
select client_name, status
from dba_autotask_client
where client_name = 'auto space advisor';

CLIENT_NAME                     STATUS
------------------------------- --------
auto space advisor              DISABLED
Enabling it again after Oracle Support have helped resolve the problem is as simple as calling dbms_auto_task_admin.enable with exactly the same parameters as for the disable call:
BEGIN
  dbms_auto_task_admin.enable(
    client_name => 'auto space advisor',
    operation   => NULL,
    window_name => NULL);
END;
/

PL/SQL procedure successfully completed
When enabling and/or disabling auto tasks, you must use the CLIENT_NAME as found in the DBA_AUTOTASK_CLIENT view.
The full list of DBA_AUTOTASK_% views is:
  • DBA_AUTOTASK_CLIENT
  • DBA_AUTOTASK_CLIENT_HISTORY
  • DBA_AUTOTASK_CLIENT_JOB
  • DBA_AUTOTASK_JOB_HISTORY
  • DBA_AUTOTASK_OPERATION
  • DBA_AUTOTASK_SCHEDULE
  • DBA_AUTOTASK_TASK
  • DBA_AUTOTASK_WINDOW_CLIENTS
  • DBA_AUTOTASK_WINDOW_HISTORY
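If you want a window-by-window summary rather than the individual job failures shown above, DBA_AUTOTASK_CLIENT_HISTORY is handy. A quick sketch (column names as of 11gR2/12c):
select client_name, window_name, window_start_time,
       jobs_created, jobs_started, jobs_completed
from dba_autotask_client_history
where client_name = 'auto space advisor'
order by window_start_time;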

Reducing time on txkWfClone.sh while running 12.2.4 Autoconfig in appsTier

Preface

Running AutoConfig across multiple tiers can sometimes take a long time, and more often than not the culprit is txkWfClone.sh.

The real deal

During cloning, execution of txkWfClone.sh is required only for the first AutoConfig run; it is not needed for subsequent AutoConfig runs.

How can you be smart?

There are two changes you can make to take advantage of this.

1). Change the phase of the script txkWfClone.sh from INSTE8_SETUP to INSTALL_SETUP in $FND_TOP/admin/driver/fndtmpl.drv:

Open the file $FND_TOP/admin/driver/fndtmpl.drv and make the following changes:

Before:
fnd admin/template txkWfClone.sh    INSTE8_SETUP     
<s_com>/admin/install/<s_contextname> txkWfClone.sh 700

After:
fnd admin/template txkWfClone.sh    INSTALL_SETUP     
<s_com>/admin/install/<s_contextname> txkWfClone.sh 700

2). Replace %s_webhost% with %s_webentryhost%, and %s_domainname% with %s_webentrydomain%, in the template $FND_TOP/admin/template/txkWfClone.sql.

How do the savings add up?

Changing the AutoConfig phase of this script from INSTE8_SETUP to INSTALL_SETUP prevents it from being executed unnecessarily, resulting in faster subsequent AutoConfig runs.

The role of txkWfClone.sh is to create workflow-related references in the database tier of the target system during the cloning process.

During its execution, txkWfClone.sh compares the source system and target system URLs. In a scenario where we are adding a middle tier to existing middle tier(s) that share one common database tier, this results in the redundant operation of changing/creating workflow references for the target middle tier.

For the comparison, txkWfClone constructs a URL for the target system using the context variables as follows:

'%s_webentryurlprotocol%'||'://'||'%s_webhost%'||'.'||'%s_domainname%'||':'||'%s_active_webport%'||'/pls/'||'%s_dbSid%'
Because the target system URL is constructed using the s_webhost context variable, the script creates a distinct URL for each middle tier. This is unnecessary, because they all use the same database instance and hence the workflow references are the same.

This adds a significant amount of time to every AutoConfig run during cloning.

This can be avoided by replacing %s_webhost% with %s_webentryhost% (and %s_domainname% with %s_webentrydomain%) in the template txkWfClone.sql.

This way, the URL created for each middle tier is the same, as it should be for a system with multiple apps tiers and one database instance.

Conclusion

Saving time during cloning is always welcome, and txkWfClone.sh can easily be adjusted to avoid duplicate effort. Simple fixes like this are worth absorbing into regular cloning practice.

Steps to reorganize Workflow tables and their corresponding indexes

1. The concurrent request “Purge Obsolete Workflow Runtime Data” (request id 3416008) completed successfully
2. Stopped the application services on both nodes
3. The following tables were reorganized within their tablespace APPS_TS_TX_DATA
Command to move the tables:
select 'alter table '||owner||'.'||table_name||' move tablespace '||tablespace_name||';'
from dba_tables
where table_name in ('WF_ITEM_ACTIVITY_STATUSES', 'WF_ITEM_ATTRIBUTE_VALUES', 'WF_NOTIFICATION_ATTRIBUTES');
a. WF_ITEM_ATTRIBUTE_VALUES
b. WF_ITEM_ACTIVITY_STATUSES
c. WF_NOTIFICATION_ATTRIBUTES
4. The following corresponding indexes were rebuilt in their tablespace APPS_TS_TX_IDX:
a .WF_ITEM_ACTIVITY_STATUSES_N1
b. WF_ITEM_ACTIVITY_STATUSES_N3
c. WF_ITEM_ACTIVITY_STATUSES_PK
d. WF_ITEM_ACTIVITY_STATUSES_N2
e. WF_ITEM_ACTIVITY_STATUSES_N4
f. WF_ITEM_ATTRIBUTE_VALUES_PK
g. WF_NOTIFICATIONS_ATTR_PK
Command to rebuild the indexes in the reorganized tables:
select 'alter index '||owner||'.'||index_name||' rebuild tablespace '||tablespace_name||';'
from dba_indexes
where index_type = 'NORMAL'
and table_name in ('WF_ITEM_ACTIVITY_STATUSES', 'WF_ITEM_ATTRIBUTE_VALUES', 'WF_NOTIFICATION_ATTRIBUTES');
5. Checked the status of the indexes using the following command:
select index_name, status
from dba_indexes
where index_type = 'NORMAL'
and table_name in ('WF_ITEM_ACTIVITY_STATUSES', 'WF_ITEM_ATTRIBUTE_VALUES', 'WF_NOTIFICATION_ATTRIBUTES');
6. Started up the application services on both nodes
7. The concurrent program “Gather Table Statistics” was run to analyze the reorganized tables (an equivalent FND_STATS call is sketched after this list)


8. Monitored the performance afterwards.
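If you prefer to gather the statistics from SQL*Plus rather than submitting the “Gather Table Statistics” concurrent program, an FND_STATS call along the following lines should be equivalent (the 10 percent sample size is an assumption; adjust it to your own standard). Run as the APPS user:
BEGIN
  fnd_stats.gather_table_stats(ownname => 'APPLSYS', tabname => 'WF_ITEM_ACTIVITY_STATUSES', percent => 10);
  fnd_stats.gather_table_stats(ownname => 'APPLSYS', tabname => 'WF_ITEM_ATTRIBUTE_VALUES', percent => 10);
  fnd_stats.gather_table_stats(ownname => 'APPLSYS', tabname => 'WF_NOTIFICATION_ATTRIBUTES', percent => 10);
END;
/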

SMTP Telnet Test To Troubleshoot Java Mailer For Sending Email Notifications

How to verify the SMTP server from the E-Business Suite application server.



SOLUTION

It is very important to perform all the steps mentioned below; they ensure that the telnet SMTP test is run from the right node and with the correct values.

1) Identify the concurrent tier node where the mailer runs by running the script below:

select target_node
from fnd_concurrent_queues where concurrent_queue_name like 'WFMLRSVC%';
It will return for example:

TARGET_NODE                    
------------------------------
ebiz1                    

In this example, ebiz1 is the node where the Java mailer runs.

2) Gather the other parameter values necessary for the SMTP telnet test:

In addition to the mailer node, you will also need to know which node hosts the SMTP server (the mailer "outbound server" parameter) and what reply-to address is set up for the Java mailer (the mailer "reply to" parameter).

To get these values run the following:

SELECT b.component_name,
       c.parameter_name,
       a.parameter_value
FROM fnd_svc_comp_param_vals a,
     fnd_svc_components b,
     fnd_svc_comp_params_b c
WHERE b.component_id = a.component_id
     AND b.component_type = c.component_type
     AND c.parameter_id = a.parameter_id
     AND c.encrypted_flag = 'N'
     AND b.component_name like '%Mailer%'
     AND c.parameter_name in ('OUTBOUND_SERVER', 'REPLYTO')
ORDER BY c.parameter_name;

It will return for example:

COMPONENT_NAME                  PARAMETER_NAME                 PARAMETER_VALUE        
------------------------------- ------------------------------ -----------------------
Workflow Notification Mailer    OUTBOUND_SERVER                mitini1                          
Workflow Notification Mailer    REPLYTO                        jmailer1@dummy_domain.com

In this example, the outbound server is the mitini1 node and the reply-to address is set to jmailer1@dummy_domain.com.



3) Perform the SMTP telnet test as follows:

3.1) Log on to the node where mailer runs (to identify it, please refer to step 1)

This is mandatory; the SMTP telnet test is only meaningful when it is performed from the concurrent tier where the mailer runs.

In our example, you should log on to the ebiz1 node.

3.2) From mailer node, issue the following commands one by one:

telnet [outbound server] 25
EHLO [mailer node]
MAIL FROM: [reply_to address]
RCPT TO: [my_test_email_address]
DATA
Subject: Test message

Test message body
.
quit 
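For reference, a successful exchange looks roughly like the following (server responses are the numbered lines; the exact wording varies from one SMTP server to another, and what matters is seeing 220/250/354 codes rather than errors or a refused connection):

telnet mitini1 25
220 mitini1 ESMTP ...
EHLO ebiz1
250 ...
MAIL FROM: jmailer1@dummy_domain.com
250 OK
RCPT TO: [my_test_email_address]
250 OK
DATA
354 End data with <CR><LF>.<CR><LF>
Subject: Test message

Test message body
.
250 OK: queued
quit
221 Bye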

Uncaught Throwable in processSockets

EBS Version=12.2.4
DB Version=12.1.0.2

Intermittent Failure of OACore Services

Found the following oacore managed server errors after upgrading from 12.1.3 to 12.2.4:

Error:

<BEA-000421> <Uncaught Throwable in processSockets
java.io.IOException: DP_POLL failed for /dev/poll - [Operation not applicable (89)].
java.io.IOException: DP_POLL failed for /dev/poll - [Operation not applicable (89)]
        at weblogic.socket.DevPollSocketMuxer.doPoll(Native Method)
        at weblogic.socket.DevPollSocketMuxer.processSockets(DevPollSocketMuxer.java:95)
        at weblogic.socket.SocketReaderRequest.run(SocketReaderRequest.java:29)
        at weblogic.socket.SocketReaderRequest.execute(SocketReaderRequest.java:42)
        at weblogic.kernel.ExecuteThread.execute(ExecuteThread.java:145)

Solution:
To avoid this issue, /dev/poll monitoring of polled file descriptors needs to be disabled. Please review the sub-section "Customizing Managed Server Configuration via WebLogic Server Administration Console" in My Oracle Support Knowledge Document 1905593.1, "Managing Configuration of Oracle HTTP Server and Web Application Services in Oracle E-Business Suite Release 12.2", to disable /dev/poll monitoring, and make sure the following actions are performed:

  • Choose 'oacore' as the managed server which needs the configuration change

  • Add the new server start-up argument:
    -Dweblogic.DevPollDisabled=true
  • Save and Activate the configuration change

Sunday, 17 April 2016

Oracle networking issues

This simple two part procedure will help to diagnose and fix the most common sqlnet and tnsnames configuration problems.

  • 1. Test communication between the client and the listener

    We will use tnsping to complete this step. It's a common misconception that tnsping tests connectivity to the instance. In actual fact, it only tests connectivity to the listener.

    Here, we will use it to prove that a) the tnsnames.ora has the correct hostname and port, and b) that there is a listener listening on the specified host and port. Run tnsping:

    tnsping <your_tns_entry_name>
    If it is successful you will see something like this:

    [oradev@testerp] $ tnsping dev
    
    Used TNSNAMES adapter to resolve the alias
    Attempting to contact (DESCRIPTION = (ADDRESS_LIST = (ADDRESS =
    (PROTOCOL = TCP) (HOST = dev)(PORT = 1521))) (CONNECT_DATA = 
    (SERVER = DEDICATED) (SERVICE_NAME = scr9)))
    OK (40 msec)

    If not, here are some common errors, and some suggestions for fixing them:

    TNS-03505: Failed to resolve name
    The specified database name was not found in the tnsnames.ora, onames or ldap. This means that tnsping hasn't even got as far as trying to make contact with a server - it simply can't find any record of the database that you are trying to tnsping. Make sure that you've spelled the database name correctly, and that it has an entry in the tnsnames.ora.

    If you have a sqlnet.ora, look for the setting NAMES.DEFAULT_DOMAIN. If it is set, then all entries in your tnsnames.ora must have a matching domain suffix.

    TNS-12545: Connect failed because target host or object does not exist
    The host specified in the tnsnames is not contactable. Verify that you have spelled the host name correctly. If you have, try pinging the host with 'ping <hostname>'. If ping returns 'unknown host', speak to your network admin. It might be that you have a DNS issue (you could try using the IP address if you have it to hand). If you get 'host unreachable', again speak to your network person, the problem could be down to a routing or firewall issue.

    TNS-12541: TNS:no listener
    The hostname was valid but the listener was not contactable. Things to check are that the tnsnames has the correct port (and hostname) specified, and that the listener is running on the server and using the correct port.

    tnsping hangs for a long time
    I've seen this happen in situations where there is something listening on the host/port - but it isn't an oracle listener. Make sure you have specified the correct port, and that your listener is running. If all looks ok, try doing a 'netstat -ap | grep 1521' (or whatever port you are using) to find out what program is listening on that port.
  • 2. Attempt a connection to the instance

    Once you have proven that the tnsnames is talking to the listener properly, the next step is to attempt a full connection to the instance. To do this we'll use sqlplus:

    sqlplus [username]/[password]@<your_tns_entry_name>
    If it works you will successfully log into the instance. If not, here are some common errors:

    ORA-01017: invalid username/password; logon denied
    This is actually a good error in these circumstances! Even though you didn't use the correct username or password, you must have successfully made contact with the instance.

    ORA-12505: TNS:listener does not currently know of SID given in connect
    Either the SID is misspelled in the tnsnames, or the listener isn't listening for it. Check the tnsnames.ora first. If it looks ok, do a 'lsnrctl status' on your server, to see what databases the listener is listening for.

    ORA-12514: TNS:listener could not resolve SERVICE_NAME given in connect
    This is quite a common error and it means that, while the listener was contactable, the database (or rather the service) specified in the tnsnames wasn't one of the things that it was listening out for.

    Begin by looking at your tnsnames.ora. In it, you will see a setting like SERVICE_NAME=<name>.

    If you are running a single instance database (ie. not RAC), and you are sure that you are not using services, it might be easier to change SERVICE_NAME= to SID= in your tnsnames. Using service names is the more modern way of doing things, and it does have benefits, but SID still works perfectly well (for now anyway).

    If you would prefer to continue using service names, you must first check that you have not misspelled the service name in your tnsnames. If it looks alright, next check that the listener is listening for the service. Do this by running 'lsnrctl services' on your server. If there isn't an entry for your service, you need to make sure that the service_names parameter is set correctly on the database.
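    On the database server itself, two quick checks are worth knowing when chasing ORA-12514: 'show parameter service_names' confirms which service names the instance offers, and ALTER SYSTEM REGISTER asks the instance to re-register its services with the listener immediately. A minimal sketch, run as a suitably privileged user:

    SQL> show parameter service_names
    SQL> alter system register;

    Then re-run 'lsnrctl services' on the server to confirm that the service has appeared.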

How To recreate the OraInventory if it gets corrupted or removed?

In cases where the OraInventory is missing or otherwise corrupt, recreate the oraInventory directory on UNIX systems using the following steps. In a normal installation, there is a Global Inventory (OraInventory) and a Local Inventory ($ORACLE_HOME/inventory).
  1. Locate the oraInst.loc file, which may be in different locations, depending on your system:

          /var/opt/oracle/oraInst.loc file
          or
          /etc/oraInst.loc
  2. Back up and modify the oraInst.loc file:

          cp /var/opt/oracle/oraInst.loc /var/opt/oracle/oraInst.loc.bak
          mkdir /u01/oracle/oraInventory

          ---file contents---
          inventory_loc=/u01/oracle/oraInventory
          inst_group=oinstall
          ---file contents---
    Important:
    This example uses a typical directory (under $ORACLE_BASE) and a typical UNIX group that installed the Oracle products. Ensure that the correct values are used for your system.

    The oraInventory directory is usually a directory under the $ORACLE_BASE. For example, if the $ORACLE_HOME is "/u01/oracle/product/10g", then the oraInventory could be "/u01/oracle/oraInventory".
  3. Change the permissions to be appropriate, (using your directory location):

          chmod 644 /var/opt/oracle/oraInst.loc
  4. For consistency, copy the file to Oracle home directory, (using your directory location):

    cp $ORACLE_HOME/oraInst.loc $ORACLE_HOME/oraInst.loc.bak
    cp /var/opt/oracle/oraInst.loc $ORACLE_HOME/oraInst.loc
  5. Run Oracle Universal Installer from your Oracle home as below, (using your site specific directory location and Oracle home name):

    cd $ORACLE_HOME/oui/bin
    ./runInstaller -silent -attachHome ORACLE_HOME="/u01/oracle/product/10.2" ORACLE_HOME_NAME="Ora10gHome"
  6. Check the inventory output is correct for your Oracle home:

          $ORACLE_HOME/OPatch/opatch lsinventory -detail
  7. If the table at the beginning of the output is showing the proper directories, and the Oracle home components are properly reflected in the details, then the Global Inventory has been successfully created from the Local Inventory. At this time, you may patch and maintain your Oracle home as normal.

How To Recreate the appsutil in AppsTier

1. On the Application Tier (as the APPLMGR user)

2. Log in to the APPL_TOP environment (source the environment file)

3. Create appsutil.zip file

4. Run: perl <AD_TOP>/bin/admkappsutil.pl

5. This will create appsutil.zip in $APPL_TOP/admin/out.

6. On the Database Tier (as the ORACLE user):

7. Copy or FTP the appsutil.zip file to the RDBMS ORACLE_HOME

8. cd <RDBMS ORACLE_HOME>

9. unzip -o appsutil.zip

10. Generate your database context file by following the steps below (on UNIX):

cd <RDBMS ORACLE_HOME>
. <CONTEXT_NAME>.env
cd <RDBMS ORACLE_HOME>/appsutil/bin
perl adbldxml.pl tier=db appsuser=<APPS user> appspasswd=<APPS password>

11. On UNIX, cd <RDBMS ORACLE_HOME>/appsutil/bin and run:

adconfig.sh contextfile=<context file> appspass=<APPS password>

12. After running adconfig.sh, the scripts directory <RDBMS ORACLE_HOME>/appsutil/scripts/<CONTEXT_NAME> is created under the appsutil directory.

TimecardHome.jsp and TimecardLogout.jsp generate leaked connections in 12.1.3

Connection leaks are being generated by Timecard; the AOL/J JDBC Pool Status shows many leaked connections like:

oracle.apps.fnd.security.LeakedConnectionException 3, 0x73ebd7, 2011-04-18+06:57:47.199+0100, Thread[AJPRequestHandler-ApplicationServerThread-5,5,RequestThreadGroup]
at oracle.apps.fnd.security.CallStack.getInstance(CallStack.java:109)
at oracle.apps.fnd.security.DBConnObj.setBorrowingThread(DBConnObj.java:981)
at oracle.apps.fnd.security.DBConnObj.setBorrowingThread(DBConnObj.java:964)
at oracle.apps.fnd.common.Pool.costBasedSelection(Pool.java:1885)
at oracle.apps.fnd.common.Pool.selectObject(Pool.java:1686)
at oracle.apps.fnd.common.Pool.borrowObject(Pool.java:950)
at oracle.apps.fnd.security.DBConnObjPool.borrowObject(DBConnObjPool.java:583)
at oracle.apps.fnd.security.AppsConnectionManager.borrowConnection(AppsConnectionManager.java:297)
at oracle.apps.fnd.common.Context.borrowConnection(Context.java:1719)
at oracle.apps.fnd.common.AppsContext.getPrivateConnectionFinal(AppsContext.java:2305)
at oracle.apps.fnd.common.AppsContext.getPrivateConnection(AppsContext.java:2242)
at oracle.apps.fnd.common.AppsContext.getJDBCConnection(AppsContext.java:2100)
at oracle.apps.fnd.common.AppsContext.getJDBCConnection(AppsContext.java:1910)
at oracle.apps.fnd.common.AppsContext.getJDBCConnection(AppsContext.java:1754)
at oracle.apps.fnd.common.AppsContext.getJDBCConnection(AppsContext.java:1767)
at oracle.apps.fnd.common.Context.getJDBCConnection(Context.java:1453)
at _TimecardLogout._jspService(_TimecardLogout.java:214)
at com.orionserver.http.OrionHttpJspPage.service(OrionHttpJspPage.java:59)
at oracle.jsp.runtimev2.JspPageTable.service(JspPageTable.java:335)
at oracle.jsp.runtimev2.JspServlet.internalService(JspServlet.java:478)
at oracle.jsp.runtimev2.JspServlet.service(JspServlet.java:401)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:856)
at com.evermind.server.http.ResourceFilterChain.doFilter(ResourceFilterChain.java:64)
at oracle.apps.jtf.base.session.ReleaseResFilter.doFilter(ReleaseResFilter.java:26)
at com.evermind.server.http.EvermindFilterChain.doFilter(EvermindFilterChain.java:15)
at oracle.apps.fnd.security.AppsServletFilter.doFilter(AppsServletFilter.java:318)
at com.evermind.server.http.ServletRequestDispatcher.invoke(ServletRequestDispatcher.java:610)
at com.evermind.server.http.ServletRequestDispatcher.forwardInternal(ServletRequestDispatcher.java:359)
at com.evermind.server.http.HttpRequestHandler.doProcessRequest(HttpRequestHandler.java:870)
at com.evermind.server.http.HttpRequestHandler.processRequest(HttpRequestHandler.java:451)
at com.evermind.server.http.AJPRequestHandler.run(AJPRequestHandler.java:299)
at com.evermind.server.http.AJPRequestHandler.run(AJPRequestHandler.java:187)
at oracle.oc4j.network.ServerSocketReadHandler$SafeRunnable.run(ServerSocketReadHandler.java:260)
at com.evermind.util.ReleasableResourcePooledExecutor$MyWorker.run(ReleasableResourcePooledExecutor.java:303)
at java.lang.Thread.run(Thread.java:595)


Verified the application's leaked connections via System Administration -> Diagnostics -> AOL/J Database Connection Pool -> Leaked Connections, and found more than 400 leaked connections.

The cause is that the lock on the timecard still persists when the user clicks Home directly from the create timecard page without submitting the timecard.

Solution:


Applications R12.1
To implement the solution, please execute the following steps:
1. Download and review the readme and prerequisites for Patch 12382630:R12.HXT.B - DB CONNECTION LEAKS ON TIMECARDHOME.JSP.
No password is required to download this patch.

2. Ensure that you have taken a backup of your system before applying the recommended patch.

3. Apply the patch in a test environment.

4. Confirm the following file versions:
html/TimecardHome.jsp 120.0.12010000.4
html/TimecardLogout.jsp 120.1.12010000.2


You can use commands like the following:
SQL> select f.filename, f.subdir, ver.version, ver.*, f.*
from ad_file_versions ver, apps.ad_files f
where f.file_id = ver.file_id
and f.filename like 'TimecardHome%';
SQL> select f.filename, f.subdir, ver.version, ver.*, f.*
from ad_file_versions ver, apps.ad_files f
where f.file_id = ver.file_id
and f.filename like 'TimecardLogout%';

5. Retest the issue.

6. Migrate the solution as appropriate to other environments.


After database bounce the apps password is not working in a cloned instance (11g database): "ERROR: ORA-01017: invalid username/password; logon denied"

After a database bounce, the apps password stops working in the cloned instance.

Issue:

On the database node, not able to connect via sqlplus as the apps schema:
=============================================

oratest#ctsst $ sqlplus apps/apps#man

SQL*Plus: Release 11.1.0.7.0 Production on Sun Apr 7 23:06:33 2016 ==>: database side

Copyright (c) 1982, 2011, Oracle. All rights reserved.

ERROR:
ORA-01017: invalid username/password; logon denied


Tried from another host; still not able to connect via sqlplus as the apps schema:
===================================================
apptest#ctssat $ sqlplus apps/apps#man

SQL*Plus: Release 8.0.6.0.0 - Production on Sun Apr 7 23:06:22 2016  ==> forms

(c) Copyright 1999 Oracle Corporation. All rights reserved.

ERROR:
ORA-01017: invalid username/password; logon denied


Enter user-name:


Workaround / solution
====================
SQL> show parameter sec

NAME                                  TYPE        VALUE
------------------------------------- ----------- ------------------------------
db_securefile                         string      PERMITTED
optimizer_secure_view_merging         boolean     FALSE
sec_case_sensitive_logon              boolean     TRUE
sec_max_failed_login_attempts         integer     10
sec_protocol_error_further_action     string      CONTINUE
sec_protocol_error_trace_action       string      TRACE
sec_return_server_release_banner      boolean     FALSE
sql92_security                        boolean     FALSE
SQL> alter system set sec_case_sensitive_logon=FALSE;

System altered.

SQL>


After making this change, the apps password works again. The likely underlying reason is that database passwords are case-sensitive by default from 11g onwards (sec_case_sensitive_logon=TRUE), so the password must be supplied in exactly the case in which it was set; relaxing the setting is a workaround until the password is reset with the intended case (for example via FNDCPASS).
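If you want to see which password verifiers an account actually holds (the 11G verifier is the case-sensitive one), a quick check against DBA_USERS is:

SQL> select username, password_versions, account_status
from dba_users
where username in ('APPS', 'APPLSYS');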

Purge Fnd_AQ Tables


Indications

1. The internal and standard concurrent managers take a long time to start up

2. The AQ$_FND_CP_GSM_OPP_AQTBL table grows enormously and holds a very large number of records

3. The following message is shown for a concurrent request for a BI Publisher report:

    Post-processing of request xxx failed with the error message:
    ORA-04021: timeout occurred while waiting to lock object


Cause:

Improper shutdown of the concurrent managers, especially the Output Post Processor (OPP).


The AQ tables are used by the Service Managers (FNDSM) for "subscriptions": when the ICM asks FNDSM to start, the processes subscribe to this queue to report their status. The time taken for process cleanup before the ICM starts the regular concurrent managers is correlated with the number of processes that were not stopped cleanly. After an unclean shutdown, the restart takes longer because the manager spends extra cycles performing housekeeping tasks.


SOLUTION

1. It is highly recommended to always ensure the clean shutdown of the concurrent managers.

2. The 'Purge Concurrent Request and/or Manager Data' program should be run periodically. Note that this purge does not purge the AQ tables. To maintain a healthy number of records in FND_CONCURRENT_REQUESTS, instead of running the purge with the same parameters for all applications, you can run it separately per application, deciding for which applications records need to be kept for a certain number of days and for which keeping one day or so is enough.

It is recommended to schedule a cron job (or similar) that monitors the number of records in APPLSYS.FND_CP_GSM_OPP_AQTBL and uses DBMS_AQADM.PURGE_QUEUE_TABLE to purge the table as needed.
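As an illustration only (the job name and schedule below are assumptions, not from any Oracle note), that housekeeping could be wrapped in a DBMS_SCHEDULER job that issues the same DBMS_AQADM call used in the manual steps that follow. For a heavily backed-up queue, the manual procedure below, with the managers down, remains the safer route.

BEGIN
  dbms_scheduler.create_job(
    job_name        => 'PURGE_FND_CP_GSM_OPP_AQTBL',        -- hypothetical job name
    job_type        => 'PLSQL_BLOCK',
    job_action      => q'[
      DECLARE
        po dbms_aqadm.aq$_purge_options_t;
      BEGIN
        po.block := FALSE;   -- do not block ongoing queue operations
        dbms_aqadm.purge_queue_table(
          queue_table     => 'APPLSYS.FND_CP_GSM_OPP_AQTBL',
          purge_condition => NULL,
          purge_options   => po);
      END;]',
    repeat_interval => 'FREQ=WEEKLY; BYDAY=SUN; BYHOUR=3',   -- assumed schedule; adjust as needed
    enabled         => TRUE,
    comments        => 'Housekeeping purge of the OPP AQ table');
END;
/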

3. This is how to purge the FND_CP_GSM_OPP_AQTBL manually to clean up the table:

3.1. Check the number of records in the table :

SQL> select count(*) from applsys.FND_CP_GSM_OPP_AQTBL ;

 COUNT(*)
 ----------
 31759

3.2. Run the following as SYS:

DECLARE
po dbms_aqadm.aq$_purge_options_t;
BEGIN
po.block := FALSE;
DBMS_AQADM.PURGE_QUEUE_TABLE(
queue_table => 'APPLSYS.FND_CP_GSM_OPP_AQTBL',
purge_condition => NULL,
purge_options => po);
END;
/
3.3. Re-check the number of records in the table:

SQL> select count(*) from applsys.FND_CP_GSM_OPP_AQTBL ;

 COUNT(*)
 ----------
 0
If the purge does not complete successfully after a second attempt, or does not purge all the queues, you will have to recreate the queue. Recreate it using the $FND_TOP/patch/115/sql/afopp002.sql script, run as the 'APPLSYS' user; on running the script you will be prompted for a username and password. Please note that this may take a long time to complete.

3.4. Run the Concurrent Manager Recovery feature within Oracle Applications Manager to address any remaining Concurrent Manager / Concurrent Processing issues.

3.5. Start up the application services and retest.



N.B. The same solution can be applied to clean up the following tables:

applsys.AQ$_FND_CP_GSM_OPP_AQTBL_S
applsys.AQ$_FND_CP_GSM_OPP_AQTBL_T
applsys.AQ$_FND_CP_GSM_OPP_AQTBL_H
applsys.AQ$_FND_CP_GSM_OPP_AQTBL_I
applsys.AQ$_FND_CP_GSM_OPP_AQTBL_G


Workflow Mailer not Sending Notifications java.net.ConnectException: Connection refused (errno:239)


After the mailer is configured, email notifications with framework regions cannot be sent; they fail, and the following error message can be seen in the log files:

Caused by: oracle.apps.fnd.wf.mailer.NotificationFormatter$FormatterSAX
Exception: Problem obtaining the HTML content -> oracle.apps.fnd.wf.common.HTTPC
lientException: Unable to invoke method HTTPClient.HTTPConnection.Get caused by:
java.net.ConnectException: Connection refused (errno:239)


The value of the "WF: Workflow Mailer Framework Web Agent" [WF_MAIL_WEB_AGENT] profile is not correctly set to point to the FQDN of the Web Server.
It needs to be set to http://<host>.<domain>:<port> (or https://<host>.<domain>:<port>).
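A quick way to check the current value before changing it is a convenience query run as the APPS user:

SQL> select fnd_profile.value('WF_MAIL_WEB_AGENT') wf_mail_web_agent from dual;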



SOLUTION

1. Stop the Workflow Mailer and Agent Listener service.

2. Change the "WF: Workflow Mailer Framework Web Agent" [WF_MAIL_WEB_AGENT] profile option to point to the physical Web Server host address - http or https://<host>.<domain>:<port>

3. Rebuild mailer queue with:

SQL>@$FND_TOP/patch/115/sql/wfntfqup.sql APPS <APPS schema passwd> APPLSYS

4. Start Workflow Mailer and Agent Listener service.

5. Test with a new notification.

How To Rebuild Mailers Queue

In some cases, the mailer's queue (WF_NOTIFICATION_OUT) may be corrupted or inconsistent, and the mailer may not be able to send email notifications as it should.
The queue is inconsistent when, for instance, notifications in OPEN / MAIL status in the WF_NOTIFICATIONS table do not have a corresponding message in "Ready" status in the mailer's queue.
Another case is when the profile option "WF: Workflow Mailer Framework Web Agent" has been changed, but messages in the mailer's queue still refer to the old value.
In such situations, the mailer's queue must be rebuilt.
This note provides instructions on how to rebuild the mailer's queue.

Solution:
1. Stop Workflow Agent Listeners and Mailers.

2. Find the tablespace where indexes on corrid for queue tables are created:
select distinct  tablespace_name from dba_indexes,dba_queues
where index_name like 'WF%N1' and table_name=queue_table
and name like 'WF%'; 

3. Notifications with MAIL_STATUS = 'MAIL' are the ones that will be e-mailed when the Workflow Notification Mailer is started.
To stop a notification from being e-mailed, set its MAIL_STATUS to 'SENT', which cancels it at startup; restrict the update to the desired item type (WFERROR messages in this example):
SQL> update WF_NOTIFICATIONS set mail_status = 'SENT' where mail_status = 'MAIL' and message_type='WFERROR';

4. Rebuild the mailer's queue:

sqlplus apps/<apps_pwd> @$FND_TOP/patch/115/sql/wfntfqup APPS <apps_pwd> APPLSYS


5. Recreate the index on the CORRID column (use the tablespace found in step 2 when prompted for &tbs):


sqlplus applsys/<pwd> 

CREATE INDEX WF_NOTIFICATION_OUT_N1 
ON WF_NOTIFICATION_OUT(CORRID)
STORAGE (INITIAL 1M NEXT 1M MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0)
TABLESPACE &tbs; 

6. Check for invalid objects and, if any are found, recompile them so that none remain:

select owner,object_name,object_type from dba_invalid_objects;

OWNER                          OBJECT_NAME          OBJECT_TYPE
------------------------------ -------------------- -------------------
APPS                           WF_DIAGNOSTICS       PACKAGE BODY
APPSREAD                       WF_NOTIFICATION_OUT  SYNONYM
APPS                           WF_NOTIFICATION_OUT  SYNONYM

As SYSDBA:
alter SYNONYM apps.WF_NOTIFICATION_OUT compile;

alter PACKAGE apps.WF_DIAGNOSTICS compile body;
alter SYNONYM APPSREAD.WF_NOTIFICATION_OUT compile;

7. Start Workflow Agent Listeners and Mailers.
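Once the mailer is back up, a quick sanity check against the rebuilt queue (a convenience query over the standard AQ view; adjust the schema if your queue table is owned elsewhere):

select msg_state, count(*)
from applsys.aq$wf_notification_out
group by msg_state;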

Saturday, 16 April 2016

Enable Audit Trail for GL_PERIOD_STATUS

Enable Audit Trail
1. Enable the system profile option "AuditTrail: Activate" (set it to Yes).
2. Define an audit trail group.
   Navigation: Security -> AuditTrail -> Groups
3. Now define the audit installations.
   Navigation: Security -> AuditTrail -> Install
   The GL Oracle username is already audit enabled.
4. Navigation: Security -> AuditTrail -> Tables
   Query for the GL_PERIOD_STATUS table.
5. Run the concurrent program 'AuditTrail Report for Audit Group Validation' with the audit group as the parameter.
   The Audit Group Name here is "Accounting Period", so select "Accounting Period".
6. Run the concurrent program "AuditTrail Update Tables".
7. Define the template. Navigation: Security -> AuditTrail -> Audit Trail Reporting -> Audit Industry Template
8. Run the audit report. Navigation: Security -> AuditTrail -> Audit Trail Reporting -> Audit Report; update the parameter values, then run the report.
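Once "AuditTrail Update Tables" has completed, the shadow objects should exist in the GL schema; a quick sanity check (the exact shadow object names depend on the audited table) is:

select object_name, object_type
from dba_objects
where owner = 'GL'
and object_name like 'GL_PERIOD_STATUS%_A%';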




Friday, 15 April 2016

Temporary tables XLA_GLT_% used in the GL Transfer process are no longer cleared down once the transfer is complete.

These temporary tables need to be cleared to reclaim space in the database.

These tables are named XLA_GLT_% where % is the group_id used in the General Ledger Transfer.
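To see how many of these interim tables have accumulated, a convenience query (NUM_ROWS and BLOCKS are only as current as the last statistics gathering):

select owner, table_name, num_rows, blocks
from dba_tables
where table_name like 'XLA_GLT_%'
order by table_name;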

Solution:
Register the new concurrent program as follows:
Login to the application as System Administrator responsibility.
Nav:Security/Responsibility/Request
Open the Request Groups form.
Query for the 'All Reports' Request Group for the respective application (e.g. Payables, Receivables, etc.).
Add the program "Purge Temporary Journal Import Interface Tables Created By Subledger Accounting" (Subledger Accounting application) to the above request group.
Save.
Run the new concurrent program XLADRPGLT: Purge Temporary Journal Import Interface Tables Created By Subledger Accounting

Please ensure that data for the last financial year is also purged.