Tuesday, November 2, 2021
Sessions
-- count of ASH samples recorded in AWR for the given time window
select count(1)
  from dba_hist_active_sess_history
 where sample_time >= to_date('30-OCT-2021 09:00:00', 'DD-MON-YYYY HH24:MI:SS')
   and sample_time <= to_date('30-OCT-2021 22:00:00', 'DD-MON-YYYY HH24:MI:SS');
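Optionally, the same window can be broken down by SQL_ID to see what the sampled sessions were running (a follow-up sketch, not part of the original count):
select sql_id, count(*) samples
  from dba_hist_active_sess_history
 where sample_time >= to_date('30-OCT-2021 09:00:00', 'DD-MON-YYYY HH24:MI:SS')
   and sample_time <= to_date('30-OCT-2021 22:00:00', 'DD-MON-YYYY HH24:MI:SS')
 group by sql_id
 order by samples desc;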
Thursday, September 16, 2021
EmbeddedLDAPBackup.zip grows huge under WebLogic
Path: /apps/middleware/weblogic/XXXXXXX_domain/servers/AdminServer/data/ldap/backup
The file size of EmbeddedLDAPBackup.zip can grow very large, sometimes into the GBs.
Solution 1:
1. Stop the Admin/WebLogic service.
2. Take a backup of the ldap files -- path /apps/middleware/weblogic/base_domain/servers/AdminServer/data/ldap/
cp -r ldapfiles ldapfiles_bkp
3. Copy the ldap files from another machine in the same domain,
example: /apps/middleware/weblogic/base_domain/servers/Machin1/data/ldap
cp -r ldapfiles /apps/middleware/weblogic/base_domain/servers/AdminServer/data/ldap/
4. In the AdminServer ldap directory, remove the .lok file.
5. Start the Admin/WebLogic service.
Solution 2:
1. Make a copy of the DOMAIN/bin/startWebLogic.sh file.
2. Add the lines below to DOMAIN/bin/startWebLogic.sh:
JAVA_OPTIONS="${JAVA_OPTIONS} -Dweblogic.security.ldap.maxSize=31457280"
export JAVA_OPTIONS
NOTE: Add the above lines before the following lines:
SAVE_JAVA_OPTIONS=""
CLASSPATH="${SAVE_CLASSPATH}"
SAVE_CLASSPATH=""
3. Stop and start the Admin server.
Friday, August 20, 2021
Disk space alert creation script
#!/bin/bash
###########################################################
# This script is to check the space on the Weblogic Mount
###########################################################
# df -P keeps the output on one line; pick the Use% column of the /apps filesystem
CURRENT=$(df -P /apps | awk 'NR==2 { print $5 }' | sed 's/%//')
THRESHOLD=90
if [ "$CURRENT" -gt "$THRESHOLD" ] ; then
mail -s 'Disk Space Alert' XXXXXXXXXXXXX@YYYYYYYY.com << EOF
Free disk space on the /apps mount of <Servername> is critically low. Used: $CURRENT%
EOF
fi
Wednesday, June 16, 2021
coe_xfr_sql_profile
Example: SQL Tuning Task Options (Doc ID 2461848.1)
Good Plan Hash Value Not Showing in One of the RAC Node for Sqlid Even After Forcing Plan (Doc ID 2422536.1)
Encouraging CBO to Pickup a Better Execution Plan Using the COE XFR SQL Profile Script (Doc ID 1955195.1)
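For reference, a minimal usage sketch of the coe_xfr_sql_profile.sql script from the last note above (the SQL_ID and plan hash value are placeholders you supply):
-- Run from SQL*Plus on the database that has the desired plan in AWR or the cursor cache:
@coe_xfr_sql_profile.sql &&sql_id &&plan_hash_value
-- The script writes a coe_xfr_sql_profile_<sql_id>_<plan_hash_value>.sql file in the current
-- directory; running that generated script creates the SQL profile that pins the chosen plan
-- (it can also be copied to and run on another database of the same application).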
Monday, June 14, 2021
Is the Script CMCLEAN.SQL Supported in EBS R12?
(Doc ID 2183718.1)
The script CMCLEAN.SQL is NOT recommended or supported with E-Business Suite (EBS) Release 12.
The recommended and supported tools for EBS R12 and Oracle Applications Release 11i are the Concurrent Manager Recovery Wizard and/or the newly released CPADMIN utility.
Details of the "Concurrent Manager Recovery Wizard" are documented in the manual "Oracle E-Business Suite System Administrator's Guide - Maintenance Release 12.1" (Part No. E12894-04), Chapter 7, 'Diagnostic and Repair in Oracle Applications Manager: Concurrent Manager Recovery', page 7-4.
Details of the CPADMIN utility are documented in the following notes:
- 'Note 2319585.1 R12 E-Business Suite Concurrent Processing Script Cpadmin.sh Command Line Utility Which Consolidates Existing CP Tools Into A Single Menu-Based Shell Script'
- 'Note 2084405.1 Oracle Application Object Library Release Notes For Release 12.1 Concurrent Processing RUP2 Patchset:
"What's New...Concurrent Processing Command-Line Utility - This command-line utility consolidates various existing utilities into a single menu-based utility called "cpadmin"...This adadmin-style utility can be used for multiple tasks, including:
* Manager Status: Use this option to display the statuses of all managers.
* Clean CP Tables: Use this option to clean up the concurrent processing tables. This utility replaces cmclean.sql.
* Set Manager Diagnostics: Turn diagnostics on or off for individual managers with this option.
* Manager Control: Use this option to send a request such as start, stop, or verify to an individual manager.
* Rebuild Manager Views: Use this option to rebuild the FND_CONCURRENT_WORKER_REQUEST and FND_CONCURRENT_CRM_REQUESTS views.
* Move Request Files: Change request log and output file locations with this option.
* Analyze Request: Use this option to analyze a concurrent request."
As per Bug 17198256 (CMCLEAN.SQL SHOULD TAKE CARE OF RESCHEDULING TERMINATED CONCURRENT REQUESTS), the script cmclean.sql was designed by Concurrent Processing (CP) Development for Oracle Applications Release 10.7 for internal use; however, it was unofficially distributed by Support and was effective with Apps Releases 10.7 and 11.0.
Oracle Applications 11i and E-Business Suite (EBS) R12 have evolved substantially, and the script "cmclean.sql" is no longer available, recommended, or supported.
R12.2 How to Enable Form FRD Trace Step by Step?
REF: (Doc ID 2567155.1)
SOLUTION
These steps apply only to EBS R12.2 instances:
1.) Go to the specific form window of concern, click (menu) Help > About Oracle Applications, and note the values of FORMS_TRACE_DIR and the Forms Process ID:
FORMS_TRACE_DIR: scroll to the 'Forms Server Environment Variables' section
Forms Process ID: scroll to the 'Forms Server' section
NOTE: If the Forms Process ID is not visible, make sure that the system profile 'FND: Diagnostics' is set to YES as documented in Note 438274.1 (a quick SQL check is shown after these steps).
2.) Source the instance app tier environment file /…/EBSapps.env, and go to the FORMS_TRACE_DIR directory.
3.) Run the command: kill -USR1 <Forms Process ID from step 1>
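If the profile mentioned in the NOTE above needs to be verified from the back end, a quick check against the standard FND profile tables (a sketch, not from the referenced note; site level is level_id 10001):
SELECT p.user_profile_option_name,
       v.level_id,
       v.profile_option_value
  FROM fnd_profile_options_vl p,
       fnd_profile_option_values v
 WHERE p.profile_option_id = v.profile_option_id
   AND p.user_profile_option_name = 'FND: Diagnostics';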
Thursday, January 21, 2021
PT (performance tuning)
--- how to find SQL plan changes
select to_char(min(s.end_interval_time), 'YYYY-MM-DD HH24:MI') sample_end
       --, s.snap_id
       , q.sql_id
       , q.plan_hash_value
       , q.instance_number inst_id
       , sum(q.executions_delta) execs
       , round(sum(elapsed_time_delta)/greatest(sum(executions_delta), 1)/1000/1000, 3) ela_sec
       , round(sum(cpu_time_delta)/greatest(sum(executions_delta), 1)/1000/1000, 3) cpu_sec
       , round(sum(rows_processed_delta)/greatest(sum(executions_delta), 1), 1) avg_rows
       , round(sum(fetches_delta)/greatest(sum(executions_delta), 1), 1) avg_fetches
       , round(sum(buffer_gets_delta)/greatest(sum(executions_delta), 1)) lio_per_exec
       , round(sum(disk_reads_delta)/greatest(sum(executions_delta), 1)) pio_per_exec
       , round(sum(iowait_delta)/greatest(sum(executions_delta), 1)/1000/1000, 1) iowait_sec
       -- , round(sum(clwait_delta)/greatest(sum(executions_delta), 1)/1000/1000, 1) clwait_sec
       -- , round(sum(apwait_delta)/greatest(sum(executions_delta), 1)/1000/1000, 1) apwait_sec
       -- , round(sum(ccwait_delta)/greatest(sum(executions_delta), 1)/1000/1000, 1) ccwait_sec
  from dba_hist_sqlstat q, dba_hist_snapshot s
 where q.sql_id = trim('&1')
   and s.snap_id = q.snap_id
   and s.dbid = q.dbid
   and s.instance_number = q.instance_number
   and s.end_interval_time between (sysdate - &2) and (sysdate - &2 + &3)
 group by s.snap_id, q.sql_id, q.plan_hash_value, q.instance_number
 order by s.snap_id, q.sql_id, q.plan_hash_value, q.instance_number;
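Usage note (my convention, not part of the script itself): if the query above is saved as, say, plan_history.sql, the three substitution variables are the SQL_ID, the number of days back to start, and the window length in days:
-- hypothetical example: plan history for sql_id 86dxp1c32fd42 over a 1-day window starting 7 days ago
@plan_history.sql 86dxp1c32fd42 7 1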
-- if you find a bad plan then try purging it
select inst_id, plan_hash_value,
       'exec sys.DBMS_SHARED_POOL.PURGE ('''||ADDRESS||', '||HASH_VALUE||''', ''C'');' purge_cmd
  from gv$sql
 where sql_id in ('1pfb281d5f37c')
 order by plan_hash_value;
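The generated commands look like the following (the address and hash value here are made-up placeholders); run the one for the bad cursor on the instance that holds it:
-- exec sys.DBMS_SHARED_POOL.PURGE ('000000008ED5A778, 3456789012', 'C');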
-- To check stats and collect stats if stale
SELECT a.object_owner,
       a.object_name,
       a.object_type,
       b.stale_stats
  FROM v$sql_plan a
       JOIN dba_tab_statistics b
         ON a.object_owner = b.owner AND a.object_name = b.table_name
 WHERE b.object_type = 'TABLE'
   AND a.sql_id = TRIM('&1')
   AND (stale_stats IS NULL OR stale_stats = 'YES')
UNION ALL
SELECT a.object_owner,
       a.object_name,
       a.object_type,
       b.stale_stats
  FROM v$sql_plan a
       JOIN dba_tab_statistics b
         ON a.object_owner = b.owner AND a.object_name = b.partition_name
 WHERE b.object_type = 'PARTITION'
   AND a.sql_id = TRIM('&1')
   AND (stale_stats IS NULL OR stale_stats = 'YES')
UNION ALL
SELECT a.object_owner,
       a.object_name,
       b.object_type,
       b.stale_stats
  FROM v$sql_plan a
       JOIN dba_ind_statistics b
         ON a.object_owner = b.owner AND a.object_name = b.index_name
 WHERE b.object_type != 'SUBPARTITION'
   AND a.sql_id = TRIM('&1')
   AND (stale_stats IS NULL OR stale_stats = 'YES');
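If something comes back stale, the stats can be re-gathered; a minimal sketch assuming a stale table XX_OWNER.XX_TABLE reported by the query above (owner and table names are placeholders):
BEGIN
   -- re-gather table stats; cascade => TRUE also refreshes the dependent index stats
   DBMS_STATS.GATHER_TABLE_STATS(ownname => 'XX_OWNER',
                                 tabname => 'XX_TABLE',
                                 cascade => TRUE);
END;
/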
-- if you decide to create a SQL baseline after all analysis
SELECT * FROM table(DBMS_XPLAN.DISPLAY_AWR('17n0jry7qhtdx'));
-- look up AWR history to find detailed SQL statistics. Pick the snapshots in which the SQL had a good (or better) plan.
SELECT ss.snap_id,
       ss.instance_number,
       sn.end_interval_time,
       ss.plan_hash_value,
       ss.elapsed_time_delta / 1000 / 1000 / ss.executions_delta avg_time_per_run,
       elapsed_time_total / 1000 / 1000 total_elapsed_time,
       ss.executions_delta exe_times,
       ss.fetches_delta,
       ss.rows_processed_delta processed_rows,
       iowait_delta / ss.executions_delta per_io_wait,
       ss.cpu_time_delta / ss.executions_delta per_cpu_time,
       disk_reads_delta / ss.executions_delta per_disk_r,
       buffer_gets_delta / ss.executions_delta per_buffer_r
  FROM dba_hist_sqlstat ss, dba_hist_snapshot sn
 WHERE ss.snap_id = sn.snap_id
   AND ss.instance_number = sn.instance_number
   AND ss.sql_id = '&sql_id'
   AND ss.executions_delta > 0
 ORDER BY ss.snap_id DESC, 1;
-- snap_id = 121097
-- create a SQL tuning set. Name it anything that makes its purpose easy to identify.
set serveroutput on
exec dbms_sqltune.create_sqlset(sqlset_name => '&SQL_set_name', description => '&SQL_set_name');
-- load the SQLs that you want to tune into the SQL tuning set created above.
-- Enter the snapshot range in which the target SQL had a good plan, and the SQL_ID.
set serveroutput on
declare
   baseline_ref_cur DBMS_SQLTUNE.SQLSET_CURSOR;
begin
   open baseline_ref_cur for
      select VALUE(p)
        from table(DBMS_SQLTUNE.SELECT_WORKLOAD_REPOSITORY(
                      &snapshot_start, &snapshot_end,
                      'sql_id='||CHR(39)||'&sql_id'||CHR(39),
                      NULL, NULL, NULL, NULL, NULL, NULL, 'ALL')) p;
   DBMS_SQLTUNE.LOAD_SQLSET('&SQL_set_name', baseline_ref_cur);
end;
/
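Optional sanity check before loading the plans into SPM (a sketch, not part of the original note): confirm the statement and its plan(s) actually landed in the tuning set:
select sql_id, plan_hash_value, executions, elapsed_time
  from dba_sqlset_statements
 where sqlset_name = '&SQL_set_name';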
declare
   my_int pls_integer;
begin
   my_int := dbms_spm.load_plans_from_sqlset(
                sqlset_name  => '&SQL_set_name',
                basic_filter => 'sql_id=''&sql_id''',
                sqlset_owner => 'AIUDKC3',
                fixed        => 'NO',
                enabled      => 'YES');
   DBMS_OUTPUT.PUT_LINE(my_int);
end;
/
select * from dba_sql_plan_baselines order by created desc;
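Optionally, display the plan stored in the new baseline before going further (a sketch; take the sql_handle from dba_sql_plan_baselines above):
SELECT * FROM table(DBMS_XPLAN.DISPLAY_SQL_PLAN_BASELINE(sql_handle => '&sql_handle', format => 'TYPICAL'));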
-- Below is mostly not needed, but if after all the analysis you decide to copy a SQL profile from one environment to another:
-- Run this in the environment from which you want to export the SQL profile
EXEC DBMS_SQLTUNE.CREATE_STGTAB_SQLPROF (table_name => 'PROFILE_STGTAB_86dxp1c32fd42');
EXEC DBMS_SQLTUNE.PACK_STGTAB_SQLPROF (profile_name => 'SYS_SQLPROF_036f7c7ee4560000', staging_table_name => 'PROFILE_STGTAB_86dxp1c32fd42');
select * from PROFILE_STGTAB_86dxp1c32fd42;
-- export and import the staging table
expdp tables=PROFILE_STGTAB_86dxp1c32fd42 dumpfile=PROFILE_STGTAB_86dxp1c32fd42.dmp directory=scripts
impdp tables=PROFILE_STGTAB_86dxp1c32fd42 dumpfile=PROFILE_STGTAB_86dxp1c32fd42.dmp directory=conv_dir
-- On the target database, verify the imported staging table and then unpack the profile
select * from PROFILE_STGTAB_86dxp1c32fd42; -- e.g. PROFILE_6zhhcaxya0z18
EXEC DBMS_SQLTUNE.UNPACK_STGTAB_SQLPROF (replace => TRUE, staging_table_name => 'PROFILE_STGTAB_86dxp1c32fd42');
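A quick check on the target after the unpack (a sketch): confirm the profile arrived and is enabled:
select name, category, status, created
  from dba_sql_profiles
 order by created desc;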
--- if there is no good plan in the AWR snapshots and you found a good baseline in some other environment and decide to copy it here
select * from dba_sql_plan_baselines where plan_name = 'SQL_PLAN_7mduu8xwt7wzx8c7c867d';
-- create the staging table on the source
BEGIN
   DBMS_SPM.CREATE_STGTAB_BASELINE(table_name      => 'SPB_86dxp1c32fd42',
                                   table_owner     => 'AIUDKC3',
                                   tablespace_name => 'USERS');
END;
/
-- pack the SQL plan baseline into the staging table
DECLARE
   my_plans NUMBER;
BEGIN
   my_plans := DBMS_SPM.PACK_STGTAB_BASELINE(table_name  => 'SPB_86dxp1c32fd42',
                                             enabled     => 'yes',
                                             table_owner => 'AIUDKC3',
                                             plan_name   => 'SQL_PLAN_7mduu8xwt7wzx8c7c867d',
                                             sql_handle  => 'SQL_79b75a477993f3fd');
END;
/
select * from SPB_86dxp1c32fd42;
-- export and import the staging table
select * from dba_directories where directory_name = 'EXPDP_DUMP';
expdp tables=SPB_86dxp1c32fd42 dumpfile=SPB_86dxp1c32fd42.dmp directory=EXPDP_DUMP
impdp tables=SPB_86dxp1c32fd42 dumpfile=SPB_86dxp1c32fd42.dmp directory=EXPDP_DUMP
-- unpack the SQL plan baseline on the target
SET SERVEROUTPUT ON
DECLARE
   l_plans_unpacked PLS_INTEGER;
BEGIN
   l_plans_unpacked := DBMS_SPM.unpack_stgtab_baseline(table_name  => 'SPB_86dxp1c32fd42',
                                                       table_owner => 'AIUDKC3');
   DBMS_OUTPUT.put_line('Plans Unpacked: ' || l_plans_unpacked);
END;
/
-- generate purge commands for the existing cursor so the new baseline is picked up on the next hard parse
select inst_id, plan_hash_value,
       'exec sys.DBMS_SHARED_POOL.PURGE ('''||ADDRESS||', '||HASH_VALUE||''', ''C'');' purge_cmd
  from gv$sql
 where sql_id in ('86dxp1c32fd42')
 order by plan_hash_value;