An enterprise messaging system enables applications to communicate with one another through the exchange of messages. A message is a request, report, or event that contains the information needed to coordinate communication between different applications. A message provides a level of abstraction, allowing you to separate the details about the destination system from the application code.
The Java Message Service (JMS) is a standard API for accessing enterprise messaging systems. Specifically, JMS provides a common way for Java programs to create, send, receive, and read an enterprise messaging system's messages.

WebLogic Server Domains

The basic administrative unit for a WebLogic Server installation is called a domain. A domain is a logically related group of WebLogic Server resources that you manage as a unit. A domain always includes exactly one instance of WebLogic Server called the Administration Server. The Administration Server serves as a central point of contact for server instances and system administration tools. A domain may also include additional WebLogic Server instances called Managed Servers.
You can configure some or all of these Managed Servers to be part of a WebLogic Server cluster. A cluster is a group of WebLogic Server instances that work together to provide scalability and high-availability for applications. A Managed Server in a cluster can act as a backup for services such as JMS and JTA that are hosted on another server instance in the cluster. Your applications are also deployed and managed as part of a domain.
A Managed Server can also function as a virtual host.
You can organize your domains based on criteria such as:
  • Logical divisions of applications. For example, a domain devoted to end-user functions such as shopping carts and another domain devoted to back-end accounting applications.
  • Physical location. Domains for different locations or branches of your business.
  • Size. Domains organized in small units that can be managed more efficiently, perhaps by different personnel.
For more information about WebLogic Server domains, see:
  • Overview of WebLogic Server Domains
  • Creating and Configuring Domains Using the Configuration Wizard
  • Configuring Domains in the Administration Console

Figure 1-1 WebLogic Server Domain

Figure 1-1 depicts a possible configuration of a WebLogic Server Domain—one of many possible configurations.
In the depicted domain, there are three physical machines:
Machine A hosts one instance of WebLogic Server, the Administration Server. The System Administration Tools communicate with the Administration Server to perform configuration and monitoring of the servers and applications in the domain. The Administration Server communicates with each Managed Server on behalf of the System Administration Tools. The configuration for all the servers in the domain is stored in the configuration repository, the config.xml file, which resides on the machine hosting the Administration Server.
Machines B and C each host two instances of WebLogic Server, WebLogic Servers 1 through 4. These instances are called Managed Servers. The Administration Server communicates with an instance of Node Manager running on each machine to control startup and shutdown of the Managed Servers.
WebLogic Servers 2 and 4 are part of a WebLogic Cluster (outlined in red). This cluster is running an application that responds to HTTP requests routed to the cluster from a hardware load balancer. (An instance of WebLogic Server or a third-party Web server with one of the WebLogic Server plug-ins can also provide load balancing.) The load balancer processes HTTP requests from the Internet after they have passed through a firewall. The load balancer and firewall are not part of the domain. A replicated copy of objects such as HTTP sessions is passed between the two cluster members to provide failover capability.
WebLogic Server 1 runs an application that uses Java Database Connectivity (JDBC) to access a database server running on another physical machine that is not part of the WebLogic Domain.
Note: The pictured domain is intended only to illustrate the concepts of a WebLogic Server domain and how you manage it. Many other configurations of servers, clusters, and applications are possible in a WebLogic Server domain.

System Administration Infrastructure

System administration infrastructure in WebLogic Server is implemented using the Java Management Extensions (JMX) specification from Sun Microsystems. The JMX API models system administration functions with Java objects called MBeans. Knowledge of this implementation as described in this discussion of system administration infrastructure is not necessary to manage a WebLogic Server domain.
There are three types of MBeans used to manage a WebLogic Server domain: administration, configuration, and runtime MBeans.
Administration MBeans contain a set of attributes that define configuration parameters for various management functions. All attributes for administration MBeans have preset default values. When the Administration Server starts, it reads the domain configuration file (called config.xml) and overrides the default attribute values of the administration MBeans with any attribute values found in the config.xml file.
The config.xml file, located on the machine that hosts the Administration Server, provides persistent storage of MBean attribute values. Every time you change an attribute using the system administration tools, its value is stored in the appropriate administration MBean and written to the config.xml file. Each WebLogic Server domain has its own config.xml file.
Each time the Administration Server starts successfully, and each time you modify the configuration, a backup configuration file is created. You can configure the number of backup copies of config.xml retained by the Administration Server. For more information, see Backing up config.xml.
If you set any configuration attributes on the command line when you start the Administration Server with -D arguments, these values override the defaults and the values read from the config.xml file. These overridden values are also persisted to the config.xml file by the Administration Server. For more information about these command-line arguments, see Configuring Servers in the Administration Console Online Help.
Configuration MBeans are copies of administration MBeans that each Managed Server uses to initialize its configuration. When you start a Managed Server, the server receives a copy of all the administration MBeans configured on the Administration Server and stores them in memory as configuration MBeans. If you override any configuration attributes when starting a Managed Server, those values override the values received from the Administration Server but are not written to the config.xml file. For more information about starting a Managed Server, see Starting Managed Servers in the Administration Console Online Help.
Runtime MBeans contain sets of attributes consisting of runtime information for active WebLogic Server instances and applications. By retrieving the values of attributes in these runtime MBeans, you can monitor the running status of a WebLogic Server domain.
MBeans may also contain operations used to execute management functions.
Although users with knowledge of these MBeans and the JMX API can create their own customized management system, most users prefer to use the system administration tools provided with WebLogic Server to perform these tasks. These tools do not require knowledge of the JMX API. For more information, see System Administration Tools.

Introduction to System Administration

You manage a WebLogic Server installation by using any of several system administration tools provided with WebLogic Server. A WebLogic Server installation can consist of a single WebLogic Server instance or multiple instances, each hosted on one or more physical machines. The system administration tools include the Administration Console, command line utilities, and an API, with which you manage security, database connections, messaging, transaction processing, and the runtime configuration of your applications. The tools also allow you to monitor the health of the WebLogic Server environment to ensure maximum availability and performance for your applications.

Finding latest Patchset for OFMW SOA components !!!

Oracle recommends patching SOA environments with the latest patch set whenever one is available. It is the SOA administrator's responsibility to apply the latest patch set in all SOA environments from time to time, to keep the Oracle products at current patch levels.

Each patch set carries a list of bugs that Oracle fixes. Often, if we raise a service ticket with Oracle for a specific bug, they will ask straight away which version of the product component is being used.

They also ask for the inventory logs for each Oracle product component, e.g. SCA, OSB, WLS, and Oracle_COMMON.

In the blog below I will show a very simple example of how to check the current product-level version of the SOA bundle and find the latest patch set available for the same component.

1.1. SCA aka BPEL Component Current Patch Set Version Verification


Run the lsinventory command for the Oracle_SOA1 home and get the installed patch number.

-bash-3.2$ export ORACLE_HOME=/u01/app/oracle/product/fmw/Oracle_SOA1
-bash-3.2$ cd /u01/app/oracle/product/fmw/Oracle_SOA1/OPatch
-bash-3.2$ ./opatch lsinventory -InvPtrLoc /u01/app/oracle/product/fmw/Oracle_SOA1/oraInst.loc -details


Now, log in to My Oracle Support >> go to the Patches & Updates tab page >> provide the patch number obtained from the steps above and click the Search button.


The result below will show us which particular patch set this patch 19953598 corresponds to. In this case it is a SOA Bundle Patch.

1.2. SOA Suite Latest Patch Set Search

In the steps above we confirmed which SOA bundle patch set is already installed; the next step is to find out whether any newer patch set has been released after the installed one.


Log in to My Oracle Support >> Patches & Updates >> Product = Oracle SOA Suite >> Release = FMW >> Platform = <as per your environment>
Click the Search button.


The results will show all the patch sets that have been released. In the snapshot below you can see that a newer SOA Bundle Patch has been released.


Download the patch and apply it in a test environment first; run regression tests to verify all key business transactions, and if everything works, apply it to the production environment. Refer to the patch README file for detailed instructions.

Using the same steps, we can verify the installed patch version of the Oracle Service Bus (OSB) and WebLogic Server components, find the latest patch set released (if any), and apply it to the SOA environments.

1.3. Finding the WebLogic Component Installed Patch Set Version

There are two ways to check the installed WebLogic Server product version:


cd /u01/app/oracle/product/fmw/utils/bsu
./bsu.sh -prod_dir=/u01/app/oracle/product/fmw/wlserver_10.3 -status=applied -verbose -view


-bash-3.2$ export WL_HOME=/u01/app/oracle/product/fmw/wlserver_10.3
-bash-3.2$ . $WL_HOME/server/bin/setWLSEnv.sh
-bash-3.2$ java weblogic.version

Find the latest WLS patch set in a similar way as we did for the OSB and SCA components.

Relation between WebLogic Data Source DB Session Configuration and DB Instance Session and Process Limits in a RAC Infrastructure !!!

In this post I will summarize the outcome of a POC I did as part of an incident raised for a DB connection failure error.

Scenarios -

The SOA servers were not able to communicate with the DB properly and business transactions were failing intermittently. When I looked into the SOA WLS server logs, they showed the error message below:

“ORA-12516, TNS: listener could not find available handler with matching protocol stack”

Later, I tried testing the DB connection using the sqlplus utility and got the same message there as well.

Even the SQL Developer tool was throwing the same error.

When I contacted the DBA, he told me that even he was not able to log in to the database because the configured DB session and process limits had been exhausted on the DB server.

Knowing this, I stopped a couple of WebLogic servers; since we have a clustered environment, shutting down a couple of WLS servers does not impact business transactions.

After shutting down the WLS servers, the DB session and process counts went down and the DBA was able to do regular DBA administration on those nodes.

However, the above situation raised a few questions for me:

1)      How many DB sessions and processes are configured on each DB node?

2)      How many DB sessions are being created by the various WebLogic data sources?

3)      What is the relation between the WLS data source Initial Capacity, Maximum Capacity, and Minimum Capacity configuration and the database session count?

4)      What would be the optimal connection configuration values for the WLS data sources?

5)      Does the WLS data source session count get split equally between both RAC nodes, or does each RAC node get that full number of DB sessions?

Current WLS DS Configuration snap -


To explore the answers to all the questions above, I did some POC work and engaged the DBA to understand the impact my POC was having on the DB session count, so that we could conclude what was going on.

In my test environment I had 14 WLS servers running. I shut down all of them except 'AdminServer' to reduce the session count in the DB instance.

The data sources below were deployed on the WLS Admin Server, with 30 configured as the initial, minimum, and maximum capacity. All three configuration parameters had the same value.

Schema User Name | Max/Min/Initial Config Connection | Deployed Targets

Looking at the DB, we captured the session stats below:

select count(1), username, inst_id from gv$session group by username, inst_id;

Session Distribution Explanation -

1)      The XXXPRE_MDS schema gets sessions from two WLS data sources, mds-owsm and mds-soa, each of which has 30 sessions configured for Initial, Min, and Max. All three configuration parameters have the same value, as per Oracle's recommendation. As a result, the XXXPRE_MDS schema user has 30 sessions initiated in each DB instance node: 15 per WebLogic data source. This means WLS distributed the sessions of each data source equally across both DB RAC nodes.

2)      The XXXPRE_OPSS schema sessions were distributed 14 and 16 across DB instances 1 and 2 respectively. So WLS does not always distribute sessions exactly equally, but it does make sure it creates the number of database sessions configured in the WLS console: the count in the WLS configuration matches the total session count across both DB instance nodes.

3)      The XXXPRE_SOAINFRA schema sessions were similarly distributed 17 and 16 (33 in total) instead of the expected 30; the 3 extra sessions were created by an external application, e.g. SQL Developer.


Based on the stats above, I conclude that if a WLS data source is configured with 30 connections and we use a SCAN listener to connect to a database running on RAC infrastructure, then those 30 sessions get distributed across both DB nodes. So the rule of thumb for sizing the session and process counts in the DB is:

Session count per DB instance = WLS DS connection configuration * no. of WLS servers the DS is deployed to / no. of DB instances in the RAC

Session count per DB instance = 30 (WLS config) * 3 OSB WLS servers / 2 DB instances in the RAC = 45 DB sessions required per DB instance for one WLS data source.
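The arithmetic above can be sketched in a few lines of shell (the numbers are the ones from this example; substitute your own):

```shell
# Example figures from this post: 30 connections per data source,
# 3 WLS servers targeting the data source, 2 RAC DB instances.
conn_per_server=30
wls_servers=3
db_instances=2

total=$((conn_per_server * wls_servers))
per_instance=$((total / db_instances))
echo "Total DB sessions: $total"
echo "Sessions per DB instance: $per_instance"
```

This prints a total of 90 sessions, i.e. 45 per DB instance, matching the calculation above.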

Similarly, do the calculation for all the available WebLogic data sources and configure the DB session count accordingly.

The best way to find out the number of sessions required for the database is to create a spreadsheet and write down all the application data source names, the initial, min, and max connection configuration, and the target server names. From that you can conclude how many DB connections your application needs, and the DBA can then set the session and process counts in each database instance accordingly.

Sample spreadsheet template to capture the WLS session count that drives the DB session count:

Schema User Name | Database Name | Max/Min/Initial Config Connection | Deployed Targets | No. of WLS Servers | Total DB Connections Required | Sessions Required at DB

Note: Initial, Min, and Max are all configured with the same value. Sessions required at DB = total DB connections required (150 in this example) divided by the no. of DB instances in the RAC.
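The spreadsheet totals can also be produced with a short awk script. This is just an illustration: the data source names and numbers in the CSV below are hypothetical, not taken from a real environment.

```shell
# Columns: datasource,max_connections_per_server,number_of_wls_servers
cat > /tmp/ds_inventory.csv <<'EOF'
mds-soa,30,3
mds-owsm,30,2
SOADataSource,50,3
EOF

db_instances=2
awk -F, -v n="$db_instances" '
  { total = $2 * $3; sum += total
    printf "%s: %d connections, %d per RAC instance\n", $1, total, total / n }
  END { printf "TOTAL: %d connections, %d sessions per RAC instance\n", sum, sum / n }
' /tmp/ds_inventory.csv
```

The TOTAL line is the number the DBA needs when sizing the session and process limits on each DB instance.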

I hope this study gives you a meaningful way of calculating the WLS data source session configuration and, accordingly, increasing or decreasing the session and process counts at the DB instance level.

SOA Suite Components Data Purging !!!

This post describes how to manage database growth for the Oracle Fusion Middleware SOA Suite backend dehydration store (the 'SOAINFRA' schema), which grows a lot over time if not monitored regularly and results in major performance issues, out-of-disk-space errors, increased latency, etc.

This growth also increases the complexity of maintaining the SOA database instance.

To address this challenge, several methods for managing database growth are provided, such as:

1.       Running the purge script
2.       Running the parallel purge script
3.       Dropping partitions, if partitioning is implemented
4.       Purging data from the EM console, B2B console, and OSB console.

In this article, I will focus only on how to run the Oracle out-of-the-box purge scripts to clean up the SOAINFRA schema, which contains tables for various SOA Suite components such as Mediator, Workflow, BPEL/SCA, and Decision Service.

Note: B2B component records will not be purged by the script below. If a B2B implementation exists, use this link to learn how to purge B2B metadata and B2B instance data. It's very simple.

Fortunately, Oracle SOA Suite provides out-of-the-box purge scripts. These scripts come along with the setup, and once you install the product they can be located at -

Pre-Implementation Verification

1)    You must take a backup of the SOA DB instance where the purge script is to run, so that if something goes wrong we can restore the database easily.

2)      Run the SQL query below to check the current tablespace usage for the SOA DB instance, so that you can later compare whether the tablespace size has been reduced or not.

select tablespace_name,bytes/1024/1024/1024 space,maxbytes/1024/1024/1024 max,autoextensible from dba_data_files;

3)   Get the row counts of all the tables in the SOAINFRA schema using the command below, and after purging run the same SQL again to compare how many rows have been purged. Before running the SQL we need to run a DB package to refresh the table row statistics (e.g. DBMS_STATS.GATHER_SCHEMA_STATS).

Execute the SQL below to refresh the table row stats -

Run the SQL below to generate the row count of each table:
select table_name, num_rows from dba_tables where owner = 'AIOSIT_SOAINFRA' and num_rows is not NULL and num_rows > 0 order by num_rows desc;
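Once you have the before and after row-count spools, a quick way to see what was purged is to join them. The table names and counts below are made up purely to show the comparison; in practice the two files would come from spooling the row-count query before and after the purge.

```shell
# Hypothetical before/after row counts (table_name row_count), sorted by table name
cat > /tmp/counts_before.txt <<'EOF'
CUBE_INSTANCE 120000
MEDIATOR_INSTANCE 80000
EOF
cat > /tmp/counts_after.txt <<'EOF'
CUBE_INSTANCE 20000
MEDIATOR_INSTANCE 15000
EOF

# join matches rows on table name; field 2 = before count, field 3 = after count
join /tmp/counts_before.txt /tmp/counts_after.txt |
  awk '{ printf "%s: purged %d rows\n", $1, $2 - $3 }'
```

Note that join requires both files to be sorted on the join field, which the row-count query's order by clause does not guarantee; sort the spools first if needed.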

Implementation Steps

Follow the steps below to run the purging script.

Open PuTTY or a command prompt.

Change directory to $SOA_HOME/rcu/integration/soainfra/sql/soa_purge

In SQL*Plus, 'CONNECT SYS AS SYSDBA' and run the commands below.

Note: 'DEV' is the prefix for the particular environment, specified while running the RCU for that environment, so change this value as per your environment.

Exit from the sysdba session.

Now connect to the SOAINFRA schema:
sqlplus DEV_SOAINFRA/Welcome1@SOA11DEV

Note: SOA11DEV should be defined in the ‘tnsnames.ora’ file, possibly located at this location

Load the purge scripts by running the main purge script in the MW_HOME/SOA_ORACLE_HOME/rcu/integration/soainfra/sql/soa_purge directory,
e.g. @soa_purge_scripts.sql

This creates the procedures, functions, views, types, and packages under the DEV_SOAINFRA schema that are required by the purge script.

If you want to execute the scripts in debug mode, run common/debug_on.sql

If you want to spool the PL/SQL program's output to a log file, set serveroutput on and start spooling. This will help you understand which tables are being purged and which records are eligible for purging.
SQL> set serveroutput on
SQL> spool '/tmp/spool.log'

Run the code segment below to perform the 'Looped purge' option.
Note: If you want to become familiar with all the parameters used in the SQL code below, go through section 10.3.1, Looped Purge Script, using this URL

DECLARE
   MAX_CREATION_DATE timestamp;
   MIN_CREATION_DATE timestamp;
   batch_size integer;
   max_runtime integer;
   retention_period timestamp;
BEGIN
   MIN_CREATION_DATE := to_timestamp('2010-01-01','YYYY-MM-DD');
   MAX_CREATION_DATE := to_timestamp('2010-01-31','YYYY-MM-DD');
   max_runtime := 60;
   retention_period := to_timestamp('2010-01-31','YYYY-MM-DD');
   batch_size := 10000;
   soa.delete_instances(
     min_creation_date => MIN_CREATION_DATE,
     max_creation_date => MAX_CREATION_DATE,
     batch_size => batch_size,
     max_runtime => max_runtime,
     retention_period => retention_period,
     purge_partitioned_component => false);
END;
/

The date and size values have to be changed to suit each individual environment's requirements. Refer to the table below for a detailed understanding of these parameters.

min_creation_date: Beginning creation date for the composite instances.
max_creation_date: Ending creation date for the composite instances.
batch_size: Batch size used to loop the purge. The default value is 20000.
max_runtime: Expiration time at which the purge script exits the loop, specified in minutes. The default value is 60.
retention_period: Used only by the BPEL process service engine; this functionality is not extended to other components. This parameter checks for and deletes records in the cube_instance table. The value for this parameter must be greater than or equal to max_creation_date. The default value is null. Specify a retention period if you want to retain the composite instances based on the modify_date of the BPEL instances (cube_instance).

In this example, the modify_date of the BPEL instances table, which can differ from the composite created_date, is used as a second level of filtering:
min_creation_date = 1st June 2011
max_creation_date = 30th June 2011
retention_period = 1st July 2011
This deletes all composite instances in which the creation_time of the composite is between 1st June 2011 and 30th June 2011 and the modify_date of the cube_instance is less than 1st July 2011.

Stop the logging
SQL> spool off

Verify the generated log file for purging status


OSB Reporting Data Purging

The reporting module of Oracle Service Bus is used to display reporting data for various business transactions. In OSB we use the 'Report' action to log messages in the database. OSB maintains two backend tables to store this data, namely WLI_QS_REPORT_DATA and WLI_QS_REPORT_ATTRIBUTE. Reporting data also grows over time and consumes a lot of disk space. Oracle Service Bus provides out-of-the-box capability to purge this data in two ways:

1) Purge OSB Reporting Data based on Date Range
2) Purge All OSB Reporting Data

Apart from the out-of-the-box options above, we have a few more options, as stated below -

3) Write a custom PL/SQL procedure or statement to purge this data on a regular basis.
4) Use the out-of-the-box scripts reporting_runtime_drop.sql and reporting_runtime.sql, located at $OSB_HOME/dbscripts/oracle, to drop and re-create the tables if an outage is approved.
In the example below I will show how to purge based on a given date range using the out-of-the-box purging facility -

Out-of-Box Purging Option

Log in to the WLS console, go to "Deployments", and verify the module below. It must be Active; if it is not, activate it.

Log in to the OSB console. Go to Operations >> Reporting >> Message Reports >> click "Purge Messages".

Specify date range and time.

Note: The date format is M/D/YY, with no leading zeros and a two-digit year, and the provided date must strictly follow this format. E.g. if the month is February, we can't give 02; we must give 2. Similarly, for 1st February we should give 2/1, not 02/01.
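If you are unsure what a given date looks like in this M/D/YY format, GNU date can print it for you (the %- modifiers, which suppress leading zeros, are GNU-specific):

```shell
# 1st February 2015 rendered in the M/D/YY format the purge screen expects
date -d '2015-02-01' '+%-m/%-d/%y'
```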

SQL to Purge the OSB Report Data

Run the SQL below to purge OSB reporting data older than 60 days:

delete from WLI_QS_REPORT_DATA where MSG_GUID in (select MSG_GUID from WLI_QS_REPORT_ATTRIBUTE where trunc(DB_TIMESTAMP) < sysdate-60);
delete from WLI_QS_REPORT_ATTRIBUTE where trunc(DB_TIMESTAMP) < sysdate-60;

Oracle Notes Reference:-

  1. Automated Methods Of Purging The Reporting Tables In Oracle Service Bus (Doc ID 1080146.1)
  2. OSB Best Practice For Clearing OSB Report DB (Doc ID 1479520.1)

Custom Table Data Purging

Apart from the SOA component data purging above, there could be a requirement to purge data in custom tables created for a custom application (if any exist).

If that's the case, the two PL/SQL approaches below could be useful to purge that data as well.

Data Purging using Truncate

In the code below we delete data from a particular table called ‘UTILITY_LOG’ based on a date clause, using the truncate option. It performs the following tasks:

a)      Count the rows in UTILITY_LOG and print the count
b)      Create a UTILITY_LOG_TMP table from the UTILITY_LOG table definition, based on the search criteria for the data that needs to be retained
c)       Truncate the original UTILITY_LOG table
d)      Copy the data back from UTILITY_LOG_TMP to UTILITY_LOG
e)      Drop the UTILITY_LOG_TMP table

set serveroutput on;
DECLARE
  CNT NUMBER(10) := 0;
BEGIN
  select count(1) into CNT from UTILITY_LOG;
  DBMS_OUTPUT.PUT_LINE('Total Rows in UTILITY_LOG table :' || CNT);
  DBMS_OUTPUT.PUT_LINE('Creating and Inserting UTILITY_LOG Temporary Table Based on Search Criteria');
  -- Retain only the rows matching the search criteria (adjust the predicate as required)
  EXECUTE IMMEDIATE 'create table UTILITY_LOG_TMP as select * from UTILITY_LOG where trunc(CREATED_TIME_STAMP) >= sysdate-90';
  EXECUTE IMMEDIATE 'truncate table UTILITY_LOG';
  DBMS_OUTPUT.PUT_LINE('Inserting rows into UTILITY_LOG from UTILITY_LOG_TMP Temporary Table');
  EXECUTE IMMEDIATE 'insert into UTILITY_LOG select * from UTILITY_LOG_TMP';
  DBMS_OUTPUT.PUT_LINE('Dropping UTILITY_LOG Temporary Table');
  EXECUTE IMMEDIATE 'drop table UTILITY_LOG_TMP';
  DBMS_OUTPUT.PUT_LINE('UTILITY_LOG Purging has been completed');
END;
/

Data Purging using simple ‘Delete’ clause

In this example, we simply use the standard delete clause to remove rows from the UTILITY_LOG table.

set serveroutput on;
DECLARE
  CNT NUMBER(10) := 0;
  CNT1 NUMBER(10) := 0;
BEGIN
  select count(1) into CNT from UTILITY_LOG;
  DBMS_OUTPUT.PUT_LINE('Total Rows in UTILITY_LOG table :' || CNT);
  EXECUTE IMMEDIATE 'Delete from UTILITY_LOG where trunc(CREATED_TIME_STAMP) < sysdate-90';
  DBMS_OUTPUT.PUT_LINE('Rows have been deleted from UTILITY_LOG');
  select count(1) into CNT1 from UTILITY_LOG;
  DBMS_OUTPUT.PUT_LINE('Total Rows in UTILITY_LOG table remains :' || CNT1);
  DBMS_OUTPUT.PUT_LINE('UTILITY_LOG Purging has been completed');
END;
/

B2B Messages Purging

To check which B2B tables in the SOAINFRA schema currently hold data, run:
select table_name, num_rows from dba_tables where owner = 'AIOPRD_SOAINFRA' and table_name like '%B2B_%' and num_rows is not NULL and num_rows > 0 order by num_rows desc;