Websphere First Failure Data Capture (FFDC) Logs

Posted by Sagar Patil

Often the default WebSphere SystemOut/SystemErr logs don't provide detailed information on an error. In such cases, have a look at the FFDC log directories under /opt/IBM/WebSphere/AppServer/profiles/Profile01/dmgr/logs/ffdc and /opt/IBM/WebSphere/AppServer/profiles/Profile01/Node/logs/ffdc.

There are three property files which control the behavior of the FFDC filter. Which file is used depends on the state of the server:
1. ffdcStart.properties: used during start of the server
2. ffdcRun.properties: used after the server is ready
3. ffdcStop.properties: used while the server is in the process of stopping

[was61@IBM]$ du -a | grep ffdcRun.properties
./WebSphere/AppServer/properties/ffdcRun.properties

#-----------------------------------------------------------------------
# Enable FFDC processing
#       FFDC=true [default]
#       FFDC=false
#-----------------------------------------------------------------------
FFDC=true
#-----------------------------------------------------------------------
# Level of processing to perform
#       0 - none
#       1 - monitor exception path
#       2 - dump the call stack, with no advanced processing
#       3 - 2, plus object introspection of the current object
#       4 - 2, plus use DM to process the current object
#       5 - 4, plus process the top part of the call stack with DMs
#       6 - perform advanced processing of the entire call stack
#-----------------------------------------------------------------------
Level=4
#-----------------------------------------------------------------------
# ExceptionFileMaximumAge, number of days to purge the file
#       ExceptionFileMaximumAge=<any positive number of days>
#                Default is 7 days.
#-----------------------------------------------------------------------
ExceptionFileMaximumAge=7

The only file that you should modify is the ffdcRun.properties file. You can change the value of the ExceptionFileMaximumAge property. This property specifies the number of days that an FFDC log remains in the <profileroot>/logs/ffdc directory before being deleted.

FFDC produces two kinds of artifacts, both located in the <Install Root>/logs/ffdc directory:

  1. Exception log: <ServerName>_exception.log (here, dmgr_exception.log)
  2. Incident stream: <ServerName>_<threadid>_<timeStamp>_<SequenceNumber>.txt

Example: exception logs
[was61@IBM]$ du -a | grep exception.log
4       ./WebSphere/AppServer/profiles/Profile01/dmgr/logs/ffdc/dmgr_exception.log
896     ./WebSphere/AppServer/profiles/Profile01/Node/logs/ffdc/server_member2_exception.log
868     ./WebSphere/AppServer/profiles/Profile01/Node/logs/ffdc/server_member1_exception.log
4       ./WebSphere/AppServer/profiles/Profile01/Node/logs/ffdc/nodeagent_exception.log

Example: incident streams under /opt/IBM/WebSphere/AppServer/profiles/Profile01/dmgr/logs
[logs]$ cd ffdc/
[ffdc]$ ls -lrt
-rw-r--r-- 1 was61 was61  6607 Aug 14 07:07 dmgr_0000000a_11.08.14_07.07.09_0.txt
-rw-r--r-- 1 was61 was61  1082 Aug 14 07:07 dmgr_exception.log
-rw-r--r-- 1 was61 was61  5916 Aug 14 07:07 dmgr_00000011_11.08.14_07.07.12_0.txt

We can relate an incident file to the exception log by taking the probe ID from the incident file and searching for it in the exception.log file. You will notice that the timestamps also match.

$vi dmgr_00000011_11.08.14_07.07.12_0.txt

------Start of DE processing------ = [14/08/11 07:07:12:086 BST] , key = java.io.IOException com.ibm.ws.management.discovery.DiscoveryService.sendQuery 165
Exception = java.io.IOException
Source = com.ibm.ws.management.discovery.DiscoveryService.sendQuery
probeid = 165

$vi dmgr_exception.log

Index  Count   Time of last Occurrence   Exception SourceId ProbeId
------+------+---------------------------+--------------------------
1      1   14/08/11 07:07:05:200 BST org.omg.CORBA.BAD_OPERATION com.ibm.ws.naming.jndicos.CNContextImpl.isLocal 3510
------+------+---------------------------+--------------------------
+    2      1   14/08/11 07:07:08:047 BST com.ibm.websphere.security.EntryNotFoundException com.ibm.ws.security.auth.ContextManagerImpl.runAs 4162
+    3      1   14/08/11 07:07:08:050 BST com.ibm.websphere.wim.exception.EntityNotFoundException com.ibm.websphere.security.EntryNotFoundException 170
+    4      1   14/08/11 07:07:08:064 BST com.ibm.websphere.security.EntryNotFoundException com.ibm.ws.security.role.RoleBasedConfiguratorImpl.fillMissingAccessIds 542
+    5      1   14/08/11 07:07:09:325 BST com.ibm.wkplc.extensionregistry.util.XmlUtilException class com.ibm.wkplc.extensionregistry.RegistryLoader.restore 1
+    6      1   14/08/11 07:07:12:086 BST java.io.IOException com.ibm.ws.management.discovery.DiscoveryService.sendQuery 165
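
To cross-reference from the shell rather than by eye, you can grep the exception log for the source and probe Id taken from an incident file, or search the incident files for a probe Id taken from the exception log. A small sketch using the values from the example above (adjust names and paths to your environment):

[ffdc]$ grep "DiscoveryService.sendQuery 165" dmgr_exception.log
[ffdc]$ grep -l "probeid = 165" dmgr_*.txt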

1. Exception Log: Row elements
The exception log contains all of the exception paths which have been encountered since the server started. Because of optimizations in the data collection, the table was created to give an overview of the exceptions which have been encountered in the server. An entry in the table looks like this:
-----------------------------------------------------------------------
Index  Occur  Time of last Occurrence   Exception SourceId ProbeId
-----------------------------------------------------------------------
18      1   11/24/10 15:29:59:893 GMT com.ibm.websphere.security.auth.WSLoginFailedException com.ibm.ws.security.auth.JaasLoginHelper.jaas_login 487
19      8   11/24/10 15:29:23:819 GMT javax.net.ssl.SSLHandshakeException com.ibm.ws.security.orbssl.WSSSLClientSocketFactoryImpl.createSSLSocket 540
20      1   11/24/10 15:29:59:838 GMT com.ibm.websphere.security.PasswordCheckFailedException com.ibm.ws.security.auth.ContextManagerImpl.runAs 4101
21      2   11/24/10 15:29:23:979 GMT com.ibm.websphere.management.exception.ConnectorException com.ibm.ws.management.RoutingTable.Accessor.getConnector 583
------+------+---------------------------+--------------------------

The first element in the row is simply an index, used to determine the number of rows in the table. In some entries, a '+' may appear in the first column; this indicates that the row has been added to the table since the last time the entire table was dumped.

The second element is the number of occurrences. This is useful for seeing whether an unusual number of exceptions is occurring.

The third element in the row is a timestamp for the last occurrence of the exception. This is useful for looking at exceptions which have occurred at about the same time.

The last element in the row is a combination of values. This consists of the exception name, a source Id and the probe Id. This information is useful for locating details about the specific failure in the incident stream.

File content: The makeup of the file can be a little confusing when first viewed. The file is an accumulation of all of the dumps which have occurred over the life of the server. This means that much of the information in the file is out of date and does not apply to the current server. The most relevant information is at the end (tail) of the file.

It is quite easy to locate the last dump of the exception table. The dump is delimited by lines of '-------...'. Entries which begin with a '+' appear outside the delimited table and indicate that they are additions to the table since the last time the table was dumped. (Again, due to performance concerns, the table is dumped only periodically and when the server is stopping.)

The information in the above file is displayed in the same unordered form as the underlying hash table. A more readable form of the file can be obtained by sorting the output on the timestamp. (This is done using MKS/Unix-style commands; hopefully they are available on your system.)

Sorted output of only the last dump of the exception table for Server1_Exception.log can be produced with the following command:
tail -n<n> <servername>_exception.log | sort -k4n
where n is the number of exceptions in the exception table plus 1 (use the index value to determine this value), and <servername> is the name of the server.
Note: The sort key needs a little work for servers which have rolled the data.
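
For instance, for the dmgr_exception.log shown earlier, whose last table dump holds six entries, that works out roughly as follows (here sorting lexically on the time field, which only lines up neatly when all entries fall on the same day):

[ffdc]$ tail -n7 dmgr_exception.log | sort -k4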

2. Incident Stream
The incident stream contains more details about exceptions which have been encountered during the running of the server. Depending on the configuration of the property files, the content of the incident streams will vary.

With the default settings of the property files, the incident stream will not contain information for exceptions which were encountered during the start of the server (due to Level=1 in ffdcStart.properties). But once the server reaches the ready state, any new exception which is encountered will be processed.

The incident stream files should be used in conjunction with the exception log. The values which are contained in the exception log will, in most instances, have a corresponding entry in the incident stream. The relationship between the exception log and the incident stream is the hash code which is made up of the exception type, the source Id, and the probe Id. The simplest way to look at this information is to use the grep command. The information is not all contained on the same line, so if you need to know the exact file containing the values, you can use a compound grep command, as shown below.
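
A compound search might look like the following sketch, which narrows the incident files down to those mentioning both the exception name and the probe Id from the earlier example (the values are illustrative; substitute your own):

[ffdc]$ grep -l "java.io.IOException" dmgr_*.txt | xargs grep -l "probeid = 165"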

File content: The file contains information on exceptions which have been encountered. Each exception entry contains information which corresponds to the information (exception name, source Id and probe Id) contained in the exception table (documented above). If the catch block of the exception is in a non-static method, the contents of the 'this' pointer are also recorded. In some instances, if there is a diagnostic module (DM) which corresponds to the current point of execution, the DM will write information about the state of the object to the incident stream.

The call stack will also be written to the incident stream.

In some instances, an exception encountered while the server is running will not produce a call stack. This is because the exception was first encountered during the start of the server and, since the server started successfully, the exception is considered to be a normal-path exception. All of the exceptions can be seen by looking either at all of the runtime exceptions or at all of the exceptions.

Websphere FAQ : Clustering, Deployment Manager & Node Agent

Posted by Sagar Patil

How do the Deployment Manager and Node Agent work together? Does the deployment manager actively send messages to the node agent, or does the node agent send messages to the deployment manager?

It’s JMX-based. I suppose it’s pull, because the time interval is specified per Node Agent. When the Node Agent is started, it will discover the Deployment Manager, so it should be pretty direct.

How can I adjust the time interval between the node agent and the deployment manager?

http://publib.boulder.ibm.com/infoce…chservice.html

Will my application run even if the deployment manager is down?

Yes

Do cluster members maintain the IPs of other cluster members and communicate with each other for session persistence, including session backup and retrieval, without the help of the deployment manager?

No, routing to cluster members is done by the plugin at the HTTP server. It has nothing to do with the deployment manager.

Who participates in clustering under WebSphere?

For WAS 5: the DM and Node Agent participate in high availability under WAS 5. A WebSphere cluster is capable of session maintenance, which means that if a cluster member goes down, the sessions it was handling can be forwarded to other cluster members which have knowledge of those sessions. Hence the DM is very much needed: if the deployment manager does not know about cluster member B, to which a request is forwarded, cluster member B will not have knowledge of the request's session and might handle the request incorrectly. So the deployment manager should know about its cluster members as soon as possible in order to distribute the session state.

But under WAS 6, the DMGR can be stopped; only administrative tasks become unavailable while the DMGR is down.

What happens if a cluster member is down? The session backup on this cluster member is lost for sure. How long does it take for the owner to detect the failure of the backup server? How can I adjust this time interval?

If a cluster member is down, the plug-in routes to another member. As there is no in-memory copy of the session there, the new server will attempt to retrieve it, either from the database or from a replica, depending on how you configured it. There are two options for storing session details: memory-to-memory replication and database persistence.

What happens if a cluster member comes back up again? How long does it take for other cluster members to detect it, and how can I adjust this time?

Other cluster members don’t care.

If the deployment manager is down, how can the session backup information be transferred to other cluster members? Is a web server necessary?

Not necessarily, no. The WAS plugin will automatically fail over requests for a down server to some other server in the same cluster. It’s up to you to configure session persistence so that the session is available to any other server in the cluster. You can use peer clustering or client server clustering, as described here: http://publib.boulder.ibm.com/infoce…ry2memory.html

How is cluster session replication done?

Cluster members do not know about each other; only the web server and the plug-in know all the cluster members. When the web server receives a new request, it forwards it to a cluster member WAS. For session backup replication (assuming only one replica is used), the web server or the plug-in will choose another cluster member WAS, and the session state is copied and updated on that cluster member at the same time.
When the owner of the session goes down, the web server forwards requests belonging to that session to a new cluster member. If the new cluster member is the backup cluster member, then it already has knowledge of the session; if not, the new cluster member will find out from the web server or the plug-in which cluster member holds the session backup, and finally get the session state from that backup cluster member.

Note: Session replication is not done by the plug-in; that is done by the data replication service.

Websphere Basics

Posted by Sagar Patil

Basic Definitions:

WebSphere architectures contain one or more computer systems, which are referred to in WebSphere terminology as nodes. Nodes exist within a WebSphere cell. A WebSphere cell can contain one node on which all software components are installed or multiple nodes on which the software components are distributed.

A typical WebSphere cell contains software components that may be installed on one node or distributed over multiple nodes for scalability and reliability purposes. These include the following:

  • A Web server that provides HTTP services
  • A database server for storing application data
  • WebSphere Application Server (WAS) V5


HTTP server
The HTTP server, more typically known as the Web server, accepts page requests from Web browsers and returns Web page content to Web browsers using the HTTP protocol. Requests for Java servlets and JavaServer Pages (JSPs) are passed by the Web server to WebSphere for execution. WebSphere executes the servlet or JSP and returns the response to the Web server, which in turn forwards the response to the Web browser for display.

WebSphere V5 supports numerous Web servers such as Apache, Microsoft IIS, Netscape and Domino. However, WebSphere has the tightest integration with Domino because IBM provides single sign-on capabilities between WebSphere and Domino.

WebSphere plug-in
The WebSphere plug-in integrates with the HTTP Server and directs requests for WebSphere resources (servlets, JSPs, etc.) to the embedded HTTP server (see below). The WebSphere plug-in uses a configuration file called plugin-cfg.xml to determine which requests are to be handled by WebSphere. As applications are deployed to the WebSphere configuration, this file must be regenerated (typically using the Administration Console) and distributed to all Web servers, so that they know which URL requests to direct to WebSphere. This is one of the few manual processes that a WebSphere administrator must do to maintain the WebSphere environment.
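
As a sketch of that manual step, the plug-in configuration can also be regenerated from the command line with the GenPluginCfg script and then copied to the web server machine. The paths below are illustrative and depend on where WebSphere and the plug-in are installed:

/opt/IBM/WebSphere/AppServer/bin/GenPluginCfg.sh
scp /opt/IBM/WebSphere/AppServer/profiles/Profile01/dmgr/config/cells/plugin-cfg.xml webhost:/opt/IBM/WebSphere/Plugins/config/webserver1/plugin-cfg.xml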

Application server
The application server provides a run-time environment for J2EE applications (supporting servlets, JSPs, Enterprise JavaBeans, etc.). A node can have one or more application server processes. Each application server runs in its own runtime environment called a Java Virtual Machine (JVM). The JVM provides complete isolation (crash protection) for individual application servers.

Application database
WebSphere applications such as IBM’s commerce and portal products, as well as applications you create yourself, use a relational database for storing configuration information and data. WebSphere V5 ships with the Cloudscape database and supports a wide range of database products, including the following:

  • IBM DB2
  • Informix
  • Oracle
  • SQL Server
  • Sybase

Administration console
The administration console provides a Web-based interface for managing a WebSphere cell from a central location. The administration console can be used to change the configuration of any node within the cell at run-time. Configuration changes are automatically distributed to other nodes in the cell.

Cell:

A Cell is a virtual unit that is built of a Deployment Manager and one or more nodes.


The Deployment Manager is a process (in fact, a special WebSphere instance) responsible for managing the installation and maintenance of applications, connection pools and other resources related to a J2EE environment. It is also responsible for centralizing the user repositories used for application and WebSphere authentication and authorization.

The Deployment Manager communicates with the Nodes through another special WebSphere process, the Node Agent.

The Node is another virtual unit that is built of a Node Agent and one or more Server instances.

The Node Agent is the process responsible for spawning and killing server processes and also for configuration synchronization between the Deployment Manager and the Node. Extra care must be taken when changing security configurations for the cell: since communication between the Deployment Manager and the Node Agent is encrypted and secured when security is enabled, the Node Agent needs to have its configuration fully resynchronized when impacting changes are made to the cell security configuration.

Servers are regular Java processes responsible for serving J2EE requests (e.g., serving JSP/JSF pages, serving EJB calls, consuming JMS queues, etc.).

Clusters

Finally, Clusters are also virtual units that group Servers, so that resources added to the Cluster are propagated to every Server that makes up the cluster; in practice this usually affects more than a single Node.


Tuning Java virtual Machines

Posted by Sagar Patil

The application server, being a Java process, requires a Java virtual machine (JVM) to run, and to support the Java applications running on it. As part of configuring an application server, you can fine-tune settings that enhance system use of the JVM.

A JVM provides the runtime execution environment for Java based applications. WebSphere Application Server is a combination of a JVM runtime environment and a Java based server runtime. It can run on JVMs from different JVM providers. To determine the JVM provider on which your Application Server is running, issue the java -fullversion command from within your WebSphere Application Server app_server_root/java/bin directory. You can also check the SystemOut.log from one of your servers. When an application server starts, WebSphere Application Server writes information about the JVM, including the JVM provider information, into this log file.
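
For example, on the Linux layout used elsewhere in this post (the paths, profile and server names are illustrative, and the exact wording of the JVM banner line in SystemOut.log varies between releases):

cd /opt/IBM/WebSphere/AppServer/java/bin
./java -fullversion
grep "Java version" /opt/IBM/WebSphere/AppServer/profiles/Profile01/Node/logs/member1/SystemOut.log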

From a JVM tuning perspective, there are two main types of JVMs:

* IBM JVMs
* Sun HotSpot based JVMs, including Sun HotSpot JVM on Solaris and HP’s JVM for HP-UX

Even though JVM tuning is dependent on the JVM provider, general tuning concepts apply to all JVMs. These general concepts include:

* Compiler tuning. All JVMs use Just In Time (JIT) compilers to compile Java byte codes into native instructions during server run-time.
* Java memory or heap tuning. The JVM memory management function, or garbage collection, provides one of the biggest opportunities for improving JVM performance.
* Class loading tuning.

Procedure

* Optimize the startup performance and the runtime performance

In some environments, it is more important to optimize the startup performance of your WebSphere Application Server rather than the runtime performance. In other environments, it is more important to optimize the runtime performance. By default, IBM JVMs are optimized for runtime performance while HotSpot based JVMs are optimized for startup performance.

The Java JIT compiler has a big impact on whether startup or runtime performance is optimized. The initial optimization level used by the compiler influences the length of time it takes to compile a class method and the length of time it takes to start the server. For faster startups, you can reduce the initial optimization level that the compiler uses. This means that the runtime performance of your applications may be degraded because the class methods are now compiled at a lower optimization level.

It is hard to provide a specific runtime performance impact statement because the compilers might recompile class methods during runtime execution based upon the compiler’s determination that recompiling might provide better performance. Ultimately, the duration of the application is a major influence on the amount of runtime degradation that occurs. Short running applications have a higher probability of having their methods recompiled. Long-running applications are less likely to have their methods recompiled. The default settings for IBM JVMs use a high optimization level for the initial compiles. You can use the following IBM JVM option if you need to change this behavior:

-Xquickstart This setting causes the IBM JVM to use a lower optimization level for class method compiles, which provides for faster server startup at the expense of runtime performance. If this parameter is not specified, the IBM JVM defaults to starting with a high initial optimization level for compiles, which provides faster runtime performance at the expense of slower server starts.

Default: High initial compiler optimizations level
Recommended: High initial compiler optimizations level
Usage: -Xquickstart can provide faster server startup times.

JVMs based on Sun’s Hotspot technology initially compile class methods with a low optimization level. Use the following JVM option to change this behavior:

-server JVMs based on Sun’s HotSpot technology initially compile class methods with a low optimization level. These JVMs use a simple JIT compiler and an optimizing JIT compiler. Normally the simple JIT compiler is used; however, you can use this option to make the optimizing compiler the one that is used. This change can significantly increase the performance of the server, but the server takes longer to warm up when the optimizing compiler is used.

Default: Simple compiler
Recommended: Optimizing compiler
Usage: -server enables the optimizing compiler.

* Set the heap size The following command line parameters are useful for setting the heap size.

* -Xms This setting controls the initial size of the Java heap. Properly tuning this parameter reduces the overhead of garbage collection, improving server response time and throughput. For some applications, the default setting for this option might be too low, resulting in a high number of minor garbage collections.

Default: 256 MB
Recommended: Workload specific, but higher than the default.
Usage: -Xms256m sets the initial heap size to 256 megabytes

* -Xmx This setting controls the maximum size of the Java heap. Properly tuning this parameter can reduce the overhead of garbage collection, improving server response time and throughput. For some applications, the default setting for this option is too low, resulting in a high number of minor garbage collections.

Default: 512 MB
Recommended: Workload specific, but higher than the default.
Usage: -Xmx512m sets the maximum heap size to 512 megabytes

* -Xlp This setting can be used with the IBM JVM to allocate the heap using large pages. However, if you use this setting your operating system must be configured to support large pages. Using large pages can reduce the CPU overhead needed to keep track of heap memory and might also allow the creation of a larger heap.

See Tuning operating systems for more information about tuning your operating system.

* The size you should specify for the heap depends on your heap usage over time. In cases where the heap size changes frequently, you might improve performance if you specify the same value for the Xms and Xmx parameters.
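
As a minimal illustration of that last point, fixing the heap simply means passing matching values in the server's generic JVM arguments (the 512 MB figure is purely illustrative; size the heap from your own verbose GC data):

-Xms512m -Xmx512m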

* Tune the IBM JVM’s garbage collector.

Use the Java -X option to see the list of memory options.

* -Xgcpolicy Setting gcpolicy to optthruput disables concurrent mark. If you do not have pause time problems, denoted by erratic application response times, you should get the best throughput using this option. Setting gcpolicy to optavgpause enables concurrent mark with its default values. This setting alleviates erratic application response times caused by normal garbage collection. However, this option might decrease overall throughput.

Default: optthruput
Recommended: optthruput
Usage: -Xgcpolicy:optthruput

* -Xnoclassgc By default the JVM unloads a class from memory when there are no live instances of that class left, but this can degrade performance. Turning off class garbage collection eliminates the overhead of loading and unloading the same class multiple times.

If a class is no longer needed, the space that it occupies on the heap is normally used for the creation of new objects. However, if you have an application that handles requests by creating a new instance of a class and if requests for that application come in at random times, it is possible that when the previous requester is finished, the normal class garbage collection will clean up this class by freeing the heap space it occupied, only to have to re-instantiate the class when the next request comes along. In this situation you might want to use this option to disable the garbage collection of classes.

Avoid trouble: This option should be used with caution if your application creates classes dynamically or uses reflection, because for this type of application the use of this option can lead to native memory exhaustion and cause the JVM to throw an out-of-memory exception. When this option is used, if you have to redeploy an application, you should always restart the application server to clear the classes and static data from the previous version of the application.

Default: class garbage collection enabled
Recommended: class garbage collection disabled
Usage: -Xnoclassgc disables class garbage collection

* Tune the Sun JVM’s garbage collector

On the Solaris platform, the WebSphere Application Server runs on the Sun Hotspot JVM rather than the IBM JVM. It is important to use the correct tuning parameters with the Sun JVM in order to utilize its performance optimizing features.

The Sun HotSpot JVM relies on generational garbage collection to achieve optimum performance. The following command line parameters are useful for tuning garbage collection.

* -XX:SurvivorRatio The Java heap is divided into a section for old (long lived) objects and a section for young objects. The section for young objects is further subdivided into the section where new objects are allocated (eden) and the section where new objects that are still in use survive their first few garbage collections before being promoted to old objects (survivor space). Survivor Ratio is the ratio of eden to survivor space in the young object section of the heap. Increasing this setting optimizes the JVM for applications with high object creation and low object preservation. Since WebSphere Application Server generates more medium and long lived objects than other applications, this setting should be lowered from the default.

Default: 32
Recommended: 16
Usage: -XX:SurvivorRatio=16

* -XX:PermSize The section of the heap reserved for the permanent generation holds all of the reflective data for the JVM. This size should be increased to optimize the performance of applications that dynamically load and unload a lot of classes. Setting this to a value of 128MB eliminates the overhead of increasing this part of the heap.

Recommended: 128 MB
Usage: -XX:PermSize=128m sets perm size to 128 megabytes.

* -Xmn This setting controls how much space the young generation is allowed to consume on the heap. Properly tuning this parameter can reduce the overhead of garbage collection, improving server response time and throughput. The default setting for this is typically too low, resulting in a high number of minor garbage collections. Setting it too high can cause the JVM to perform only major (or full) garbage collections. These usually take several seconds and are extremely detrimental to the overall performance of your server. You must keep this setting below half of the overall heap size to avoid this situation.

Default: 2228224 bytes
Recommended: Approximately 1/4 of the total heap size
Usage: -Xmn256m sets the size to 256 megabytes.

* -Xnoclassgc By default the JVM unloads a class from memory when there are no live instances of that class left, but this can degrade performance. Turning off class garbage collection eliminates the overhead of loading and unloading the same class multiple times.

If a class is no longer needed, the space that it occupies on the heap is normally used for the creation of new objects. However, if you have an application that handles requests by creating a new instance of a class and if requests for that application come in at random times, it is possible that when the previous requester is finished, the normal class garbage collection will clean up this class by freeing the heap space it occupied, only to have to re-instantiate the class when the next request comes along. In this situation you might want to use this option to disable the garbage collection of classes.

Default: class garbage collection enabled
Recommended: class garbage collection disabled
Usage: -Xnoclassgc disables class garbage collection

* Tune the HP JVM’s garbage collector

The HP JVM relies on generational garbage collection to achieve optimum performance. The following command line parameters are useful for tuning garbage collection.

* -Xoptgc This setting optimizes the JVM for applications with many short-lived objects. If this parameter is not specified, the JVM usually does a major (full) garbage collection. Full garbage collections can take several seconds and can significantly degrade server performance.

Default: off
Recommended: on
Usage: -Xoptgc enables optimized garbage collection.

* -XX:SurvivorRatio The Java heap is divided into a section for old (long lived) objects and a section for young objects. The section for young objects is further subdivided into the section where new objects are allocated (eden) and the section where new objects that are still in use survive their first few garbage collections before being promoted to old objects (survivor space). Survivor Ratio is the ratio of eden to survivor space in the young object section of the heap. Increasing this setting optimizes the JVM for applications with high object creation and low object preservation. Since WebSphere Application Server generates more medium and long lived objects than other applications, this setting should be lowered from the default.

Default: 32
Recommended: 16
Usage: -XX:SurvivorRatio=16

* -XX:PermSize The section of the heap reserved for the permanent generation holds all of the reflective data for the JVM. This size should be increased to optimize the performance of applications which dynamically load and unload a lot of classes. Specifying a value of 128 megabytes eliminates the overhead of increasing this part of the heap.

Default: 0
Recommended: 128 megabytes
Usage: -XX:PermSize=128m sets PermSize to 128 megabytes

* -XX:+ForceMmapReserved By default the Java heap is allocated “lazy swap.” This saves swap space by allocating pages of memory as needed, but this also forces the use of 4KB pages. This allocation of memory can spread the heap across hundreds of thousands of pages in large heap systems. This command disables “lazy swap” and allows the operating system to use larger memory pages, thereby optimizing access to the memory making up the Java heap.

Default: off
Recommended: on
Usage: -XX:+ForceMmapReserved will disable “lazy swap”.

* -Xmn This setting controls how much space the young generation is allowed to consume on the heap. Properly tuning this parameter can reduce the overhead of garbage collection, improving server response time and throughput. The default setting for this is typically too low, resulting in a high number of minor garbage collections.

Default: No default
Recommended: Approximately 1/4 of the total heap size
Usage: -Xmn256m sets the size to 256 megabytes

* Virtual Page Size Setting the Java virtual machine instruction and data page sizes to 64MB can improve performance.

Default: 4MB
Recommended: 64MB
Usage: Use the following command. The command output provides the current operating system characteristics of the process executable:

chatr +pi64M +pd64M /opt/WebSphere/AppServer/java/bin/PA_RISC2.0/native_threads/java

* -Xnoclassgc By default the JVM unloads a class from memory when there are no live instances of that class left, but this can degrade performance. Turning off class garbage collection eliminates the overhead of loading and unloading the same class multiple times.

If a class is no longer needed, the space that it occupies on the heap is normally used for the creation of new objects. However, if you have an application that handles requests by creating a new instance of a class and if requests for that application come in at random times, it is possible that when the previous requester is finished, the normal class garbage collection will clean up this class by freeing the heap space it occupied, only to have to re-instantiate the class when the next request comes along. In this situation you might want to use this option to disable the garbage collection of classes.

Default: class garbage collection enabled
Recommended: class garbage collection disabled
Usage: -Xnoclassgc disables class garbage collection

WebSphere Java 100% CPU usage: MustGather Information

Posted by Sagar Patil

Perform the following setup instructions:
1.    Follow instructions to enable verbosegc in WebSphere Application Server

2.    Run the following command:

top -d %delaytime% -c -b > top.log

Where delaytime is the number of seconds to delay. This must be 60 seconds or greater, depending on how soon the failure is expected.
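
For example, to sample every 60 seconds in batch mode and leave the collection running in the background (the interval is illustrative; lengthen it if the failure takes longer to reproduce):

top -b -d 60 -c > top.log &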

3.    Run the following:

netstat -an > netstat1.out

4.    Run the following:

kill -3 [PID_of_problem_JVM]

The kill -3 command will create javacore*.txt files, or the javacore data will be written to the native stderr file of the Application Server.
Note: If you are not able to determine which JVM process is experiencing the high CPU usage, then you should issue kill -3 against the PID of each of the JVM processes.
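
One common way to narrow this down (a sketch assuming a Linux procps top with per-thread support; values in angle brackets are placeholders) is to list the JVM PIDs, watch per-thread CPU, and convert the hottest thread id to hex so it can be matched against the native thread ids in the javacore:

ps -ef | grep '[j]ava'                 # list the running JVM processes and their PIDs
top -H -p <PID_of_problem_JVM>         # per-thread CPU usage; note the busiest thread ids (TIDs)
printf "0x%x\n" <TID>                  # hex form of the TID, to match against the javacore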

5.    Wait two minutes. Run the following:

kill -3 [PID_of_problem_JVM]

6.    Wait two minutes. Run the following:

netstat -an > netstat2.out

7.    If you are unable to generate javacore files, then perform the following:

kill -11 [PID_of_problem_JVM]

The kill -11 command will terminate the JVM process, produce a core file, and possibly a javacore.

Collect following documentation for uploading to IBM support:

– All Application Server JVM log files for the Application Server experiencing the problem.
– All administrative server log files from the machine experiencing the problem.
– WebSphere Application Server plug-in log
– Web server error and access log
– top.log, ps_eLf.log and vmstat.log
– javacore*.*
– All netstat*.out files
– /var/log/messages
– Indicate which JVM, such as the Application Server or administrative server, is experiencing the problem.

Http Error Codes

Posted by Sagar Patil

Have you ever wondered what the codes listed in the Apache access_log mean?

172.21.90.160 - - [05/Jan/2010:08:15:42 +0000] "GET HTTP/1.1" 200 554
172.21.90.160 - - [05/Jan/2010:08:15:42 +0000] "GET HTTP/1.1" 304
172.21.90.160 - - [05/Jan/2010:08:15:42 +0000] "GET HTTP/1.1" 304

Websphere FAQ/Terms Explained

Posted by Sagar Patil
  • What is a Node?

WebSphere architectures contain one or more computer systems, which are referred to in WebSphere terminology as nodes. Nodes exist within a WebSphere cell. A WebSphere cell can contain one node on which all software components are installed or multiple nodes on which the software components are distributed.

  • What is a Node agent?

Node agents are administrative agents that route administrative requests to servers.

A node agent is a server that runs on every host computer system that participates in the WebSphere Application Server Network Deployment product. It is purely an administrative agent and is not involved in application serving functions. A node agent also hosts other important administrative functions such as file transfer services, configuration synchronization, and performance monitoring.

  • What is a cluster?

A cluster is a set of application servers that are managed together and participate in workload management. In a distributed environment, you can cluster any of the WebSphere Everyplace Access server components. Each server is installed on a separate node and managed by a Network Deployment node. Cluster members have identical application components, but can be sized differently in terms of weight, heap size, and other environmental factors. The weighted load balancing policies are defined and controlled by the web server plug-in. Starting or stopping the cluster automatically starts or stops all the cluster members, and changes to the application are propagated to all server members in the cluster. The servers in clusters share the same database.

  • What is Work Load Management?

Workload management optimizes the distribution of work-processing tasks in the WebSphere Application Server environment. Incoming work requests are distributed to the application servers and other objects that can most effectively process the requests. Workload management also provides failover when servers are not available.

Workload management is most effective when used in systems that contain servers on multiple machines. It also can be used in systems that contain multiple servers on a single, high-capacity machine. In either case, it enables the system to make the most effective use of the available computing resources.

  • What is “dumpNameSpace.sh”?

WebSphere Application Server provides a command line utility for creating a JNDI namespace extract of a dedicated application server. This utility is named dumpNameSpace.sh
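
A typical invocation might look like the sketch below; the profile path, bootstrap port and output file are illustrative (the default bootstrap port differs between stand-alone servers, node agents and the deployment manager):

/opt/IBM/WebSphere/AppServer/profiles/Profile01/dmgr/bin/dumpNameSpace.sh -host localhost -port 9809 > namespace_dump.txt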

  • What is JNDI?

The Java Naming and Directory Interface (JNDI) is part of the Java platform, providing applications based on Java technology with a unified interface to multiple naming and directory services. JNDI works in concert with other technologies in the Java Platform, Enterprise Edition (Java EE) to organize and locate components in a distributed computing environment.

  • What is a JVM?

A Java Virtual Machine (JVM) is a virtual machine that interprets and executes Java bytecode. This code is most often generated by Java language compilers, although the JVM can also be targeted by compilers of other languages. JVMs may be developed by other companies as long as they adhere to the JVM standard published by Sun.

The JVM is a crucial component of the Java Platform. The availability of JVMs on many types of hardware and software platforms enables Java to function both as middleware and a platform in its own right. Hence the expression “Write once, run anywhere.” The use of the same bytecode for all platforms allows Java to be described as “Compile once, run anywhere”, as opposed to “Write once, compile anywhere”, which describes cross-platform compiled languages.

  • What is JAVA?

Java is an object-oriented language similar to C++, but simplified to eliminate language features that cause common programming errors. Java source code files (files with a .java extension) are compiled into a format called bytecode (files with a .class extension), which can then be executed by a Java interpreter. Compiled Java code can run on most computers because Java interpreters and runtime environments, known as Java Virtual Machines (VMs), exist for most operating systems, including UNIX, the Macintosh OS, and Windows. Bytecode can also be converted directly into machine language instructions by a just-in-time compiler (JIT).

Java is a general purpose programming language with a number of features that make the language well suited for use on the World Wide Web.

  • What is JVM Heap Size?

The Java heap is where the objects of a Java program live. It is a repository for live objects, dead objects, and free memory. The JVM heap size determines how often and how long the VM spends collecting garbage.

  • What is Tivoli Performance Viewer (TPV)?

Tivoli Performance Viewer (TPV) enables administrators and programmers to monitor the overall health of WebSphere Application Server from within the administrative console. By viewing TPV data, administrators can determine which part of the application and configuration settings to change in order to improve performance. For example, you can view the servlet summary reports, enterprise beans, and Enterprise JavaBeans (EJB) methods in order to determine what part of the application to focus on. Then, you can sort these tables to determine which of these resources has the highest response time. Focus on improving the configuration for those application resources taking the longest response time.

  • What does syncNode.sh do?

The syncNode command forces a configuration synchronization to occur between the node and the deployment manager for the cell in which the node is configured. Only use this command when you cannot run the node agent because the node configuration does not match the cell configuration.
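
A sketch of its usage, with the node agent stopped first (the deployment manager host, SOAP port 8879 and credentials are illustrative):

/opt/IBM/WebSphere/AppServer/profiles/Profile01/Node/bin/stopNode.sh
/opt/IBM/WebSphere/AppServer/profiles/Profile01/Node/bin/syncNode.sh dmgrhost 8879 -username wasadmin -password secret
/opt/IBM/WebSphere/AppServer/profiles/Profile01/Node/bin/startNode.sh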

  • What does addNode.sh do?

The addNode command incorporates a WebSphere Application Server installation into a cell. You must run this command from the install_root/bin directory of a WebSphere Application Server installation. Depending on the size and location of the new node you incorporate into the cell, this command can take a few minutes to complete.
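
For example (the deployment manager host, SOAP connector port and credentials are illustrative):

/opt/IBM/WebSphere/AppServer/profiles/Profile01/Node/bin/addNode.sh dmgrhost 8879 -includeapps -username wasadmin -password secret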

  • What does removeNode.sh do?

The removeNode command returns a node from a Network Deployment distributed administration cell to a base WebSphere Application Server installation.

The removeNode command only removes the node-specific configuration from the cell. This command does not uninstall any applications that were installed as the result of executing an addNode command. Such applications can subsequently deploy on additional servers in the Network Deployment cell. As a consequence, an addNode command with the -includeapps option executed after a removeNode command does not move the applications into the cell because they already exist from the first addNode command. The resulting application servers added on the node do not contain any applications. To deal with this situation, add the node and use the deployment manager to manage the applications. Add the applications to the servers on the node after it is incorporated into the cell.

The removeNode command does the following:

· Stops all of the running server processes in the node, including the node agent process.

· Removes the configuration documents for the node from the cell repository by sending commands to the deployment manager.

· Copies the original application server cell configuration into the active configuration.

  • What does backupConfig.sh do?

Use the backupConfig utility to back up your WebSphere Application Server V5.0 node configuration to a file. By default, all servers on the node stop before the backup is made so that partially synchronized information is not saved. You can run this utility by issuing a command from the bin directory of a WebSphere Application Server installation or a network deployment installation.

  • What does restoreConfig.sh do?

The restoreConfig command is a simple utility to restore the configuration of your node after backing up the configuration using the backupConfig command. By default, all servers on the node stop before the configuration restores so that a node synchronization does not occur during the restoration. If the configuration directory already exists, it will be renamed before the restoration occurs.
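
A backup-and-restore round trip might look like the sketch below (the archive name, profile path and the -nostop flag are illustrative; -nostop skips stopping the servers before the backup is taken):

/opt/IBM/WebSphere/AppServer/profiles/Profile01/dmgr/bin/backupConfig.sh /tmp/WebSphereConfig_2011-08-14.zip -nostop
/opt/IBM/WebSphere/AppServer/profiles/Profile01/dmgr/bin/restoreConfig.sh /tmp/WebSphereConfig_2011-08-14.zip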

  • What does WASPreUpgrade.sh do?

The WASPreUpgrade command is a migration tool that saves the configuration and applications of a previous version or release so that they can be applied to the new-version WebSphere Application Server node or Network Deployment node.

  • What does WASPostUpgrade.sh do?

The WASPostUpgrade command is a migration tool for adding the configuration and applications of a previous version or release to the current WebSphere Application Server node. The configuration includes migrated applications. The tool adds all migrated applications into the install_root/installedApps directory of the current product. The tool locates the saved configuration that the WASPreUpgrade tool saves through a parameter you use to specify the backup directory.

  • What is a thread?

A thread can be loosely defined as a separate stream of execution that takes place simultaneously with and independently of everything else that might be happening. A thread is like a classic program that starts at point A and executes until it reaches point B. It does not have an event loop. A thread runs independently of anything else happening in the computer. Without threads an entire program can be held up by one CPU intensive task or one infinite loop, intentional or otherwise. With threads the other tasks that don’t get stuck in the loop can continue processing without waiting for the stuck task to finish.

It turns out that implementing threading is harder than implementing multitasking in an operating system. The reason it’s relatively easy to implement multitasking is that individual programs are isolated from each other. Individual threads, however, are not.

  • What is multithreading?

The ability of an operating system to execute different parts of a program, called threads, simultaneously is called multithreading.

  • What is initial context?

All naming operations are relative to a context. The initial context implements the Context interface and provides the starting point for resolution of names.

  • What is Web Container thread pool size?

This value limits the number of requests that your application server can process concurrently.

  • What are the algorithms used for Work Load Management?

WebSphere supports four specified load-balancing policies:

  1. Round robin
  2. Random
  3. Round robin prefer local
  4. Random prefer local.

As implied, the last two always select a stub that connects to a local clone, if one is available. The first two apply a round robin or random selection algorithm without consideration of the location of the associated clone.

  • How do we increase the JVM heap size?

1. In the administrative console, go to Servers > Application Servers > server_name > Process Definition > Java Virtual Machine.

2. It can also be increased in the startServer.sh file
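
Alternatively, the same change can be scripted. A minimal wsadmin sketch (the server name, heap values and profile path are illustrative assumptions; run it against the deployment manager profile and restart the server afterwards):

/opt/IBM/WebSphere/AppServer/profiles/Profile01/dmgr/bin/wsadmin.sh -lang jython -c "jvm = AdminConfig.list('JavaVirtualMachine', AdminConfig.getid('/Server:server_member1/')); AdminConfig.modify(jvm, [['initialHeapSize', '512'], ['maximumHeapSize', '1024']]); AdminConfig.save()"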

  • What are JNDI names and how are they related to the Application Server?

Java Naming and Directory Interface (JNDI) is a naming service that allows a program or container to register a “popular” name that is bound to an object. When a program wishes to lookup a name, it contacts the naming server, through a well-known port, and provides the public name, perhaps with authorization information. The naming server returns the object or, in some cases, a stub that can be used to interact with the object.

A JNDI server runs as part of the WebSphere environment. When the container is initiated, it loads the various applications deployed within it. Part of that process involves opening their respective EAR files and, in turn, their JAR files. For EJB container objects, such as entity and session EJBs, they are registered with the local JNDI server. Their public names are derived from their deployment descriptors or as a default value based on the class name. Once the EJB container is operational, the objects within it will be available through the associated JNDI server.

  • What are maximum beans in a pool?

When an EJB has been in the free pool for the number of seconds specified in Idle Timeout, and the total number of beans in the free pool approaches the maximum beans in free pool specified in this field, idle beans are removed from the free pool.

  • What is plugin-cfg.xml?

The console operation generates a cell-level plug-in configuration file containing entries for all application servers and clusters on all machines in the cell. The Web server plug-in is installed on the Web server machine, but the configuration file (plugin-cfg.xml) for the plug-in is generated via WebSphere and then moved to the appropriate location on the Web server.

  • How do we debug an error if the customer complains that he is not able to see the login page?
  1. Traceroute from the client to the server (ping the web server)
  2. Check the system statistics on the machine to which the particular request was sent (top, iostat, vmstat, netstat)
  3. Check the server logs (SystemErr.log, activity.log)
  4. If the logs do not show any information, take thread dumps three times within 5 minutes (see the example after this list)
  5. Take a heap dump (use a heap analyzer; it contains all the objects live in the JVM)
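
For step 4, a simple way to take three thread dumps spread across roughly five minutes (the grep pattern for the server name is an illustrative assumption; the javacore files typically land in the profile directory, or the data goes to native_stderr.log):

PID=$(ps -ef | grep '[j]ava' | grep server_member1 | awk '{print $2}')
for i in 1 2 3; do kill -3 $PID; sleep 90; done
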
  • What is a WebSphere Plugin?

The WebSphere plug-in integrates with the HTTP Server and directs requests for WebSphere resources (servlets, JSPs, etc.) to the embedded HTTP server. The WebSphere plug-in uses a configuration file called plugin-cfg.xml to determine which requests are to be handled by WebSphere. As applications are deployed to the WebSphere configuration, this file must be regenerated (typically using the Administration Console) and distributed to all Web servers, so that they know which URL requests to direct to WebSphere. This is one of the few manual processes that a WebSphere administrator must do to maintain the WebSphere environment.

  • Compare WAS 4.0 / 5.0 / 6.0 ?

Specialities of WAS 5.0 over 4.0

  1. Full J2EE 1.3 support and support for the Java SDK 1.3.1.
  2. A new administrative model based on the Java Management Extensions(JMX) framework and an XML-based configuration repository. A relational database is no longer required for the configuration repository.
  3. A Web-based administrative console provides a GUI interface for administration.
  4. An interface based on the Bean Scripting Framework, wsadmin, has been provided for administration through scripts. In V5, the only supported scripting language is JACL.
  5. Clustering, workload management, and single point of administration in a multi-node single cell topology
  6. SOAP/JMS support.
  7. Support for the Jython scripting language in wsadmin
  8. In a Network Deployment environment, the application server can now start without the Node Agent running.

Specialities of v6.0 over v5.0

J2EE 1.4 support

  1. WebSphere Application Server V6 provides full support for the J2EE 1.4 specification, which requires a certain set of specifications to be supported. These are EJB 2.1, JMS 1.1, JCA 1.5, Servlet 2.4, and JSP 2.0. WebSphere Application Server V6 also provides support for J2EE 1.2 and 1.3 to ease migration.
  2. Mixed cell support enables you to migrate an existing WebSphere Application Server V5 Network Deployment environment to V6. By migrating the Deployment Manager to V6 as a first step, you can continue to run V5 application servers until you can migrate each of them.
  3. Configuration archiving allows you to create a complete or partial archive of an existing WebSphere Application Server configuration. This archive is portable and can be used to create new configurations based on the archive.
  4. Defining a WebSphere Application Server V6 instance by a profile allows you to easily configure multiple runtimes with one set of install libraries. After installing the product, you create the runtime environment by building profiles.
  5. Defining a generic server as an application server instance in the administration tools allows you to associate it with a non-WebSphere server or process that is needed to support the application server environment.
  6. By defining external Web servers as managed servers, you can start and stop the Web server and automatically push the plug-in configuration to it. This requires a node agent to be installed on the machine and is typically used when the Web server is behind a firewall.
  7. You can also define a Web server as an unmanaged server for placement outside the firewall. This allows you to create custom plug-ins for the Web server, but you must manually move the plug-in configuration to the Web server machine.
  8. As a special case, you can define the IBM HTTP server as an unmanaged server, but treat it as a managed server. This does not require a node agent because the commands are sent directly to the IBM HTTP server administration process.
  9. You can use node groups to define a boundary for server cluster formation. With WebSphere Application Server V6, you can now have nodes in cells with different capabilities, for example, a cell can contain both WebSphere Application Server on distributed systems and on z/OS. Node groups are created to group nodes of similar capability together to allow validation during system administration processes.
  10. The Tivoli Performance View monitor has also been integrated into the administrative console.
  11. Enhanced Enterprise Archive (EAR) files can now be built using Rational Application Developer or the Application Server Toolkit. The Enhanced EAR contains bindings and server configuration settings previously done at deployment time. This allows developers to predefine known runtime settings and can speed up deployment.
  12. Fine grain application update capabilities allow you to make small delta changes to applications without doing a full application update and restart.
  13. WebSphere Rapid Deployment provides the ability for developers to use annotation-based programming. This is a step forward in the automation of application development and deployment.
  14. Failover of stateful session EJBs is now possible. Each EJB container provides a method for stateful session beans to fail over to other servers. This feature uses the same memory to memory replication provided by the data replication services component used for HTTP session persistence.
  • What if the thread is stuck?

You can find out whether a thread is stuck by taking a thread dump. If a thread is stuck, you should take a heap dump to find out exactly on which object the thread is stuck, and let the developers know about the objects causing the problem.

WebSphere Log Files / Logging performance data with TPV

Posted by Sagar Patil

Plug-In Logs
The web server HTTP plug-in creates a log, by default named http_plugin.log, placed under PLUGIN_HOME/logs/.
The plug-in writes error messages into this log. The attribute which controls this is the
<Log> element in plugin-cfg.xml.
For example:
<Log LogLevel="Error" Name="/opt/IBM/WebSphere/Plugins/logs/http_plugin.log" />

To enable tracing, set the Log LogLevel to "Trace":
<Log LogLevel="Trace" Name="/opt/IBM/WebSphere/Plugins/logs/http_plugin.log" />

JVM logs
$ find /opt/IBM/WebSphere/ -name SystemOut.log -print
/opt/IBM/WebSphere/AppServer/profiles/%Profile%/Node/logs/member1/SystemOut.log
/opt/IBM/WebSphere/AppServer/profiles/%Profile%/Node/logs/member2/SystemOut.log
/opt/IBM/WebSphere/AppServer/profiles/%Profile%/Node/logs/nodeagent/SystemOut.log
/opt/IBM/WebSphere/AppServer/profiles/%Profile%/Dmgr/logs/Dmgr/SystemOut.log

NodeAgent Process Log
/opt/IBM/WebSphere/AppServer/profiles/%Profile%/Node/logs/nodeagent/native_stdout.log
/opt/IBM/WebSphere/AppServer/profiles/%Profile%/Node/logs/nodeagent/native_stderr.log

IBM service logs – activity.log
/opt/IBM/WebSphere/AppServer/profiles/%Profile%/Node/logs/activity.log
/opt/IBM/WebSphere/AppServer/profiles/%Profile%/Dmgr/logs/activity.log

——————————————————————————–

Enabling automated heap dump generation (DON'T DO THIS IN PRODUCTION)

  1. Click Servers > Application servers in the administrative console navigation tree.
  2. Click server_name > Performance and Diagnostic Advisor Configuration.
  3. Click the Runtime tab.
  4. Select the Enable automatic heap dump collection check box.
  5. Click OK.

Locating and analyzing heap dumps
Go to the profile root directory (profile_root\myProfile in this example). IBM heap dump files are usually named heapdump*.phd.

Download and use tools such as IBM HeapAnalyzer or Dump Analyzer to analyze them.
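A heap dump can also be requested on demand through wsadmin instead of the advisor; a minimal sketch, assuming a server named server1 (adjust the ObjectName query to your own process and node names):

cd /opt/IBM/WebSphere/AppServer/profiles/Profile01/Dmgr/bin
./wsadmin.sh -lang jython -c "jvm = AdminControl.completeObjectName('type=JVM,process=server1,*'); print AdminControl.invoke(jvm, 'generateHeapDump')"

The call should print the path of the generated heapdump*.phd file, which is written to the profile root by default.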

——————————————————————————–

Logging performance data with TPV(Tivoli Performance Viewer)

    1. Click Monitoring and Tuning > Performance Viewer > Current Activity > server_name > Settings > Log in the console navigation tree. To see the Log link on the Tivoli Performance Viewer page, expand the Settings node of the TPV navigation tree on the left side of the page. After clicking Log, the TPV log settings are displayed on the right side of the page.
    2. Click Start Logging when viewing summary reports or performance modules.
    3. When finished, click Stop Logging. Once started, logging stops when the logging duration expires, Stop Logging is clicked, or the file size and number limits are reached. To adjust the settings, see step 1.

    By default, the log files are stored in the profile_root/logs/tpv directory on the node on which the server is running. TPV automatically compresses the log file when it finishes writing to it, to conserve space. Each .zip file contains a single log file, which has the same name as the .zip file.

  • View logs.
    1. Click Monitoring and Tuning > Performance Viewer > View Logs in the console navigation tree.
    2. Select a log file to view using either of the following options:
      Explicit Path to Log File
      Choose a log file from the machine on which the browser is currently running. Use this option if you have created a log file and transferred it to your system. Click Browse to open a file browser on the local machine and select the log file to upload.
      Server File
      Specify the path of a log file on the server. In a stand-alone application server environment, type in the path to the log file. The profile_root\logs\tpv directory is the default on a Windows system.
    3. Click View Log. The log is displayed with log control buttons at the top of the view.
    4. Adjust the log view as needed. Buttons available for log view adjustment are described below. By default, the data replays at the Refresh Rate specified in the user settings. You can choose one of the Fast Forward modes to play data at a rate faster than the refresh rate.
      Rewind Returns to the beginning of the log file.
      Stop Stops the log at its current location.
      Play Begins playing the log from its current location.
      Fast Forward Loads the next data point every three (3) seconds.
      Fast Forward 2 Loads ten data points every three (3) seconds.

    You can view multiple logs at a time. After a log has been loaded, return to the View Logs panel to see a list of available logs. At this point, you can load another log.

    TPV automatically compresses the log file when it finishes writing it. The log does not need to be decompressed before viewing, though TPV can also view logs that have been decompressed.

Websphere Hierarchy of Configuration Documents

Posted by Sagar Patil

Hierarchy of directories of documents

In a Network Deployment environment, changes made to configuration documents in the cell repository are automatically replicated to the same configuration documents stored on nodes throughout the cell.

At the top of the hierarchy is the cells directory. It holds a subdirectory for each cell. The names of the cell subdirectories match the names of the cells. For example, a cell named cell1 has its configuration documents in the subdirectory cell1.

An example file structure is shown after the following description:

  • Each cell subdirectory has the following files and subdirectories:
  • The cell.xml file provides configuration data for the cell. Files such as security.xml, virtualhosts.xml, resources.xml, and variables.xml provide configuration data that applies across every node in the cell.
  • Each cluster subdirectory holds a cluster.xml file, which provides configuration data specifically for that cluster.
  • The nodes subdirectory holds a subdirectory for each node in the cell.
    The names of the nodes subdirectories match the names of the nodes. Each node subdirectory holds files such as variables.xml and resources.xml, which provide configuration data that applies across the node.
  • Each server subdirectory holds a server.xml file, which provides configuration data specific to that server.
    Server subdirectories might hold files such as security.xml, resources.xml and variables.xml, which provide configuration data that applies only to the server. The configurations specified in these server documents override the configurations specified in containing cell and node documents having the same name.
  • The applications subdirectory holds a subdirectory for each application deployed in the cell.
    The names of the applications subdirectories match the names of the deployed applications. Each deployed application subdirectory holds a deployment.xml file that contains configuration data on the application deployment. Each subdirectory also holds a META-INF subdirectory that holds a Java 2 Platform, Enterprise Edition (J2EE) application deployment descriptor file, as well as IBM deployment extensions files and bindings files. Deployed application subdirectories also hold subdirectories for all .war and entity bean .jar files in the application. Binary files such as .jar files are also part of the configuration structure.
cells
  cell1
     cell.xml resources.xml virtualhosts.xml variables.xml security.xml
     nodes
        nodeX
           node.xml variables.xml resources.xml serverindex.xml
           serverA
              server.xml variables.xml
           nodeAgent
              server.xml variables.xml
        nodeY
           node.xml variables.xml resources.xml serverindex.xml
     applications
        sampleApp1
           deployment.xml
           META-INF
              application.xml ibm-application-ext.xml ibm-application-bnd.xml
        sampleApp2
           deployment.xml
           META-INF
              application.xml ibm-application-ext.xml ibm-application-bnd.xml

	

WebSphere Configuration Files

Posted by Sagar Patil

Application server configuration files define the available application servers, their configurations, and their contents.

A configuration repository stores configuration data.  Configuration repositories reside in the config subdirectory of the profile root directory.

A cell-level repository stores configuration data for the entire cell and is managed by a file repository service that runs in the deployment manager.

The deployment manager and each node have their own repositories. A node-level repository stores configuration data that is needed by processes on that node and is accessed by the node agent and application servers on that node.

The master repository is made up of the following XML configuration files (an X marks files that require manual editing):

  • admin-authz.xml (config/cells/cell_name/): Define a role for administrative operation authorization.
  • app.policy (config/cells/cell_name/nodes/node_name/): Define security permissions for application code. (X)
  • cell.xml (config/cells/cell_name/): Identify a cell.
  • cluster.xml (config/cells/cell_name/clusters/cluster_name/): Identify a cluster and its members and weights. This file is only available with the Network Deployment product.
  • deployment.xml (config/cells/cell_name/applications/application_name/): Configure application deployment settings such as target servers and application-specific server configuration.
  • filter.policy (config/cells/cell_name/): Specify security permissions to be filtered out of other policy files. (X)
  • integral-jms-authorizations.xml (config/cells/cell_name/): Provide security configuration data for the integrated messaging system. (X)
  • library.policy (config/cells/cell_name/nodes/node_name/): Define security permissions for shared library code. (X)
  • multibroker.xml (config/cells/cell_name/): Configure a data replication message broker.
  • namestore.xml (config/cells/cell_name/): Provide persistent name binding data. (X)
  • naming-authz.xml (config/cells/cell_name/): Define roles for a naming operation authorization. (X)
  • node.xml (config/cells/cell_name/nodes/node_name/): Identify a node.
  • pmirm.xml (config/cells/cell_name/): Configure PMI request metrics. (X)
  • resources.xml (config/cells/cell_name/; config/cells/cell_name/nodes/node_name/; config/cells/cell_name/nodes/node_name/servers/server_name/): Define operating environment resources, including JDBC, JMS, JavaMail, URL, JCA resource providers and factories.
  • security.xml (config/cells/cell_name/): Configure security, including all user ID and password data.
  • server.xml (config/cells/cell_name/nodes/node_name/servers/server_name/): Identify a server and its components.
  • serverindex.xml (config/cells/cell_name/nodes/node_name/): Specify communication ports used on a specific node.
  • spi.policy (config/cells/cell_name/nodes/node_name/): Define security permissions for service provider libraries such as resource providers. (X)
  • variables.xml (config/cells/cell_name/; config/cells/cell_name/nodes/node_name/; config/cells/cell_name/nodes/node_name/servers/server_name/): Configure variables used to parameterize any part of the configuration settings.
  • virtualhosts.xml (config/cells/cell_name/): Configure a virtual host and its MIME types.

You can edit configuration files using the administrative console, wsadmin scripting commands, programming, or by editing a configuration file directly.
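For example, a quick wsadmin (Jython) session can be used to inspect what the repository holds for a given configuration type; the snippet below lists the VariableMap objects backed by variables.xml and is only a sketch (the profile path is illustrative):

cd /opt/IBM/WebSphere/AppServer/profiles/Profile01/Dmgr/bin
./wsadmin.sh -lang jython -c "print AdminConfig.list('VariableMap')"
# after any scripted change, call AdminConfig.save() so it is written back to the master repository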

Administrating WebSphere : Start/Stop/Status, Kill Sessions

Posted by Sagar Patil

Check Server Status:

$WAS_HOME/profiles/Profile01/Node01/bin/serverStatus.sh server1 (JVM name)
./serverStatus.sh -all (returns the status of all defined servers)
./serverStatus.sh -trace (produces the serverStatus.log file)

Stop WebSphere :

ps -eaf | grep java (list the running WebSphere JVMs)
e.g. Dmgr, nodeagent, prod_server_member2, prod_server_member4

sudo -u %was_user% -i
cd /opt/IBM/WebSphere/AppServer/profiles/Profile01/Nodes/bin
./stopServer.sh prod_server_member2
./stopServer.sh prod_server_member4
./stopNode.sh

cd /opt/IBM/WebSphere/AppServer/profiles/Profile01/Dmgr/bin
./stopServer.sh Dmgr

Start WebSphere : first confirm that no WebSphere java processes are still running

cd /opt/IBM/WebSphere/AppServer/profiles/Profile01/Dmgr/bin
./startServer.sh Dmgr

cd /opt/IBM/WebSphere/AppServer/profiles/Profile01/Nodes/bin
./startNode.sh
./startServer.sh prod_server_member2
./startServer.sh prod_server_member4

ps -eaf | grep java (confirm that the expected java processes are now running)
e.g. Dmgr, nodeagent, prod_server_member2, prod_server_member4

To get just the PIDs (for killing processes), use: $ ps -ef | grep java | grep dev_server_member2 | awk '{print $2}'
8867
8880
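If stopServer.sh cannot shut a member down cleanly, the same pipeline can feed kill directly; use this only as a last resort (sketch, member name illustrative):

ps -ef | grep java | grep dev_server_member2 | grep -v grep | awk '{print $2}' | xargs kill -9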

Using apachectl commands to start IBM HTTP Server

$sudo -u was61 -i
/opt/IBM/HTTPServer/bin/apachectl start
/opt/IBM/HTTPServer/bin/apachectl stop

$sudo /opt/IBM/HTTPServer/bin/apachectl start

To list all the JVM processes that WebSphere is running:

1. ps -ef | grep <path to websphere java>
ps -ef | grep /<was_root>/java
wasadm 18445 18436 0 13:48:33 pts/9 0:00 grep <was_root>/java
wasadm 9959 1 0 Feb 18 ? 4:17 <was_root>/java/bin/java -XX:MaxPermSize=256m -Dwas.status.socket=49743 -X
wasadm 9927 1 0 Feb 18 ? 5:10 <was_root>/java/bin/java -XX:MaxPermSize=256m -Dwas.status.socket=49611 -X

2. pgrep -f -u $WASUSER $ENVPATH

Log File locations

Httpd Logs : /opt/IBM/HTTPServer/logs

WAS logs :
[was61@bin]$ ls -l $WAS_HOME/profiles/Profile01/Node01/logs
total 2092
-rw-r--r-- 1 was61 web 2097152 Jan 12 10:28 activity.log
drwxr-xr-x 2 was61 web    4096 Jan 12 09:55 dev_server_member1
drwxr-xr-x 2 was61 web    4096 Jan 11 16:20 dev_server_member2
drwxr-xr-x 2 was61 web   28672 Jan 12 10:10 ffdc
drwxr-xr-x 2 was61 web    4096 Jan  8 16:31 nodeagent

$WAS_HOME/profiles/Profile01/Node01/logs/nodeagent/:
total 1116
-rw-r--r-- 1 was61 web      83 Jan 11 16:57 monitor.state
-rw-r--r-- 1 was61 web   11534 Jan 11 16:57 native_stderr.log
-rw-r--r-- 1 was61 web   11400 Jan  8 16:31 native_stdout.log
-rw-r--r-- 1 was61 web       0 Jan  8 16:31 nodeagent.pid
-rw-r--r-- 1 was61 web   12288 Jan  8 16:31 startServer.log
-rw-r--r-- 1 was61 web   15491 Jan  8 16:24 stopServer.log
-rw-r--r-- 1 was61 web   11400 Jan  8 16:31 SystemErr.log
-rw-r--r-- 1 was61 web 1048525 Jan  8 15:26 SystemOut_10.01.08_15.27.29.log
-rw-r--r-- 1 was61 web   17125 Jan 11 16:57 SystemOut.log

Troubleshooting OAS Instance

Posted by Sagar Patil

How to start Oracle Application Server?

Starting an application server instance

a) Log in to the server with the oracle user ID
b) Set the environment
c) cd $ORACLE_HOME/opmn/bin
d) opmnctl startall (to start)

How to stop Oracle Application Server?

Stopping an application server instance

a) Log in to the server with the oracle user ID
b) Set the environment
c) cd $ORACLE_HOME/opmn/bin
d) opmnctl stopall (to stop)

How to query the status of an Oracle Application Server?

Status of an application server instance

a) Log in to the server with the oracle user ID
b) Set the environment
c) cd $ORACLE_HOME/opmn/bin
d) opmnctl status

How to start/stop/restart a component of Oracle Application Server?

a) Log in to the server with the oracle user ID
b) Set the environment
c) cd $ORACLE_HOME/opmn/bin
d) opmnctl startproc ias-component=HTTP_Server (to start HTTP server)
e) opmnctl stopproc ias-component=HTTP_Server (to stop HTTP server)
f) opmnctl restartproc ias-component=HTTP_Server (to restart HTTP server)

Where are the important OAS configuration files?

Oracle Application Server comprises various components which together provide a 3-tier application deployment environment. The following are the important configuration files for the various components.

Oracle HTTP Server

$ORACLE_HOME/Apache/Apache/conf/httpd.conf

Oracle AS Forms Services

$ORACLE_HOME/forms/server/forms.conf
$ORACLE_HOME/forms/server/formsweb.cfg
$ORACLE_HOME/forms/server/default.env
$ORACLE_HOME/forms/java/oracle/forms/registry/Registry.dat

Oracle AS Report Services

$ORACLE_HOME/reports/conf/rep_hostname.conf

Oracle AS Web Cache

$ORACLE_HOME/Webcache/Webcache.xml

How to deploy forms to Oracle Application Server?

a) FTP the forms to Oracle Application Server

b) Create the following directory structure under /opt/forms/ORACLE_SID

i. Config – for any application configuration files

ii. Logfiles – for log files

iii. Source – for application source code (*.fmb, *.pll, *.mmb)

iv. Executables – for compiled application code (*.fmx, *.plx etc)

v. Images – for application images

vi. Reports – for reports

c) Compile forms (*.fmb), libraries (*.pll) and menus (*.mmb)

(Refer to how to compile a form for more information)

d) Modify the default.env, reports.sh and rep_hostname.conf application server configuration files (see the sketch after step e)

$ORACLE_HOME/forms/server/default.env – add the /opt/forms/ORACLE_SID/executables directory to FORMS_PATH and CLASS_PATH, and add the /opt/forms/ORACLE_SID/images directory to FORMS_PATH.

$ORACLE_HOME/bin/reports.sh – add the /opt/forms/ORACLE_SID/reports directory to REPORTS_PATH.

$ORACLE_HOME/reports/conf/rep_hostname.conf – add the /opt/forms/ORACLE_SID/reports directory to the sourceDir XML attribute.

e) Restart the application server.
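The edits from step d) might end up looking roughly like the excerpts below; the existing values and the exact variable names in your own files should be checked first, so treat these as illustrative:

# $ORACLE_HOME/forms/server/default.env (excerpt)
FORMS_PATH=<existing_forms_path>:/opt/forms/ORACLE_SID/executables:/opt/forms/ORACLE_SID/images
# append /opt/forms/ORACLE_SID/executables to the classpath variable in the same way

# $ORACLE_HOME/bin/reports.sh (excerpt)
REPORTS_PATH=/opt/forms/ORACLE_SID/reports:$REPORTS_PATH; export REPORTS_PATH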

How to compile a form on UNIX environment?

I have written six shell scripts to compile the various forms and reports components; a sketch of compile.sh appears after the steps below.

Compile.sh – To compile a form

Compileall.sh – To compile all forms

Compilepll.sh – To compile a library

Compileallpll.sh – To compile all libraries

Compilemenu.sh – To compile a menu

Compileallmenu.sh – To compile all menus

a) Set the DISPLAY variable (e.g. DISPLAY=172.17.21.167:0.0; export DISPLAY)

b) cd /opt/forms/ORACLE_SID/forms

c) ./compile.sh form_name (to compile a form) or ./compileall.sh (to compile all forms)
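A hypothetical sketch of what compile.sh might contain, modelled on the frmcmp.sh invocation shown later for webutil.pll; the directory layout follows the structure above and the credentials are placeholders:

#!/bin/sh
# compile.sh <form_name>: compile one form from the source directory to the executables directory
# (sketch only; adjust paths and credential handling to your environment)
FORM=$1
frmcmp.sh module=/opt/forms/$ORACLE_SID/source/$FORM.fmb \
          userid=$APP_USER/$APP_PASS@$ORACLE_SID \
          module_type=form compile_all=yes \
          output_file=/opt/forms/$ORACLE_SID/executables/$FORM.fmx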

What are the important log files on Oracle Application Server?

Trace and diagnostic information is distributed across a number of different places in the Oracle Application Server 3-tier architecture.

Jinitiator Trace File

The first place to look for trace information is on the client machine. Jinitiator is a Java Virtual Machine (JVM) responsible for running the Forms Java client. Setting the following Java runtime parameters in the Jinitiator Control Panel will enable Jinitiator tracing.

-Djavaplugin.trace=true

-Djavaplugin.trace.option=basic|net|security|ext|liveconnect.

This will produce a trace file, Jinitiator<version>.trace in the User Home Directory. For example, if the User Home Directory is C:\Documents and Settings\username and Jinitiator version1.3.1.9 is used, the file C:\Documents and Settings\username\jinitiator1319.trace will be produced.

Oracle HTTP Server Access_log and Error_log

 

When a Forms application is running on the Web, the HTTP Server is responsible for transmitting metadata between the Forms client and the Forms runtime as standard HTTP(S) messages.

The HTTP Server is enabled by default to log basic information about all HTTP requests to the access_log file and to report errors in the error_log file. It is possible to increase the logging level on the HTTP Server using the LogLevel directive in httpd.conf. However, the default basic logging information is sufficient to troubleshoot the Forms Listener Servlet, so detailed logging is not required.
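For reference, these are standard Apache directives; an illustrative httpd.conf excerpt with typical default values (not a tuning recommendation):

# $ORACLE_HOME/Apache/Apache/conf/httpd.conf (excerpt)
ErrorLog logs/error_log
LogLevel warn        # raise to info or debug temporarily while troubleshooting, then revert
CustomLog logs/access_log common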

The Oracle HTTP Server access_log and error_log are found in the following directory on the application server:

$ORACLE_HOME/Apache/Apache/logs

OC4J_BI_Forms Application.log

Finally, the Forms Listener Servlet is responsible for marshalling the communication between the individual Forms clients and their corresponding Forms runtime processes. Basic diagnostic information for this component is found in the application.log when the Forms Listener Servlet is used in the default mode.

Detailed diagnostic information can be obtained from the application.log by enabling debugging in the Forms Listener Servlet as follows:

– append the /debug option to serverURL in the formsweb.cfg file

e.g. serverURL=/forms/lservlet/debug

The application.log is found in the following locations:

$ORACLE_HOME/j2ee/OC4J_BI_Forms/formsapp/OC4J_BI_Forms_default_island_1/

How to configure WEBUTIL on Oracle Application Server?

a) Download webutil version 1.0.6 from the following location http://www.oracle.com/technology/software/products/forms/index.html

b) After you have downloaded and extracted the zip file, the WebUtil directory structure has these folders:

doc
java
server
webutil

c) Webutil.pll, Webutil.olb and the create_webutil_db.sql script live in the Forms directory. When you extract the WebUtil zip file, its contents are extracted into the ORACLE_HOME\forms folder, and all files are copied to the respective subdirectories under ORACLE_HOME\forms.

d) Create a 'webutil' user in your database and execute the create_webutil_db.sql script. This script creates the WEBUTIL_DB package; create a public synonym for this package to make it available to everyone.

e) Add the following parameters to the $ORACLE_HOME/forms/server/formsweb.cfg file:

webUtilArchive=/forms/java/frmwebutil.jar, /forms/java/jacob.jar

baseHTMLjinitiator=webutiljini.htm

baseHTMLjpi=webutiljpi.htm

baseHTML=webutilbase.htm

f) Recompile webutil.pll using the following command:

g) frmcmp.sh module=$ORACLE_HOME/forms/webutil.pll userid=<webutil/webutil@dbconnect> module_type=library compile_all=yes

h) Do the following to deploy the jacob.jar file:

1. Download http://prdownloads.sourceforge.net/jacob-project/jacob_18.zip

2. From the JACOB Zip file, extract both jacob.dll and jacob.jar into the ORACLE_HOME\forms\WebUtil and ORACLE_HOME\forms\java directories respectively.

3. You need to sign both frmwebutil.jar and jacob.jar with the same digital certificate. This is a one-time operation which allows your end users to trust that the JACOB routines can access client-side resources. If you do not have an existing signing certificate, or if you are not sure how to sign JAR files, a script in the forms\WebUtil directory can help you; it is called sign_webutil.sh.

To sign the Jar files:

Check that ORACLE_HOME/jdk/bin is in the path. If it is not, add it.

Issue sign_webutil.sh $ORACLE_HOME/forms/java/frmwebutil.jar to sign the frmwebutil.jar file and sign_webutil.sh $ORACLE_HOME/forms/java/jacob.jar to sign jacob.jar, as shown below.
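Put together, the signing step looks something like this (directory name case may differ on your install):

export PATH=$ORACLE_HOME/jdk/bin:$PATH
cd $ORACLE_HOME/forms/webutil
./sign_webutil.sh $ORACLE_HOME/forms/java/frmwebutil.jar
./sign_webutil.sh $ORACLE_HOME/forms/java/jacob.jar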
