Tuesday, September 28, 2010

Tips to reduce WebLogic Startup Time

This is going to be a living document on how to reduce WebLogic startup times.

1) Apply -D_Offline_FileDataArchive=true

The WebLogic Diagnostic Framework (WLDF) creates a diagnostic store (.DAT) file that it uses for several purposes. The initial size of the store file is about 1 MB.

Here are the details of the suggested parameter: -D_Offline_FileDataArchive=true

This is an undocumented and unsupported system property, delivered by Oracle Engineering, that disables the WLDF file archive indexing task. It turns off both incremental and full indexing of the WLDF archive; if the property is not applied properly, you will see a File Indexer timer kick in every 30 seconds.

The parameter is discussed in bug# 8101514, where a customer encountered high CPU usage because of this indexing; applying it has resolved a couple of production issues as well.

The indexing takes place whenever we create a WLDF module.

URL for WLDF:
http://download-llnw.oracle.com/docs/cd/E13222_01/wls/docs100/wldf_configuring/index.html
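
As a sketch of how to apply the flag: if your servers are started by the Node Manager, you can append it to the server start arguments with WLST (Jython). The admin URL, credentials, and server name "myserver" below are placeholders, and the same approach works for the flag in tip 2:

# WLST sketch - assumes a Node Manager-started server named "myserver";
# URL and credentials are placeholders
connect('weblogic', 'password', 't3://localhost:7001')
edit()
startEdit()
cd('/Servers/myserver/ServerStart/myserver')
args = get('Arguments')
if args is None:
    args = ''
set('Arguments', args + ' -D_Offline_FileDataArchive=true')
save()
activate()
disconnect()

If you start servers from the command line instead, appending the flag to JAVA_OPTIONS in the domain's setDomainEnv.sh (or .cmd) achieves the same thing.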

2) The WLW page-flow event reporter may be creating many entries in the diagnostic store

Apply -Dcom.bea.wlw.netui.disableInstrumentation=true (it can be added the same way as the flag in tip 1).

3) Disable JDBC profiling (under JDBC ---> datasource_name ---> Diagnostics, uncheck all the options there).

If JDBC profiling is turned on, it inserts profiling data into the diagnostic store, which can cause it to grow.

URL for JDBC profiling:
http://download-llnw.oracle.com/docs/cd/E13222_01/wls/docs100/ConsoleHelp/pagehelp/JDBCjdbcdatasourcesjdbcdatasourceconfigdiagnosticstitle.html

4) Disable harvesting (change the value of Profile Harvest Frequency Seconds to zero; the WLST sketch below covers tips 3 and 4).
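
If you prefer scripting over the console, here is a hedged WLST sketch for tips 3 and 4. The data source name "MyDataSource" and connection details are placeholders, and it assumes the profiling attributes live on the data source's JDBCConnectionPoolParams bean:

# WLST sketch - turn off JDBC profiling and harvesting for one data source
connect('weblogic', 'password', 't3://localhost:7001')
edit()
startEdit()
cd('/JDBCSystemResources/MyDataSource/JDBCResource/MyDataSource/JDBCConnectionPoolParams/MyDataSource')
set('ProfileType', 0)                      # 0 = no profiling options selected (tip 3)
set('ProfileHarvestFrequencySeconds', 0)   # 0 disables harvesting (tip 4)
save()
activate()
disconnect()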

5) Try setting the "synchronous-write-policy" parameter in the server configuration to "Cache-Flush". This bypasses the "direct I/O" code in the file store.

Servers ---> Configuration ---> Services ---> Default Store: here you can set the Synchronous Write Policy to Cache-Flush (see the WLST sketch after the policy descriptions below).

Synchronous Write Policy:

The disk write policy that determines how the file store writes data to disk.

This policy also affects the JMS file store's performance, scalability, and reliability. The valid policy options are:

* Direct-Write

File store writes are written directly to disk. This policy is supported on Solaris, HP-UX, and Windows. On Windows systems, this option generally performs faster than the Cache-Flush option.

* Disabled

Transactions are complete as soon as their writes are cached in memory, instead of waiting for the writes to successfully reach the disk. This policy is the fastest but the least reliable (that is, the least transactionally safe). It can be more than 100 times faster than the other policies, but power outages or operating system failures can cause lost and/or duplicate messages.

* Cache-Flush

Transactions cannot complete until all of their writes have been flushed down to disk. This policy is reliable and scales well as the number of simultaneous users increases.
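
As a sketch, the same change can be scripted with WLST; the server name "myserver" and connection details below are placeholders:

# WLST sketch - set the default file store's write policy to Cache-Flush
connect('weblogic', 'password', 't3://localhost:7001')
edit()
startEdit()
cd('/Servers/myserver/DefaultFileStore/myserver')
set('SynchronousWritePolicy', 'Cache-Flush')
save()
activate()
disconnect()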

6) If you are using JRockit under GNU/Linux, you might be hitting a JVM entropy bug. Inside the JRockit installation directory, in jre/lib/security, edit java.security and locate the line that defines the random source:

securerandom.source=file:/dev/urandom

Comment it out and replace it with:

securerandom.source=file:/dev/./urandom
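
The extra /./ works around the JVM treating the literal string file:/dev/urandom specially and falling back to the blocking /dev/random, which can stall startup while the machine gathers entropy. If you would rather not edit java.security, passing -Djava.security.egd=file:/dev/./urandom on the command line should have the same effect.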

7) You can also break the JMS servers out of the instance and move them to a separate managed server, since WLS has to read the JMS file store on every start. Alternatively, compact the JMS file store regularly to keep its size small.
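
For the compaction, recent WebLogic releases ship a store administration utility (java weblogic.store.Admin) whose compact command can shrink a file store while the server is shut down; check whether your version provides it before relying on it.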
