Tuesday, December 21, 2010

Identify the Java process hogging all the CPU

1- Identify the Java process hogging all the CPU

To do that, use the top or prstat command to get that process ID

$ top
last pid: 25837; load averages: 0.06, 0.15, 0.36 16:14:18
73 processes: 72 sleeping, 1 on cpu
CPU states: 0.0% idle, 99.1% user, 0.9% kernel, 0.0% iowait, 0.0% swap
Memory: 4096M real, 1191M free, 2154M swap in use, 1306M swap free

PID USERNAME LWP PRI NICE SIZE RES STATE TIME CPU COMMAND
1864 rdadmin 53 59 0 184M 88M sleep 7:38 98.10% java
27794 iwui 39 59 0 222M 186M sleep 882:38 0.42% java
27708 iwui 71 59 0 52M 34M sleep 82:40 0.06% java
24025 root 29 29 10 232M 146M sleep 716:03 0.04% java
23449 rdadmin 1 59 0 2616K 2112K sleep 0:00 0.02% bash
287 root 13 59 0 6736K 5560K sleep 110:25 0.01% picld

$ prstat
PID USERNAME SIZE RSS STATE PRI NICE TIME CPU PROCESS/NLWP
1864 rdadmin 184M 88M sleep 59 0 0:07:39 98.10% java/53
27794 iwui 222M 186M sleep 59 0 14:42:46 0.9% java/39
26030 rdadmin 4592K 4200K cpu2 49 0 0:00:00 0.1% prstat/1
27708 iwui 52M 34M sleep 59 0 1:22:41 0.1% java/71
24025 root 232M 146M sleep 29 10 11:57:18 0.1% java/29
287 root 6736K 5560K sleep 59 0 1:50:25 0.0% picld/13
15686 oemadmin 107M 93M sleep 29 10 1:13:19 0.0% emagent/5
15675 oemadmin 8096K 7432K sleep 29 10 0:12:34 0.0% perl/1

How much CPU the process shows depends on the number of CPUs in the server and the number of CPU-hogging threads. For example, if only one thread is hogging CPU on a two-processor server, the process associated with that thread will consume about 50%.

2- Identify the thread ID that are hogging all CPU

To do that, we will use the prstat command with a special switch:

bash-2.03# prstat -L
PID USERNAME SIZE RSS STATE PRI NICE TIME CPU PROCESS/NLWP
1864 rdadmin 985M 577M cpu2 0 2 0:58.24 22% java/84
1864 rdadmin 985M 577M cpu0 0 2 0:51.38 22% java/138
1864 rdadmin 985M 577M cpu3 0 2 0:50.23 22% java/122
1864 rdadmin 985M 577M run 0 2 0:58.41 21% java/145
26430 tomcat 877M 456M sleep 52 2 0:09.27 1.2% java/9
26399 tomcat 985M 577M sleep 52 2 0:11.09 0.9% java/9
26461 tomcat 481M 324M sleep 52 2 0:06.36 0.7% java/9
26430 tomcat 877M 456M sleep 47 2 0:00.11 0.5% java/174
26430 tomcat 877M 456M sleep 28 2 0:00.09 0.3% java/93
26492 tomcat 460M 300M sleep 52 2 0:06.01 0.2% java/9
26430 tomcat 877M 456M sleep 22 2 0:00.14 0.2% java/129
26461 tomcat 481M 324M sleep 32 2 0:00.04 0.2% java/104

Here you can see four threads (LWPs 84, 138, 122, and 145 of process 1864) that are using all of the machine's CPU.

3- Get the real thread ID

For a reason unknown to me, prstat shows only the LWP ID (lightweight process ID); see the end of each prstat output line. We now need to find the real thread ID. Generally they are the same, but they may also differ. To do that, use the pstack command. Be sure to be logged in as the process owner or root. Grep the pstack output to keep only the significant lines:

$ pstack 1864 | grep "lwp#" | grep "84"
----------------- lwp# 84 / thread# 83 --------------------

The real thread ID is 83.

Now convert that number to hex: 83 decimal is 0x53.
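A quick way to do the conversion from the shell:

$ printf '%x\n' 83
53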

4- Get the Java process stack real time

Now comes the interesting part: getting the Java process stack for all threads. To do that, you need to send a special signal to the process using the kill command (don't worry, this will not kill the process, it only forces the JVM to dump its thread stacks to the default output log).

We will run the two commands together, to be sure to catch the output in the log. So first change your current directory to the Tomcat log directory (for a Tomcat process).

$ cd /applications/tomcat/servers/tomcat1_rd1/logs

Then send signal 3 (SIGQUIT) to the Java process to force it to dump its thread stacks, and tail the log.

$ kill -3 1864 ; tail -f catalina.out
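If your JDK ships the jstack tool (Java 5 and later), you can get the same thread dump printed straight to your terminal instead of digging it out of the log; an alternative to the kill -3 approach:

$ jstack 1864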

Wait maybe one to three seconds and you will see a large block of output appended to the log.

Look through the stack dump and locate the nid=0x### entry matching your thread ID in hex (0x53 in the example):


"TP-Processor24" daemon prio=10 tid=0x00a0b180 nid=0x53 runnable [0xb3e80000..0xb3e81a28]
at java.util.ArrayList.indexOf(ArrayList.java:221)
at java.util.ArrayList.contains(ArrayList.java:202)
at com.company.content.business.ContentHandler.getRandomContent(ContentHandler.java:248)
at com.company.rd.action.CycleContentTypeAction.execute(CycleContentTypeAction.java:100)
at org.apache.struts.action.RequestProcessor.processActionPerform(RequestProcessor.java:480)
at org.apache.struts.action.RequestProcessor.process(RequestProcessor.java:274)
at org.apache.struts.action.ActionServlet.process(ActionServlet.java:1420)
at org.apache.struts.action.ActionServlet.doGet(ActionServlet.java:502)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:689)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:802)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:252)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:173)
at com.readersdigest.content.filter.MemberContentBundleStatusFilter.doFilter(MemberContentBundleStatusFilter.java:319)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:202)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:173)
at com.readersdigest.rd.filter.GenricRequestValuesFilter.doFilter(GenricRequestValuesFilter.java:94)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:202)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:173)
at com.readersdigest.rd.filter.BreadcrumbFilter.doFilter(BreadcrumbFilter.java:107)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:202)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:173)
at com.readersdigest.rd.filter.RdAutoLoginFilter.doFilter(RdAutoLoginFilter.java:71)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:202)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:173)
at com.readersdigest.servlet.filters.GrabTrackingParametersFilter.doFilter(GrabTrackingParametersFilter.java:81)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:202)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:173)
at com.readersdigest.servlet.filters.HibernateFilter.doFilter(HibernateFilter.java:78)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:202)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:173)
at com.readersdigest.servlet.filters.SetCharacterEncodingFilter.doFilter(SetCharacterEncodingFilter.java:134)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:202)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:173)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:213)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:178)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:126)
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:105)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:107)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:148)
at org.apache.jk.server.JkCoyoteHandler.invoke(JkCoyoteHandler.java:307)
at org.apache.jk.common.HandlerRequest.invoke(HandlerRequest.java:385)
at org.apache.jk.common.ChannelSocket.invoke(ChannelSocket.java:748)
at org.apache.jk.common.ChannelSocket.processConnection(ChannelSocket.java:678)
at org.apache.jk.common.SocketConnection.runIt(ChannelSocket.java:871)
at org.apache.tomcat.util.threads.ThreadPool$ControlRunnable.run(ThreadPool.java:684)
at java.lang.Thread.run(Thread.java:595)


You now have a live thread stack to investigate for possible infinite loops or whatever else is hogging all the CPU.

Wednesday, September 29, 2010

Sorting by date in Unix

For dates in DD/Mon/YYYY form, sort by year (numeric), then month (by name), then day:

sort -t'/' -k3n -k2M -k1n
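For example:

$ printf '03/Jan/2011\n15/Dec/2010\n01/Dec/2010\n' | sort -t'/' -k3n -k2M -k1n
01/Dec/2010
15/Dec/2010
03/Jan/2011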

Passing Environment Variables in Sudo

As normal users, we have to use privilege-gaining tools such as sudo to run programs as the root user when required. With super-user rights in hand, are we still working in the same environment set by the normal user?

The reason I ask this question is that I’m losing the environment variables set in ~/.bashrc when running a bash script with sudo. For instance, the variable JAVA_HOME is set and exported in ~/.bashrc:

export JAVA_HOME=/usr/lib/jvm/java-6-sun

A bash script called example.sh will use this variable to find the Java home directory.

#!/bin/bash

echo $JAVA_HOME

When example.sh is invoked by the current user by typing

./example.sh

the output is exactly the expected one:

/usr/lib/jvm/java-6-sun

However, when trying with

sudo ./example.sh

we get nothing. The variable is lost or, in other words, not inherited.


Solution:

Use the following sudo option, which preserves the caller's environment (note that the security policy may refuse this if sudoers does not permit it):

sudo -E ./example.sh

You could also use an interactive subshell, but that's not a recommended option. I will provide it anyway.

Invoke an interactive subshell:

#!/bin/bash -i

echo $JAVA_HOME
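Another option that avoids -E altogether is to pass just the variables you need explicitly through env:

$ sudo env JAVA_HOME="$JAVA_HOME" ./example.sh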


Tuesday, September 28, 2010

Using Xming as your Xserver / Xclient

Installing PuTTY and Xming

With PuTTY and Xming you can run text and graphical programs on our servers from your own PC.

Software you need to download and install:

  1. Xming-fonts (local copy)
  2. Xming (local copy)
  3. PuTTY (local copy)

[Local copies were last updated on 05-04-2009]

Install the software above in the listed order with their default settings.

Configure PuTTY

You need to enable X11 forwarding to show graphical programs:

  1. Start PuTTY.
  2. In the left menu go to: Connection > SSH > X11.
  3. Check "Enable X11 forwarding".
  4. In the left menu go to: Window > Translation.
  5. Select UTF-8 in the drop down box.
  6. Go back to "Session" in the top of the left menu.
  7. Click "Default Settings" and Save.

Configure Xming

If you don't want to remember to start Xming every time you want to use it, you can set it to start when Windows starts. To do this, copy the Xming shortcut from the Start menu to the Startup folder in the Start menu.

Connecting to a server

  1. Find an appropriate server from the server list.
  2. Make sure Xming is running.
  3. Launch PuTTY.
  4. Type the server name in the host name field.
  5. Optional: Save your session, so you only have to double-click the name next time.
  6. Click Open.

If everything is working, a window with a black background will open, prompting you for a user name and password. Try to start a graphical program like Emacs or Matlab.

Troubleshooting

Problem: When I try to start a graphical program, I get "Connection lost to X server".
Solution: Check that Xming is started.

Problem: When I try to start a graphical program, I get "connection refused by server".
Solution: Make sure you have a token by using the "klog" command.

Problem: Programs start up in text mode.
Solution: Check that "X11 forwarding" is enabled in PuTTY. If you use a stored session, load it, enable "X11 forwarding", and save it.

Problem: Letters are displayed as boxes in graphical programs.
Solution: Install Xming-fonts and restart Xming.

Problem: Graphical programs are very slow.
Solution: Enabling "SSH compression" might help if you use a low bandwidth connection. The option is under Connection > SSH in the left menu of PuTTY.

Problem: In Matlab backspace and return does not work.
Solution: Turn off numlock.

Tips to reduce WebLogic Startup Time

This is going to be a living document on how to reduce weblogic startup times.

1) Apply -D_Offline_FileDataArchive=true

The WebLogic Diagnostics Framework creates the diagnostic store (DAT) file, which is used for several purposes. The initial size of the store file is about 1 MB.

Here are the details of the parameter suggested: -D_Offline_FileDataArchive=true

The above parameter is an (undocumented and unsupported) system property that disables WLDF indexing on the log file.

Engineering delivered it to disable the file archive indexing task; it turns off incremental as well as full indexing of WLDF archiving. If the property is not applied correctly, you will see a File Indexer timer kick in every 30 seconds.

The parameter is discussed in bug #8101514, where a customer encountered high CPU usage because of this indexing; applying it has resolved a couple of production issues as well.

The indexing takes place whenever we create a WLDF module.

URL for WLDF:
http://download-llnw.oracle.com/docs/cd/E13222_01/wls/docs100/wldf_configuring/index.html

2) WLW page-flow event reporter may be creating several entries in the diagnostics store

apply -Dcom.bea.wlw.netui.disableInstrumentation=true
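Both this flag and the one from tip 1 are JVM system properties, so they belong on the server start command line. In the stock WebLogic start scripts this is typically done through the JAVA_OPTIONS variable (adjust for your own startup scripts):

JAVA_OPTIONS="${JAVA_OPTIONS} -D_Offline_FileDataArchive=true -Dcom.bea.wlw.netui.disableInstrumentation=true"
export JAVA_OPTIONS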

3) Disable JDBC profiling. (Under JDBC/datasource_name/diagnostics uncheck all the options there)

If JDBC profiling is turned on, it inserts profiling data into the diagnostic store, which can cause it to grow.

URL for JDBC profiling:
http://download-llnw.oracle.com/docs/cd/E13222_01/wls/docs100/ConsoleHelp/pagehelp/JDBCjdbcdatasourcesjdbcdatasourceconfigdiagnosticstitle.html

4) Disable Harvesting (change the value for Profile Harvest Frequency Seconds to zero)

5) Try setting the "synchronous-write-policy" parameter in the server configuration to "Cache-Flush". This bypasses the "direct I/O" code in the file store.

Servers--->Configuration--->Services--->Default Store you can set the synchronous-write-policy to Cache-Flush

Synchronous Write Policy:

The disk write policy that determines how the file store writes data to disk.

This policy also affects the JMS file store's performance, scalability, and reliability. The valid policy options are:

* Direct-Write

File store writes go directly to disk. This policy is supported on Solaris, HP-UX, and Windows. On Windows systems, this option generally performs faster than the Cache-Flush option.

* Disabled

Transactions complete as soon as their writes are cached in memory, instead of waiting for the writes to successfully reach the disk. This policy is the fastest but the least reliable (that is, the least transactionally safe). It can be more than 100 times faster than the other policies, but power outages or operating system failures can cause lost and/or duplicate messages.

* Cache-Flush

Transactions cannot complete until all of their writes have been flushed down to disk. This policy is reliable and scales well as the number of simultaneous users increases.

6)
If you are using JRockit under GNU/Linux, you might be hitting a bug. Inside the JRockit installation directory, under jre/lib/security, edit java.security and locate the line that defines the random source:

securerandom.source=file:/dev/urandom

Comment it out, and replace it by:

securerandom.source=file:/dev/./urandom
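On GNU/Linux you can script the edit with sed (the -i.bak flag, which keeps a backup copy, is GNU sed; the JROCKIT_HOME variable here is just a placeholder for your JRockit installation directory):

$ cd $JROCKIT_HOME/jre/lib/security
$ sed -i.bak 's|file:/dev/urandom|file:/dev/./urandom|' java.security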

7)
You can also break the JMS servers out of the instance and allocate them to a separate managed server, since WLS has to read the JMS file store on every start.
Or compact the JMS file store regularly to keep its size small

How does WebLogic Serve Requests

1. A client contacts the ListenThread, the entry point into WebLogic Server, which accepts the connection. It then registers the socket with a WLS component known as the SocketMuxer for further processing.

2. The SocketMuxer is responsible for reading and dispatching all client requests to the proper WLS container. It adds this socket to an internal data structure for processing and makes a request of an ExecuteThreadManager to create a new SocketReaderRequest. This request is then dispatched by the manager to an ExecuteThread.

3. As a result, the ExecuteThread becomes a SocketReader thread: it continually runs the SocketMuxer’s processSockets() method, checking the muxer’s queue to determine whether there is work to be done. If an entry exists, it pulls it off the queue and processes it.

4. The SocketReader thread reads the client request, determines its protocol type, and creates a new protocol-specific MuxableSocket.

5. The MuxableSocketDiscriminator stores a new MuxableSocket with the implementation matching the protocol of the client. It also returns true to the SocketReader to notify it that the message is complete and can be dispatched.

6. MuxableSocketDiscriminator re-registers the protocol specific version of the MuxableSocket that was created earlier. The net result is that “Step 2” is repeated, and a new protocol specific MuxableSocket is placed in the SocketMuxer’s queue for processing.

7. A socket reader will get the new protocol-specific MuxableSocket off the queue and read it. It then checks whether the message is complete, based on the protocol. If it is, it invokes the protocol-specific MuxableSocketDiscriminator.

8. Before the work requested by the client can be performed, there may be many iterations of “step 7”. This is determined by the protocol; for example, t3 will read a portion of the message and dispatch it so it can act upon the portion of the protocol read thus far.

9. The subsystem will create an ExecuteRequest and send it to an ExecuteThreadManager for processing. The request is dispatched to an ExecuteThread, and the result is returned to the client.


From a high-level perspective, the SocketMuxer can be explained as follows. Each and every socket connection that comes into WebLogic Server is “registered” with the SocketMuxer, which then maintains a list of these connections, each represented by a form of the MuxableSocket interface. It then becomes the responsibility of the SocketMuxer to read and dispatch each client request to the appropriate subsystem. This is a fairly elaborate process, illustrated by steps 2 through 8 above.

There are only a few key things to know about the SocketMuxer:

First, it has a data structure in which it stores a socket entry for each client connected to WebLogic Server.
Second, a “socket reader” is the main component of the SocketMuxer - which really is just an execute thread that is running the SocketMuxer’s processSockets() method.

Third, the SocketMuxer does most of its work through the only interface it knows how to operate on – the MuxableSocket interface.

Socket Reader:

A SocketReaderRequest is merely an implementation of an ExecuteRequest, which is sent to the ExecuteThreadManager by the invocation of registerSocket(). When the ExecuteThread invokes the execute() method of the SocketReaderRequest, the SocketMuxer’s processSockets() method is invoked.
So, a socket reader thread is simply a normal execute thread which runs the main processing method of the SocketMuxer, processSockets().


The acceptBacklog parameter of WebLogic Server is passed to the ServerSocket. The value of acceptBacklog means: "The maximum queue length for incoming connection indications (a request to connect) is set to the backlog parameter. If a connection indication arrives when the queue is full, the connection is refused."

Thus if too many connection attempts arrive at the server at the same time, the server queues them and processes them one at a time. The value does not mean that only that many clients can connect to the server.

It does not limit the number of connections made; it limits the number of pending connections that can wait in the backlog queue. Suppose, for example, that acceptBacklog is 2, hundreds of connections are made to the server, and the server has one thread to accept new connections.

This thread accepts a new connection, dispatches it to a new thread, and then goes back to listening for new connections. Sample code is:

while (true) {
    Socket sock = serverSocket.accept(); // Line 1: block until a client connects
    new MyThread(sock).start();          // Line 2: hand the socket off to a worker thread
}

Here the thread accepts a new connection at line 1 and dispatches it to a new thread at line 2, then evaluates the while expression and goes back to line 1. In the time it takes to get back to line 1 (say T1), many new connection requests may be made by clients. These new connections wait in the accept backlog queue, whose length is controlled by the acceptBacklog parameter.

If the queue length is 2 and several hundred connections are made to the server within T1, only 2 get queued and the rest are refused. For rejections to occur there must be many simultaneous requests to the server; if the requests are not simultaneous, the chances of the queue filling up are low.
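For reference, the backlog is the second argument of the java.net.ServerSocket constructor. A minimal, self-contained sketch of the accept-and-dispatch pattern described above (the port and backlog values are illustrative):

import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;

public class BacklogDemo {
    public static void main(String[] args) throws IOException {
        // Backlog of 2: while the single accept loop is busy, the OS holds at
        // most two additional pending connections; further attempts are refused.
        ServerSocket serverSocket = new ServerSocket(8001, 2);
        while (true) {
            final Socket sock = serverSocket.accept();
            new Thread(new Runnable() {
                public void run() {
                    try {
                        sock.close(); // a real handler would read and write here
                    } catch (IOException ignored) {
                    }
                }
            }).start();
        }
    }
}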

Thursday, June 10, 2010

Enabling Debug Flags on Apache WebServer For WebLogic

  • Sample httpd.conf file:

      WebLogicCluster johndoe02:8005,johndoe:8006
      Debug ON
      WLLogFile c:/tmp/global_proxy.log
      WLTempDir "c:/myTemp"
      DebugConfigInfo On
      KeepAliveEnabled ON
      KeepAliveSecs 15

      <Location /jurl>
        SetHandler weblogic-handler
        WebLogicCluster agarwalp01:7001
      </Location>

      <Location /web>
        SetHandler weblogic-handler
        PathTrim /web
        Debug OFF
        WLLogFile c:/tmp/web_log.log
      </Location>

      <Location /foo>
        SetHandler weblogic-handler
        PathTrim /foo
        Debug ERR
        WLLogFile c:/tmp/foo_proxy.log
      </Location>
  • All the requests which match /jurl/* will have the Debug level set to ALL, and log messages will be logged to the c:/tmp/global_proxy.log file. All the requests which match /web/* will have the Debug level set to OFF, and no log messages will be logged. All the requests which match /foo/* will have the Debug level set to ERR, and log messages will be logged to the c:/tmp/foo_proxy.log file.
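With DebugConfigInfo On, you can also dump the plug-in's runtime configuration from a browser by adding a special query parameter to any proxied URL, for example:

http://yourserver/jurl/anything?__WebLogicBridgeConfig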

Monday, April 26, 2010

Using the Sun Java System Web Server 7.0 Cluster

Pre-requisite: A server farm must be present. To create a server farm


1. Install one admin server.
2. Install admin agents.
3. Register the admin agents to the admin server.

(Covered in previous post)

Assumptions: The config is called config1, the admin server is on the node called server1 and the admin agents registered to it are on the nodes called server2 and server3.

1. Create a config

The first step is to create a configuration

CLI: create-config

Usage:

wadm> create-config --help|-?

or create-config [--echo] [--no-prompt] [--verbose] [--document-root=serverdocroot] [--jdk-home=JAVA_HOME] [--server-user=userid] [--ip=ip] --http-port=port --server-name=servername config-name

Example: create-config --http-port=3456 --server-name=server1 config1

2. Create multiple configs

It is possible to create multiple configurations, one at a time, on an admin server. The directory for each config gets created under <install-dir>/admin-server/config-store.

The CLI create-config however creates only one configuration at a time.

3. Create an instance

CLI: create-instance

Usage:

wadm> create-instance --help|-?

or create-instance [--echo] [--no-prompt] [--verbose] --config=name (nodehost)+

This can be used to create one or more instances of a particular configuration.

Example: create-instance --config=config1 server1

This will create one instance server1 of the configuration config1

server1 can be the admin server or can be any one of the admin agents registered to it.

4. Create multiple instances

The CLI create-instance can be used to create multiple instances of a particular config.

CLI: create-instance

This can be used to create one or more instances of a particular configuration.

Example: create-instance --config=config1 server1 server2 server3

This will create one instance each on server1, server2 and server3 of the configuration config1

server1, server2, server3 – one of them can be the admin server itself. The others must be admin agents registered to the admin server.

5. Deploy config

CLI: deploy-config

If changes are made to a configuration, the deploy-config CLI has to be used to deploy the configuration onto the nodes. deploy-config will NOT create new instances; it deploys changes only onto already created instances.

6. Deploy config with no changes -prompts

If no change has been made to a configuration, the deploy does not go through. It prompts, saying that the deploy failed because there are no changes in the config store.

7. Make a manual change in the config, then deploy

You can make manual changes onto the config store and deploy that configuration. This is expected to work, but is not recommended.

8. Make a CLI change in the config, then deploy

You can make changes into the config using CLI, say log level changes

set-error-log-prop --config=config1 log-level=fine

Now deploy the config. The log level change will cascade to all the instances. Some config changes require a server restart, i.e., require you to restart the instance; you will be prompted about this at deploy time.
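Putting it together at the wadm prompt (the same commands used elsewhere in this post):

wadm> set-error-log-prop --config=config1 log-level=fine
wadm> deploy-config config1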

9. Add a webapp to a config using CLI

The sample webapps will be available with the admin server if web apps are selected as one of the components during server installation.

Once installed, they will be available at

<install-dir>/samples

e.g. the webapp called “simple” bundled with the web server is available at

<install-dir>/samples/java/webapps

To add a webapp to a config use CLI: add-webapp

add-webapp --config=config1 --vs=config1 --uri=/simple warfile

warfile is the warfile of the application. You can specify the full path.

10. Deploy webapp and access for all instances

After adding the webapp, it can be deployed to all instances using the deploy-config CLI (see 8), and it can then be accessed on all instances through a load balancer or directly.
11. Create a new doc directory, put a simple html and access it

A new document directory can be added from the GUI using Common Tasks->Virtual Server Tasks -> Document Directories

Add the new document directory here. Put a simple html file in this location. Deploy the configuration.
12. Create a shared document directory. Shared on admin server in militarized zone

You can create a document directory on the admin server in the militarized zone, share it so it is available on the admin agents, and use that directory as the doc directory.
13. Deploy a simple webapp using ant

You can deploy a webapp using ant (as specified in the documentation for each webapp in <install-dir>/samples) and then deploy the config as mentioned in 7.

14. The server name specified in creating a config (see 1) can be used as the server name for accessing content on all the instances of the same config.

15. Deploy a jdbc webapp

A JDBC webapp can be deployed as per the instructions given in <install-dir>/samples/java/webapps/jdbc. On deploy, the webapp will get deployed to all instances.

16. Deleting an instance

An instance can be deleted using CLI.

delete-instance --config=config1 server1

17. Deleting a config

A config can be deleted only once all the instances associated with it are deleted.

delete-config --config=config1

18. Pulling a config

pull-config CLI can be used to pull a config.

Suppose some changes are made on an instance. And these changes are intended to be cascaded to all the other instances. This can be done using a pull-config.

pull-config --config=config1 server1

This will pull the config of the instance of config1 on server1 and overwrite the config store with this config.

A deploy will then deploy these changes onto all the instances of the config.

19. Forcing a deploy

Suppose some changes are made on an instance. And the intent is to over-write the changes with the config in the config store.

In that case, a deploy can be forced.

e.g. deploy-config --force=true config1

20. Synchronization

Synchronization happens only with a failed deploy.

The scenarios are

a. Make a change to the config while one node is down. When the node comes up, the change will be visible at that node.

b. Make a change to the config while more than one node is down. As and when the nodes come back up, the changes will be visible on them.

c. create-instance will fail on a node whose admin agent is not running.

d. delete-instance will fail on a node whose admin agent is not running.

e. Make changes to the config while an admin agent is down, and then the admin server goes down too. The node comes up; once the admin server comes back up, all the admin agent instances will be synchronized.

Creating a Sun Java System Web Server 7.0 Cluster

Clustering Functionality is new in SJSWS 7.0. This is a neat functionality which lets you cluster / manage all your webserver instances in a server farm.

For setting up the cluster,

you need to first install one administration server and one or more admin agents. The admin agents then have to be registered to the admin server individually so that they can be administered. This step can be done either during installation of the agents or after installation, through the command-line interface.

This post is divided into three parts
1. Installing the admin server and admin agent
2. Registering the admin agents to the admin server
3. Start using the cluster

Step 1: Install your admin server and your admin agent(s)

The admin server and the admin agent have to be on different machines. Start by installing your admin server. There are two ways you can install:
a. Through the GUI
b. Through the command line interface

The admin server (the same as in 6.1) is the server you log into to administer all the machines in a cluster.
You can choose the Express installation option which will install an administration server on the port 8989.
Alternatively, choose the "Custom Installation" option after accepting the license agreement. To install the admin server, choose the option "Install server as Administration Server". You need to specify the SSL port, and you may or may not choose a non-SSL port. If a non-SSL port is selected, an admin agent is created on the admin server node, and this agent need not be registered to the admin server explicitly.

Admin agent is new in WS 7.0. In WS 7.0 there is support for implicit clustering. An admin agent is nothing but an admin server configured differently. An admin agent does not provide a GUI interface. It is simply an agent for the admin server to command. One node in the server farm has the admin server installed. All other nodes in the server farm will have admin agents installed. An admin agent is registered with an admin server upon installation. This will make the admin server aware of that admin agent.

To install the admin agent, choose Custom Installation and then "Install server as Administration Agent". You cannot use an Express install for an agent. Specify a port for the installation; this is an SSL port, as all communication between the admin server and admin agents is always secure. The install will ask you if you want to register the agent to the admin server. For more information on registration, read the next section.

Once you do this, you can install as many agents as you want. And voila! You have the servers ready to set up a cluster.

Step 2: Register your admin agents to the admin server

The admin agents have to be registered to the admin server for them to be part of the server-farm or the cluster. The admin agents will not start up unless they are registered to an admin server.

There are two ways of registering:
a. During installation: When you custom install an admin agent, it will ask you if you want to register the agent to the admin server. If you want to register it to an admin server then the admin server has to be installed and started. If the admin server is not started, registration will fail.
b. Through the command line interface
Go to <install-dir>/bin on the agent. An agent can be registered from the agent ONLY; you cannot go to the CLI of the admin server and register an agent from there.
Execute ./wadm --user=<admin-user> --port=<admin-port> --host=<admin-server-host>
The port is the one specified during install, and the host is the hostname of the node where the admin server is installed. This will take you to the wadm prompt. At the prompt, execute
wadm> register-agent
This will pick up the necessary host and port information from the agent's server.xml and register the agent to the admin server.

Once the registration is done, you can start the admin agent.

One place where it is easy to get confused is the existence of the admin-server directory under the install-dir of an agent. Starting this merely means the admin server can communicate with the admin agent; it does not mean you can administer the agent from itself.

Step 3: Start Using the Cluster

You can use the wadm on any of the nodes on the server farm to connect to the admin server. (As described in Step 2).
So now you have your cluster all up and running. Now you can create configs, deploy them on the local and remote nodes, create instances and modify them all from one single place!

Tuesday, April 13, 2010

Perimeter Authentication with Identity Assertion - Part 2

This is in continuation of my earlier blog about perimeter authentication with Identity Assertion. Here we will talk about Perimeter Authentication and WebLogic Identity Assertion.

Imagine that you have some external system—say, a Java client or perhaps even an external web server—that authenticates a user, and you now want this user to participate in actions involving WebLogic. Furthermore, you don't want WebLogic to reauthenticate the user. Rather, you want to use some token generated by the external system to be used as an automatic WebLogic login. This is fairly typical of many single sign-on scenarios. The key to implementing this is to use an Identity Assertion Provider. Let's look at how you can implement such a scenario.

We are going to take as an example an external Java client that has presumably performed some user authentication, and that now needs to transfer this identity to WebLogic in order to access a protected web application. First of all, let's configure the web application to use identity assertion. Do this by setting the login-config to use the CLIENT-CERT auth method. As this is standard J2EE, you will need to create a web.xml file with something such as the following in it:




<login-config>
    <auth-method>CLIENT-CERT</auth-method>
    <realm-name>myrealm</realm-name>
</login-config>
Now let's imagine we have a client (written in whatever language you wish) that has already performed some user authentication and now needs to access one of the protected web pages—say, http://10.0.10.10:8001/index.jsp. The following client is such an example:


URL url = new URL("http://10.0.10.10:8001/index.jsp");
URLConnection connection = url.openConnection();
BufferedReader in = new BufferedReader(
        new InputStreamReader(connection.getInputStream()));
// Read the input stream
in.close();



If you simply run this program, you can expect an IOException when you try and access the input stream. This will be the 401 HTTP error code indicating that you are not authorized. We are going to get around this by making the client supply a token, and then configuring an Identity Assertion Provider to accept this token and authorize the user. Identity Assertion Providers can automatically take advantage of request cookies or headers. If WebLogic finds, for example, a header property with the same name as a token (we will see in a moment how to configure the identity provider with token names), it assumes that the content of the header property is the value of the token. The token we will use is a simple string that we are going to send in the HTTP request header when we create the connection to the server. To this end, modify the preceding code to read as follows:

URL url = new URL(urlAddr);
URLConnection connection = url.openConnection();
connection.setRequestProperty("MyToken", encodedToken);
// Everything as before

The name of the request property, MyToken in our example, is significant. It is interpreted as the type of the token, as we will see later. A small caveat here is that WebLogic always expects incoming tokens to be Base64-encoded. You can do this by using the utility class weblogic.utils.encoders.BASE64Encoder. So, to create an encoded token, you can write something such as this:

String token = "jon";
BASE64Encoder encoder = new BASE64Encoder();
String encodedToken = encoder.encodeBuffer(token.getBytes());

The text that you place in the token can be anything you please, as long as your Identity Assertion Provider can read it. In our example, we will use a simple string, which we take to represent the authenticated user.

Note: WebLogic 8.1 allows you to configure the Identity Assertion Provider to use tokens that aren't encoded, in which case you won't need to use an encoder.
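Putting the fragments together, a complete client might look like this (the URL and token value are taken from the examples above; weblogic.jar must be on the classpath for the encoder class):

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;
import java.net.URLConnection;
import weblogic.utils.encoders.BASE64Encoder;

public class TokenClient {
    public static void main(String[] args) throws Exception {
        // The token is just the username; WebLogic expects it Base64-encoded.
        String token = "jon";
        BASE64Encoder encoder = new BASE64Encoder();
        String encodedToken = encoder.encodeBuffer(token.getBytes());

        URL url = new URL("http://10.0.10.10:8001/index.jsp");
        URLConnection connection = url.openConnection();
        connection.setRequestProperty("MyToken", encodedToken);

        // If the identity asserter accepts the token, this read succeeds
        // instead of failing with a 401.
        BufferedReader in = new BufferedReader(
                new InputStreamReader(connection.getInputStream()));
        String line;
        while ((line = in.readLine()) != null) {
            System.out.println(line);
        }
        in.close();
    }
}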

All that's left now is to create an Identity Assertion Provider. The MBean definition file used in our example is given in Example 17-8 in full.

Example 17-8. MyA.xml, the MDF file for the assertion provider



<?xml version="1.0" ?>
<!DOCTYPE MBeanType SYSTEM "commo.dtd">
<MBeanType
  Name = "MyA"
  DisplayName = "MyA"
  Package = "com.oreilly.wlguide.security.iap"
  Extends = "weblogic.management.security.authentication.IdentityAsserter"
  PersistPolicy = "OnUpdate"
>
  <MBeanAttribute
    Name = "ProviderClassName"
    Type = "java.lang.String"
    Writeable = "false"
    Default = "&quot;com.oreilly.wlguide.security.iap.MyAProviderImpl&quot;"
  />
  <MBeanAttribute
    Name = "SupportedTypes"
    Type = "java.lang.String[]"
    Writeable = "false"
    Default = "new String[] { &quot;MyToken&quot; }"
  />
  <MBeanAttribute
    Name = "ActiveTypes"
    Type = "java.lang.String[]"
    Default = "new String[] { &quot;MyToken&quot; }"
  />
</MBeanType>




Note the following things:

  • Because we are writing an Identity Asserter, it must extend the weblogic.management.security.authentication.IdentityAsserter MBean as indicated.
  • As always, the ProviderClassName attribute must be set to the implementation class.
  • The SupportedTypes attribute must be set to the token type. In our case, this is MyToken.
  • The ActiveTypes attribute lists the subset of the provider's supported types that you want active. Because we want our only token active, we set it to MyToken as well.

You can create the support files as usual. Here we place all the output in the directory out:

java -DcreateStubs="true" weblogic.management.commo.WebLogicMBeanMaker -MDF MyA.xml -files out

Finally, you need to create the provider class com.oreilly.wlguide.security.iap.MyAProviderImpl, which was referred to in the ProviderClassName attribute.

Example 17-9 lists this class in its entirety.

Example 17-9. The provider implementation

package com.oreilly.wlguide.security.iap;

import javax.security.auth.callback.CallbackHandler;
import javax.security.auth.login.AppConfigurationEntry;
import weblogic.management.security.ProviderMBean;
import weblogic.security.spi.*;

public final class MyAProviderImpl
        implements AuthenticationProvider, IdentityAsserter {

    // Holds our description, which we derive from MBean attributes.
    private String description;

    public void initialize(ProviderMBean mbean, SecurityServices services) {
        MyAMBean myMBean = (MyAMBean) mbean;
        description = myMBean.getDescription() + "\n" + myMBean.getVersion();
    }

    public CallbackHandler assertIdentity(String type, Object token)
            throws IdentityAssertionException {
        if (type.equals("MyToken")) {
            byte[] tokenRaw = (byte[]) token;
            String username = new String(tokenRaw);
            return new SimpleSampleCallbackHandlerImpl(username, null, null);
        } else {
            throw new IdentityAssertionException("Strange Token!");
        }
    }

    public String getDescription() {
        return description;
    }

    public void shutdown() {
    }

    public IdentityAsserter getIdentityAsserter() {
        return this; // this object is the identity asserter
    }

    public AppConfigurationEntry getLoginModuleConfiguration() {
        return null; // we are not an authenticator
    }

    public AppConfigurationEntry getAssertionModuleConfiguration() {
        return null; // we are not an authenticator
    }

    public PrincipalValidator getPrincipalValidator() {
        return null; // we are not an authenticator
    }
}

The most important methods are initialize() and assertIdentity(). The initialize() method simply extracts some information from the MBean representing the provider and uses it to create the description. The assertIdentity() method is given two parameters: the type of the token and the token itself. We simply check that the token type is correct and map the token to the username. You could conceivably do a lot more here, such as validating the authenticity of the token for stronger security. The method must return a standard JAAS callback handler, which eventually will be invoked to extract the username (that is, only the NameCallback will be used). We use the callback handler that we defined in Example 17-4. Note that the identity asserter could have been an authenticator too, in which case it could populate the subject with the usernames and groups belonging to the user. Because we are doing pure identity assertion, the corresponding methods simply return null.
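Example 17-4 is not reproduced in this post. A minimal callback handler serving the same purpose might look like the sketch below (only the NameCallback is ever used for pure identity assertion, so the other constructor arguments are simply ignored); this is an illustration, not the book's exact code:

package com.oreilly.wlguide.security.iap;

import javax.security.auth.callback.Callback;
import javax.security.auth.callback.CallbackHandler;
import javax.security.auth.callback.NameCallback;
import javax.security.auth.callback.UnsupportedCallbackException;

public class SimpleSampleCallbackHandlerImpl implements CallbackHandler {

    private final String username;

    SimpleSampleCallbackHandlerImpl(String username, String password, String realm) {
        // Only the username matters here; identity is asserted, not authenticated.
        this.username = username;
    }

    public void handle(Callback[] callbacks) throws UnsupportedCallbackException {
        for (int i = 0; i < callbacks.length; i++) {
            if (callbacks[i] instanceof NameCallback) {
                ((NameCallback) callbacks[i]).setName(username);
            }
            // Password and other callbacks are deliberately ignored.
        }
    }
}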

Place this file and the callback handler in the out directory, and then issue the following command to create a packaged provider:

java weblogic.management.commo.WebLogicMBeanMaker -MJF myIAP.jar -files out

Copy this to the WL_HOME/server/lib/mbeantypes directory, and then reboot the server. Start up the Administration Console and navigate to the Security/myrealm/Providers/Authentication node. In the list of available authenticators and identity asserters, you should find an option for "Configure a new MyA...". Selecting this option and clicking Create will configure the identity asserter. On the following tab you will notice that the supported token type is set to MyToken, and the active token to MyToken too. You will have to reboot the server for this change to take effect.

If you rerun the client application, you will find that you no longer get an unauthorized warning (assuming that jon is in the permission group mysecrole, which was granted access to the web resource). To further illustrate the point, you can try accessing a servlet or JSP page in this way that calls request.getUserPrincipal(). You will find that this call returns jon, as you would expect.

So, here is a summary of what happens, as was illustrated in Figure 17-2:

  1. The client attempts to access a protected web page. The web container notes that the client does not have any security credentials and that the web application implements identity assertion, so it fires up the Identity Assertion Providers, passing in the appropriate request parameters.

  2. The Identity Asserter grabs the username directly from the incoming token and returns it in the form of a callback handler.

  3. Any login modules that you have configured for the security realm then fire, using the callback handler to fetch the username. So, for example, the Default Authenticator will fire and log in the user. However, because it knows that the data comes from the Identity Asserter, it will not require a password. As a result, the user is logged in and can now access the web application.

Wednesday, March 31, 2010

Log4J configuration – controlling logging to multiple loggers

I recently happened to read a blog about the Apache Log4J Java logging API where the blogger mentioned that there is no clean, good document/tutorial about using log4j, considering that it is one of the most widely used open source Java APIs. I felt that this is true to some extent when I had to look for help with a logging issue I encountered. In particular, I could not find one authoritative manual/tutorial explaining log4j configuration.


My requirement was to control the logging of messages to multiple appenders (targets) with different log priorities. For example, I wanted a message to be written only to the log file if its priority is DEBUG, but to both the console and the log file if its priority is INFO or above. With the help of some blogs and the log4j javadocs, I found ways to do this. In this blog I explain what I understood, hoping it helps others looking for this information.


I used XML configuration for configuring log4j in my project, so I will use the same in this article. I assume that you (the reader) have a basic understanding of log4j configuration; if not, please read the short manual at Apache. I expect you to understand log4j terms like Logger and Log Level, and the XML tags like <category>, <appender>, <root>, etc., that appear in the log4j configuration.


Note: wherever I mention “logger” (with a lowercase l), I refer to a <category> element in the log4j configuration XML; I mean the <root> element wherever I mention “root logger”, and an <appender> element wherever I mention “appender”.


As I mentioned above, my requirement was to log messages to multiple appenders (targets): the console and a file. But I wanted all messages (any level, DEBUG or above) logged to the file, and only messages with log level ERROR or higher logged to the console. Following is my log4j.xml.


<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE log4j:configuration SYSTEM "log4j.dtd">
<log4j:configuration xmlns:log4j="http://jakarta.apache.org/log4j/">

  <appender name="CONSOLE" class="org.apache.log4j.ConsoleAppender">
    <layout class="org.apache.log4j.PatternLayout">
      <param name="ConversionPattern" value="%d [%t] %-5p %c (%F:%L) - %m%n"/>
    </layout>
  </appender>

  <appender name="FILE" class="org.apache.log4j.DailyRollingFileAppender">
    <param name="File" value="TestLogFile.log"/>
    <param name="DatePattern" value="'.'yyyy-MM-dd"/>
    <param name="Append" value="true"/>
    <layout class="org.apache.log4j.PatternLayout">
      <param name="ConversionPattern" value="%d [%t] %-5p %c (%F:%L) - %m%n"/>
    </layout>
  </appender>

  <category name="com.ibswings">
    <priority value="debug"/>
    <appender-ref ref="FILE"/>
  </category>

  <root>
    <priority value="error"/>
    <appender-ref ref="CONSOLE"/>
  </root>

</log4j:configuration>


A simple class I used to test this:


package com.ibswings.loggertest;

import org.apache.log4j.Logger;

public class Test {

    public static void main(String[] args) {
        new Test();
    }

    public Test() {
        Logger logger = Logger.getLogger(getClass());
        logger.debug("Debug Message");
        logger.info("Info Message");
        logger.error("Error Message");
    }
}


I was expecting all three messages from the code to be printed to the log TestLogFile.log and only the error message to be printed to the console, since I have set the ‘error’ priority in <root>. However, I noticed that all messages are printed to both the log file and the console:


2009-03-07 19:32:00,893 DEBUG Test.java:14 - Debug Message
2009-03-07 19:32:00,893 INFO Test.java:15 - Info Message
2009-03-07 19:32:00,893 ERROR Test.java:16 - Error Message


Let’s look into this in detail. By default, when a message is logged, it goes to the first logger (<category>) whose name most closely matches the name of the Logger instance you created in Java code. From there, it is directed to the next matching logger up the logger hierarchy, all the way to the root logger. In each of these loggers, output is sent to all appenders identified by <appender-ref> definitions. In the above example, I had created the logger in my code like:

Logger logger = Logger.getLogger(getClass());


Actually, this is the equivalent of

Logger logger = Logger.getLogger("com.ibswings.loggertest.Test");


When the following line is executed,

logger.debug("Debug Message");


The closest matching logger defined in the config is <category name="com.ibswings">. This logs the output to TestLogFile.log through the appender named “FILE”. (Note that if I had a logger defined in log4j.xml with the name "com.ibswings.loggertest", that would have been the first logger to get the message.) The message is then directed to the root logger, since there is no other matching logger in between, and is printed to standard out by the CONSOLE appender. Also, when a message is logged by root, it is logged with the priority of the message received from the previous logger; hence all messages are printed to the console in this case, even though we set the error priority in root. The priority set in root has two uses: first, it is the default priority for all other loggers that have no priority set; second, all messages received only by the root logger (i.e. messages with no matching loggers in log4j.xml) get this default priority applied. In the following section, let’s see how we can avoid duplicating messages in both the log file and the console.


How to log only my messages to log file and others to console?


As I mentioned above, output from each logger is directed to the next logger up the logger hierarchy, until the root logger. To turn this behavior off, we use the “additivity” attribute in <category>. If additivity is set to “false” in a particular logger, output will not be sent to the next logger up the hierarchy. In other words, output is passed from logger to logger until it reaches a logger (before root, of course) in which additivity is set to false. By default (when not specified), additivity is true. That’s how, in the previous case, messages were printed to both the file and the console. Let’s use this in our log4j.xml to stop the messages from printing to the console.


<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE log4j:configuration SYSTEM "log4j.dtd">
<log4j:configuration xmlns:log4j="http://jakarta.apache.org/log4j/">

  <appender name="CONSOLE" class="org.apache.log4j.ConsoleAppender">
    <layout class="org.apache.log4j.PatternLayout">
      <param name="ConversionPattern" value="%d [%t] %-5p %c (%F:%L) - %m%n"/>
    </layout>
  </appender>

  <appender name="FILE" class="org.apache.log4j.DailyRollingFileAppender">
    <param name="File" value="TestLogFile.log"/>
    <param name="DatePattern" value="'.'yyyy-MM-dd"/>
    <param name="Append" value="true"/>
    <layout class="org.apache.log4j.PatternLayout">
      <param name="ConversionPattern" value="%d [%t] %-5p %c (%F:%L) - %m%n"/>
    </layout>
  </appender>

  <category name="com.ibswings" additivity="false">
    <priority value="debug"/>
    <appender-ref ref="FILE"/>
  </category>

  <root>
    <priority value="error"/>
    <appender-ref ref="CONSOLE"/>
  </root>

</log4j:configuration>


With this log4j.xml, all messages from the code are logged to the file only. Since we set additivity="false" in the first logger, output is not directed to the root logger. In this case, other Logger instances whose names do not match “com.ibswings”, and any System.out.printXX() calls, still go to the console. In the following section, let’s see how to send all messages from our Java code to the file and some selected messages (based on priority) to the console.


How to log all my messages to log file and selected (based on priority) my messages to console?


Coming to my original requirement, I want to send all my messages (printed from my java code) to log file. Also, I want to send all error messages from my code to console also. At the same time, I want to see all other messages (from other Logger instances and SOP calls) in console.


To achieve this, let’s assume we add an appender-ref to the console in our first logger. The question now is how to send only error messages, since whatever priority we set in the logger applies to all its appenders. The solution is to use the “Threshold” parameter in the appender definition. If we specify a priority in an appender using Threshold, only messages with a priority equal to or higher than the specified threshold are printed to that target. However, in this case we cannot set Threshold to error on the CONSOLE appender, since the root logger still needs to print all messages (any level) not covered by our logger to the console. So let’s create a new appender CONSOLE_1 to log the error messages to the console. The configuration is given below.


<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE log4j:configuration SYSTEM "log4j.dtd">
<log4j:configuration xmlns:log4j="http://jakarta.apache.org/log4j/">

  <appender name="CONSOLE" class="org.apache.log4j.ConsoleAppender">
    <layout class="org.apache.log4j.PatternLayout">
      <param name="ConversionPattern" value="%d [%t] %-5p %c (%F:%L) - %m%n"/>
    </layout>
  </appender>

  <appender name="CONSOLE_1" class="org.apache.log4j.ConsoleAppender">
    <param name="Threshold" value="ERROR"/>
    <layout class="org.apache.log4j.PatternLayout">
      <param name="ConversionPattern" value="%d [%t] %-5p %c (%F:%L) - %m%n"/>
    </layout>
  </appender>

  <appender name="FILE" class="org.apache.log4j.DailyRollingFileAppender">
    <param name="File" value="TestLogFile.log"/>
    <param name="DatePattern" value="'.'yyyy-MM-dd"/>
    <param name="Append" value="true"/>
    <layout class="org.apache.log4j.PatternLayout">
      <param name="ConversionPattern" value="%d [%t] %-5p %c (%F:%L) - %m%n"/>
    </layout>
  </appender>

  <category name="com.ibswings" additivity="false">
    <priority value="debug"/>
    <appender-ref ref="FILE"/>
    <appender-ref ref="CONSOLE_1"/>
  </category>

  <root>
    <priority value="error"/>
    <appender-ref ref="CONSOLE"/>
  </root>

</log4j:configuration>


When I ran my test, I saw all messages logged to the file and the error message printed to the console. Also, all other messages (of any level) and SOPs were printed to the console, since we have the CONSOLE appender referenced in the root logger. In the following section, let’s see how to send all my messages, plus all other error messages, only to the log file. The log4j.xml below achieves this.


How to log all my messages and all other error messages, only to log file?


<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE log4j:configuration SYSTEM "log4j.dtd">
<log4j:configuration xmlns:log4j="http://jakarta.apache.org/log4j/">

  <appender name="CONSOLE" class="org.apache.log4j.ConsoleAppender">
    <layout class="org.apache.log4j.PatternLayout">
      <param name="ConversionPattern" value="%d [%t] %-5p %c (%F:%L) - %m%n"/>
    </layout>
  </appender>

  <appender name="FILE" class="org.apache.log4j.DailyRollingFileAppender">
    <param name="File" value="TestLogFile.log"/>
    <param name="DatePattern" value="'.'yyyy-MM-dd"/>
    <param name="Append" value="true"/>
    <layout class="org.apache.log4j.PatternLayout">
      <param name="ConversionPattern" value="%d [%t] %-5p %c (%F:%L) - %m%n"/>
    </layout>
  </appender>

  <category name="com.ibswings" additivity="false">
    <priority value="debug"/>
    <appender-ref ref="FILE"/>
  </category>

  <root>
    <priority value="error"/>
    <!-- <appender-ref ref="CONSOLE"/> -->
    <appender-ref ref="FILE"/>
  </root>

</log4j:configuration>


Note that I have commented out the appender-ref to CONSOLE in the root logger. With this log4j.xml, all messages logged from my Java code are printed to the log file. At the same time, any other message with priority error is also printed to the log file. All of these go only to the log file, not to the console. However, we will still see SOPs on the console (if we use this in an application server, we will see a lot of such messages in the console/standard out or error log files).