If you happen to run an application with long request/response cycles and you experience pile-ups of connections in CLOSE_WAIT, you should consider the following:
a) Redesign your application. If possible, detach long-running work asynchronously, e.g. via threading, JMS or Akka. An application should not rely on the client’s patience for longer than a few seconds. Submitting a request, retrieving a transaction token and picking up the results later is a legitimate paradigm. Airlines do that while processing booking requests for a good reason.
b) If for some reason that’s not possible, or you choose to rely on another mechanism, you’ll want to make sure that timeouts are aligned correctly between your Tomcat and the load balancer, typically a mod_proxy setup.
On top of that, you’ll also need to switch to a connector that actually handles FINs and timeouts. The classic blocking I/O (aka BIO) connector model is incapable of doing that. Even APR has some issues here, although compiling the native APR connector often does the trick. Ultimately, the new NIO connector is exactly what you need.
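The submit/token/poll paradigm from (a) can be sketched as follows. This is a minimal illustration, not code from the article; the class and method names are made up, and a real application would add persistence, expiry and error handling:

```java
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.Callable;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Hypothetical sketch: the client submits work, immediately receives a
// token, and polls for the result later instead of holding the HTTP
// connection open for the whole processing time.
public class TokenService {
    private final ExecutorService pool = Executors.newFixedThreadPool(4);
    private final Map<String, Future<String>> jobs = new ConcurrentHashMap<>();

    // Returns immediately with a token; the work runs in the background.
    public String submit(Callable<String> work) {
        String token = UUID.randomUUID().toString();
        jobs.put(token, pool.submit(work));
        return token;
    }

    // Poll with the token: result if done, null if still running.
    public String poll(String token) throws Exception {
        Future<String> f = jobs.get(token);
        if (f == null || !f.isDone()) {
            return null;
        }
        jobs.remove(token);
        return f.get();
    }

    public void shutdown() {
        pool.shutdown();
    }
}
```

With this in place, the request handler returns the token within milliseconds, and the client (or a follow-up request) picks up the result once `poll` yields it.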
Let’s review the configuration:
<Proxy balancer://mycluster_de>
    BalancerMember ajp://app01:8009 route=identifier01
    BalancerMember ajp://app02:8009 route=identifier02
    BalancerMember ajp://app03:8009 route=identifier03
    BalancerMember ajp://app04:8009 route=identifier04
</Proxy>
ProxyPass / balancer://mycluster_de/ stickysession=BALANCEID nofailover=On
ProxyPassReverse / balancer://mycluster_de/
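To align the timeouts mentioned in (b), mod_proxy’s `timeout` parameter should not be shorter than what the backend needs, and Tomcat’s `connectionTimeout` should match it. The 600-second value below is an illustrative assumption, not a recommendation from this setup:

```apache
# Allow the backend up to 600 s before mod_proxy gives up on the request
# (timeout= is in seconds; pick a value that fits your longest cycle).
ProxyPass / balancer://mycluster_de/ stickysession=BALANCEID nofailover=On timeout=600
```

On the Tomcat side, the corresponding attribute is `connectionTimeout="600000"` on the connector (Tomcat takes milliseconds, so the two values above describe the same 600 seconds).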
<Connector port="8009" protocol="org.apache.coyote.ajp.AjpNioProtocol" />
The protocol attribute is important: the default is auto-sensing and will never switch over to NIO on its own.
In order to count connections in CLOSE_WAIT, you can issue the following command (grep for CLOSE_WAIT specifically, since a bare CLOSE would also match the CLOSING state):
netstat -n | grep CLOSE_WAIT | wc -l
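The same count can be scripted and watched over time. The snippet below runs the filter against canned sample output so it is self-contained; the sample lines are made up for illustration, and in practice you would pipe `netstat -n` in directly:

```shell
# Illustrative sample of netstat -n output; in practice use:
#   sample=$(netstat -n)
sample='tcp        0      0 10.0.0.5:8009      10.0.0.9:41234     CLOSE_WAIT
tcp        0      0 10.0.0.5:8009      10.0.0.9:41235     CLOSE_WAIT
tcp        0      0 10.0.0.5:8009      10.0.0.9:41236     ESTABLISHED'

# Count only the CLOSE_WAIT lines (grep -c replaces grep | wc -l)
close_waits=$(printf '%s\n' "$sample" | grep -c CLOSE_WAIT)
echo "$close_waits"   # prints 2 for the sample above
```

Wrapping this in `watch` or a cron job makes it easy to see whether the NIO connector and the aligned timeouts actually stop the pile-up.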