Introduction
This article presents a set of tools, system settings, and tuning tips for Java server applications that run on and scale across 2 to 64 CPU Sun Enterprise servers. This information was assembled by engineers with many years of experience tuning a variety of commercial server-side Java applications on Solaris.
Analysis Tools
This article covers a set of performance analysis tools, distinguished by software layer. In addition to performance issues, many of these tools can be used to detect other types of bottlenecks. Many tool descriptions provide sample output, suggestions for interpreting the results, tips on improving them, and links to related sites.
Solaris 8 Tools
mpstat
The mpstat utility is a useful tool for monitoring CPU utilization, especially with multithreaded applications running on multiprocessor machines, a typical configuration for enterprise solutions. Running mpstat with an interval argument of 5 to 10 seconds is fairly non-intrusive; larger intervals, such as 60 seconds, might be suitable for certain applications. Statistics are gathered at each clock tick. An interval smaller than 5 to 10 seconds is more difficult to analyze, while a larger interval can smooth the data by removing spikes that could mislead you during analysis.
mpstat output
#mpstat 10
CPU minf mjf xcal intr ithr csw icsw migr smtx srw syscl usr sys wt idl
0 1 0 5529 442 302 419 166 12 196 0 775 95 5 0 0
1 1 0 220 237 100 383 161 41 95 0 450 96 4 0 0
4 0 0 27 192 100 178 94 38 44 0 100 99 1 0 0
5 1 0 160 255 100 566 202 28 162 0 1286 87 8 0 5
8 0 0 131 283 100 684 238 30 203 0 1396 81 11 0 8
9 1 0 165 263 100 579 212 23 162 0 1260 86 10 0 4
10 1 0 208 255 100 553 213 12 179 0 1430 88 11 0 1
11 0 0 116 255 100 698 207 48 221 0 1310 76 14 0 10
12 2 0 239 252 100 584 215 8 152 0 1529 90 8 0 2
13 0 0 110 275 100 459 200 36 100 0 619 96 4 0 0
14 1 0 145 263 100 583 218 18 165 0 1389 88 7 0 4
15 1 0 165 254 100 1404 587 26 179 0 2117 82 11 0 7
16 0 0 133 278 100 523 215 26 130 0 1068 93 6 0 2
17 0 0 77 292 100 506 219 35 117 0 657 94 4 0 2
18 1 0 235 257 100 655 218 25 185 0 1722 85 9 0 5
19 1 0 193 255 100 576 212 14 164 0 1485 89 8 0 2
20 0 0 363 5731 5686 727 177 62 532 0 423 36 46 0 18
21 1 0 174 256 100 608 220 24 174 0 1444 85 10 0 5
22 0 0 125 259 100 566 216 12 192 0 1645 85 11 0 4
23 0 0 46 317 100 457 216 39 93 0 118 99 1 0 0
24 0 0 47 298 100 406 198 48 76 0 123 98 2 0 0
25 3 0 414 270 100 882 340 8 158 0 1736 91 8 0 0
26 1 0 155 261 100 564 213 18 190 0 1330 87 11 0 2
27 1 0 217 257 100 552 220 2 160 0 1699 91 8 0 0
28 3 0 423 259 100 840 287 13 177 0 1683 88 10 0 2
29 0 0 752 1218 1113 666 127 77 346 0 637 56 25 0 19
30 0 0 103 294 100 468 211 31 98 0 552 96 4 0 0
31 1 0 109 252 100 570 207 16 190 0 1501 86 10 0 4
What to look for
- Note the much higher intr and ithr values for CPU#20 and CPU#29. Solaris selects some CPUs to handle the system interrupts. Which CPUs, and how many, are chosen depends on the I/O devices attached to the system, the physical location of those devices, and whether interrupts have been disabled on a CPU (psradmin command).
- intr – interrupts.
- ithr – interrupt threads (not including the clock interrupt).
- csw – voluntary context switches. When this number slowly increases and the application is not I/O bound, it may indicate mutex contention.
- icsw – involuntary context switches. When this number increases past 500, the system is under a heavy load.
- smtx – spins on mutexes. If smtx increases sharply, for instance from 50 to 500, it is a sign of a system resource bottleneck (for example, network or disk).
- usr, sys, and idl – together, these three columns represent CPU saturation. A well-tuned application under full load (0% idle) should fall within 80% to 90% usr and 10% to 20% sys time. A smaller percentage value for sys means more time for user code and fewer preemptions, which results in greater throughput for a Java application.
Things to try
- Do not include interrupt-handling CPUs in processor sets. In the above example, CPU#20 and CPU#29 are handling interrupts. If you wanted to run 14 instances of your application, and one instance performs best on 2 CPUs, it might seem reasonable to expect that creating 14 2-CPU processor sets would yield the best performance. A better solution is to create 13 processor sets that exclude the interrupt-handling CPUs, bind 13 of the processes to those sets, and let the last process run on the remaining CPUs. It is important to make available to your application as many CPUs as it can efficiently use.
- Do you see increasing csw? For a Java application, an increasing csw value most likely has to do with network use. A common cause of a high csw value is creating too many socket connections, either by not pooling connections or by handling new connections inefficiently. If this is the case, you will also see a high TCP connection count when executing netstat -a | wc -l (refer to the netstat section).
- Do you see increasing icsw? A common cause of this is preemption, most likely because of an end of time slice on the CPU. For a Java application, this could be a sign that there is room for improvement in code optimization.
iostat
The iostat
tool gives statistics on the disk I/O subsystem. The iostat
command has many options. More information can be found in the man pages. The following options provide information on locating I/O bottlenecks.
iostat Output
#iostat -xn 10
extended device statistics
r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 fd0
2.7 58.2 14.6 2507.0 0.0 1.4 0.0 23.0 0 52 d0
47.3 0.0 2465.6 0.0 0.0 0.4 0.0 8.8 0 30 d1
0.0 0.1 0.0 0.1 0.0 0.0 0.0 13.1 0 0 c0t0d0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c0t1d0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c0t6d0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c1t9d0
0.1 58.2 0.1 801.9 0.0 1.5 0.0 25.7 0 29 c1t10d0
2.1 64.4 10.5 818.8 0.0 1.6 0.0 23.5 0 38 c1t11d0
0.5 71.7 4.0 887.1 0.0 1.6 0.0 21.8 0 41 c1t12d0
92.0 0.0 1242.5 0.0 0.0 0.7 0.0 8.1 0 24 c1t13d0
84.7 0.0 1223.1 0.0 0.0 0.7 0.0 8.4 0 22 c1t14d0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 thirdeye:vold(pid268)
extended device statistics
r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 fd0
2.5 94.3 14.3 2372.5 0.0 4.0 0.0 41.8 0 85 d0
50.8 2.8 2000.3 22.4 0.0 0.7 0.0 13.8 0 29 d1
0.4 2.3 2.5 17.7 0.0 0.2 0.0 82.4 0 3 c0t0d0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c0t1d0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c0t6d0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c1t9d0
0.0 62.6 0.0 736.0 0.0 1.6 0.0 25.2 0 46 c1t10d0
1.9 60.6 9.5 746.9 0.0 2.6 0.0 41.5 0 45 c1t11d0
0.6 80.0 4.8 888.8 0.0 2.6 0.0 32.6 0 65 c1t12d0
74.8 2.4 1014.2 19.2 0.0 0.9 0.0 11.4 0 22 c1t13d0
75.7 0.4 986.1 3.2 0.0 0.5 0.0 6.7 0 20 c1t14d0
What to look for
- %b – percentage of time the disk is busy (transactions in progress). Average %b values over 25 could indicate a bottleneck.
- %w – percentage of time there are transactions waiting for service (queue non-empty).
- asvc_t – average response time of active transactions, in milliseconds. Despite the label, it is really the time between a user process issuing a read and that read completing. Consistent values over 30 ms could indicate a bottleneck.
Things to try
- For a Java application, disk bottlenecks can often be addressed by using software caches, for example a JDBC result set cache or a generated-pages cache. Disk reads and writes are slow; therefore, limiting disk access is a sure way to improve performance. Problems with too much disk access are often hidden when running on Solaris because of its own file system caches. Even so, using software caches to avoid file system and operating system overhead is recommended.
- Mount file systems with options (refer to the mount_ufs man page). Several mount options may eliminate some disk load. Which options to try depends highly on the type of data. One possible option is noatime, which tells the ufs file system not to update the access time on files. This may reduce load on systems accessing read-only files or doing error logging.
- # mount -F ufs -o noatime /<your_volume>
- Add more disks to the file system. If you are using a single-disk file system, upgrading to a hardware or software RAID is the next logical step. Hardware RAID is significantly faster than software RAID and is highly recommended; a software RAID solution adds additional computational (CPU) load to the system.
- Change the block size. Depending on storage hardware and application behavior, a block size other than the ufs default of 8192 bytes may perform better. See the mkfs and newfs man pages to determine ways to change the block size.
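The software-cache suggestion above can be sketched with a simple LRU (least recently used) cache. This is an illustrative example rather than anything from the article: the class name and capacity are hypothetical, and java.util.LinkedHashMap's access-order mode does the eviction bookkeeping.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal LRU software-cache sketch (hypothetical class, not a product API).
// Keeping the cache bounded limits both disk access and garbage creation.
public class ResultSetCache<K, V> extends LinkedHashMap<K, V> {
    private final int maxEntries;

    public ResultSetCache(int maxEntries) {
        super(16, 0.75f, true); // true = access order, giving LRU behavior
        this.maxEntries = maxEntries;
    }

    // Evict the least recently used entry once the bound is exceeded.
    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > maxEntries;
    }
}
```

A caller would consult the cache before touching the disk or database, and only run the real query or file read on a miss.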
netstat
The netstat
tool gives statistics on the network subsystem. It can be used to analyze many aspects of the network subsystem, two of which are the TCP/IP kernel module and the interface bandwidth. An overview of both uses is below.
netstat -I hme0 10
These netstat options are used to analyze interface bandwidth. The upper bound (maximum) of the current throughput can be calculated from the output. Only an upper bound is reported because netstat counts packets, which are not necessarily their maximum size. The upper bound of the bandwidth used can be calculated with the following equation:
Bandwidth Used = (Total Packets / Polling Interval) * MTU
where the polling interval here is 10 seconds and the default MTU is 1500 bytes. The current MTU for an interface can be found with: ifconfig -a
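As a quick sanity check, the bandwidth equation above can be coded directly. The class below is a hypothetical helper, not part of any Solaris tool; it assumes the 10-second polling interval and default 1500-byte MTU mentioned in the text.

```java
// Hypothetical helper applying the upper-bound formula from the text:
// bandwidth used <= (total packets / polling interval) * MTU.
public class BandwidthEstimate {

    // Returns an upper bound on bytes per second through the interface.
    static long upperBoundBytesPerSec(long totalPackets, long intervalSeconds,
                                      long mtuBytes) {
        return totalPackets / intervalSeconds * mtuBytes;
    }

    public static void main(String[] args) {
        // One 10-second netstat sample of 96144 input packets, MTU 1500.
        System.out.println(upperBoundBytesPerSec(96144, 10, 1500) + " bytes/sec");
    }
}
```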
netstat -I hme0 10 Output
#netstat -I hme0 10
input hme0 output input (Total) output
packets errs packets errs colls packets errs packets errs colls
122004816 272 159722061 0 0 348585818 2582 440541305 2 2
0 0 0 0 0 84144 0 107695 0 0
0 0 0 0 0 96144 0 123734 0 0
0 0 0 0 0 89373 0 114906 0 0
0 0 0 0 0 84568 0 108759 0 0
0 0 0 0 0 84720 0 108800 0 0
0 0 0 0 0 87911 0 112803 0 0
0 0 0 0 0 99046 0 126866 0 0
0 0 0 0 0 105500 0 134260 0 0
0 0 0 0 0 96404 0 123158 0 0
0 0 0 0 0 86732 0 111010 0 0
0 0 0 0 0 87753 0 112309 0 0
0 0 0 0 0 88752 0 114405 0 0
0 0 0 0 0 96240 0 123425 0 0
0 0 0 0 0 107527 0 136866 0 0
0 0 0 0 0 100686 0 128385 0 0
0 0 0 0 0 92745 0 118790 0 0
0 0 0 0 0 95187 0 122041 0 0
0 0 0 0 0 95105 0 122998 0 0
0 0 0 0 0 104498 0 134284 0 0
0 0 0 0 0 113289 0 144882 0 0
0 0 0 0 0 103227 0 132159 0 0
0 0 0 0 0 98239 0 125220 0 0
What to look for
- colls – collisions. If your network is not switched, then a low level of collisions is expected. As the network becomes increasingly saturated, collisions will increase and eventually become a bottleneck. The best solution for collisions is a switched network.
- errs – errors. The presence of errors could indicate device errors. If your network is switched, errors indicate that you are nearly consuming the bandwidth capacity of your network. The solution is to give the system more bandwidth, through more network interfaces or a network bandwidth upgrade; the right choice is highly dependent on your particular network architecture.
Things to try
- For a Java application, network saturation is difficult to address other than by increasing bandwidth. If network saturation occurs quickly (saturation at fewer than 8 CPUs for an application server running on 100 Mbit Ethernet), then investigating whether the application uses the network conservatively is a good first step.
- Increase network bandwidth. If your network is not switched, the best step is to upgrade to a switched network. If your network is switched, first check whether more network interfaces are a possible solution; otherwise, upgrade to a higher-bandwidth network.
netstat -sP tcp
These netstat options are used to analyze the TCP kernel module. Many of the reported fields indicate bottlenecks in the kernel module. These bottlenecks can be addressed using the ndd command and the tuning parameters referenced in the /etc/rc2.d/S69inet section.
netstat -sP tcp Output
#netstat -sP tcp
TCP tcpRtoAlgorithm = 4 tcpRtoMin = 400
tcpRtoMax = 60000 tcpMaxConn = -1
tcpActiveOpens = 34773 tcpPassiveOpens = 9015
tcpAttemptFails = 110 tcpEstabResets = 145
tcpCurrEstab = 106 tcpOutSegs =2338097
tcpOutDataSegs =1363583 tcpOutDataBytes =730037068
tcpRetransSegs = 531 tcpRetransBytes =139481
tcpOutAck =974222 tcpOutAckDelayed =388421
tcpOutUrg = 0 tcpOutWinUpdate = 96
tcpOutWinProbe = 53 tcpOutControl = 87975
tcpOutRsts = 666 tcpOutFastRetrans = 47
tcpInSegs =2302712
tcpInAckSegs =1148145 tcpInAckBytes =729808007
tcpInDupAck = 76300 tcpInAckUnsent = 0
tcpInInorderSegs =1828170 tcpInInorderBytes =995767266
tcpInUnorderSegs = 15155 tcpInUnorderBytes =113298
tcpInDupSegs = 1144 tcpInDupBytes =132520
tcpInPartDupSegs = 1 tcpInPartDupBytes = 416
tcpInPastWinSegs = 0 tcpInPastWinBytes = 0
tcpInWinProbe = 46 tcpInWinUpdate = 48
tcpInClosed = 251 tcpRttNoUpdate = 344
tcpRttUpdate =1105386 tcpTimRetrans = 989
tcpTimRetransDrop = 5 tcpTimKeepalive = 818
tcpTimKeepaliveProbe= 183 tcpTimKeepaliveDrop = 0
tcpListenDrop = 0 tcpListenDropQ0 = 0
tcpHalfOpenDrop = 0 tcpOutSackRetrans = 56
What to look for
- tcpListenDrop – If, after several looks at the command output, tcpListenDrop continues to increase, it could indicate a problem with listen queue size.
Things to try
- Increase the Java application thread count. A possible cause of an increasing tcpListenDrop is that application throughput is bottlenecked by the number of executing threads; increasing the thread count may help.
- Increase queue sizes. Increase the request queue sizes using ndd. More information on these and other ndd commands can be found in the /etc/rc2.d/S69inet section.
- ndd -set /dev/tcp tcp_conn_req_max_q 1024
- ndd -set /dev/tcp tcp_conn_req_max_q0 4096
netstat -a | grep <your_hostname> | wc -l
Running this command gives a rough count of socket connections on the system. There is a limit on how many connections can be open at one time, so this is a good tool to use when looking for bottlenecks.
netstat -a | grep <your_hostname> | wc -l Output
#netstat -a | wc -l
34567
What to look for
- socket count – If the number returned is greater than 20,000, then the number of socket connections could be a possible bottleneck.
Things to try
- For a Java application, a common cause of too many sockets is inefficient use of sockets. It is common practice in Java applications to create a socket connection each time a request is made. Creating and destroying socket connections is not only expensive, but can also cause unnecessary system overhead by creating too many sockets. Creating a connection pool may be a good solution to investigate. For an example of connection pool use, refer to Advanced Programming for the Java 2 Platform, Chapter 8.
- Decrease the point where anonymous port numbers start, widening the range available for socket connections.
- ndd -set /dev/tcp tcp_smallest_anon_port 1024
- Decrease the time a TCP connection stays in TIME_WAIT.
- ndd -set /dev/tcp tcp_time_wait_interval 60000
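A minimal connection pool along the lines suggested above might look like the following sketch. The SimplePool class is hypothetical; a production pool would also need a maximum size, connection validation, and timeouts.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.function.Supplier;

// Hypothetical minimal connection pool: reuse connections instead of
// opening a new socket per request, which keeps the socket count down.
public class SimplePool<C> {
    private final Deque<C> idle = new ArrayDeque<>();
    private final Supplier<C> factory; // opens a new connection on demand

    public SimplePool(Supplier<C> factory) {
        this.factory = factory;
    }

    // Hand out an idle connection if one exists; otherwise create one.
    public synchronized C acquire() {
        return idle.isEmpty() ? factory.get() : idle.pop();
    }

    // Return the connection for reuse rather than closing it.
    public synchronized void release(C connection) {
        idle.push(connection);
    }
}
```

A request handler would call acquire() instead of opening a socket, and release() instead of closing it, so the total number of open sockets stays near the pool's working-set size.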
verbose:gc
The java -verbose:gc option is a great tool for quickly diagnosing garbage collection (GC) bottlenecks. Calculate the total time spent in GC by adding up the times printed by -verbose:gc. If the fraction (time in GC)/(elapsed time) is greater than 0.2, then GC is most likely a problem; if it is less than 0.2, GC is not the issue. For more detailed information about JVM garbage collection, see Tuning Garbage Collection with the 1.3.1 Java Virtual Machine.
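The calculation described above can be sketched as follows. The class and the sample pause times are illustrative only; the per-collection times would come from the lines -verbose:gc prints.

```java
// Hypothetical helper for the rule of thumb above: sum the GC pause times
// reported by -verbose:gc and compare the total to elapsed wall-clock time.
public class GcFraction {

    static double fraction(double[] gcPausesSeconds, double elapsedSeconds) {
        double totalGc = 0.0;
        for (double pause : gcPausesSeconds) {
            totalGc += pause;
        }
        return totalGc / elapsedSeconds;
    }

    public static void main(String[] args) {
        double[] pauses = {0.0042, 0.0310, 0.1200}; // illustrative pause times
        double f = fraction(pauses, 60.0);          // over a 60-second run
        System.out.println(f > 0.2 ? "GC is likely a bottleneck"
                                   : "GC is not the issue");
    }
}
```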
Java Application
Tnf traces
This is a great tool for both profiling and debugging a Java application. On a Solaris system, refer to the man pages for tracing, TNF_PROBE, tnfdump, tnfmerge, and prex. They give an overall understanding of inserting probes into source code, although the man pages were written with C/C++ sources in view.
Here are the steps to take for a Java source:
Step 1: Insert the probes as shown in the short example below.
import java.io.*;
import java.util.*;

class probedObject {
    public native void objectCreateStart();
    public native void objectCreateEnd();

    static {
        System.loadLibrary("javaProbe");
    }
}

class Main {
    public static void main(String[] arg) throws Throwable {
        probedObject obj = new probedObject();
        long startTime = System.currentTimeMillis();
        for (int i = 0; i < 1000; i++) {
            obj.objectCreateStart();
            obj = new probedObject();
            obj.objectCreateEnd();
        }
        System.out.println(System.currentTimeMillis() - startTime);
    }
}
Step 2: Compile Main.java
#javac Main.java
Step 3: Generate the .h file
Step 2 also produces the class file probedObject.class. Use this class to generate the .h file with JNI as follows:
#javah -jni probedObject
Step 4: Write the C routine javaProbe.c
#include <jni.h>
#include "probedObject.h"
#include <tnf/probe.h>
JNIEXPORT void JNICALL Java_probedObject_objectCreateStart(JNIEnv *env,
                                                           jobject obj) {
    TNF_PROBE_0(object_create_start, "object creation", "");
}

JNIEXPORT void JNICALL Java_probedObject_objectCreateEnd(JNIEnv *env,
                                                         jobject obj) {
    TNF_PROBE_0(object_create_end, "object creation", "");
}
Step 5: Generate the shared library
#cc -G -I/usr/java/include -I/usr/java/include/solaris javaProbe.c -o libjavaProbe.so
Step 6: Run the program under prex
Note that prex uses a circular buffer, as mentioned in the prex man page; use the -o and -s options for prex as needed.
darwin 69 =>prex java Main
Target process stopped
Type "continue" to resume the target, "help" for help ...
prex> enable $all
prex> continue
Target process exec'd
Step 7: Use tnfdump on the output trace file to get ASCII output, or use tnfmerge to merge trace files. For information on TNF (Trace Normal Form), including TNFView and tnfmerge, refer to Performance Profiling Using TNF.
JVMPI
The JVMPI (Java Virtual Machine Profiler Interface) is a two-way function call interface between the Java virtual machine and an in-process profiler agent. On one hand, the virtual machine notifies the profiler agent of various events, corresponding to, for example, heap allocation, thread start, etc. On the other hand, the profiler agent issues controls and requests for more information through the JVMPI. For example, the profiler agent can turn on/off a specific event notification based on the needs of the profiler front-end. A detailed overview of JVMPI can be found at Java Virtual Machine Profiler Interface (JVMPI).
Commercial Profiling Tools
A number of commercial and public source profiling tools are available; all of them use the JVMPI.
Tuning Parameters
Solaris 8 Tuning Parameters
Below are the Solaris 8 and JVM tuning parameters found to work best with server-side Java applications. The tuning parameters are listed with a brief description. A more in-depth look at when to use these parameters is discussed in the Analysis Tools and Tuning Process sections.
/etc/system
The table below is a list of /etc/system
tuning parameters used during the performance study. The changes are applied by appending each to the /etc/system
file and rebooting the system.
/etc/system Option |
Description |
set rlim_fd_max=8192 |
“Hard” limit on file descriptors that a single process might have open. To override this limit requires superuser privilege. |
set tcp:tcp_conn_hash_size=8192 |
Controls the hash table size in the TCP module for all TCP connections. |
set autoup=900 |
Along with tune_t_fsflushr, autoup controls the amount of memory examined for dirty pages in each invocation and the frequency of file system sync operations.
The value of autoup is also used to control whether a buffer is written out from the free list. Buffers marked with the B_DELWRI flag (file content pages that have changed) are written out whenever the buffer has been on the list for longer than autoup seconds.
Increasing the value of autoup keeps the buffers around for a longer time in memory. |
set tune_t_fsflushr=1 |
Specifies the number of seconds between fsflush invocations. |
set rechoose_interval=150 |
Number of clock ticks before a process is deemed to have lost all affinity for the last CPU it ran on. After this interval expires, any CPU is considered a candidate for scheduling a thread. This parameter is relevant only for threads in the timesharing class. Real-time threads are scheduled on the first available CPU. |
A description of all /etc/system parameters can be found in the Solaris Tunable Parameters Reference Manual.
/etc/rc2.d/S69inet
Below is a list of TCP kernel tuning parameters. These are known TCP tuning parameters for high throughput Java servers. The parameters can be applied by executing each line individually with root privileges, or appending each to the /etc/rc2.d/S69inet
file and rebooting the system.
A detailed description of each of these parameters can be found in the Solaris Tunable Parameters Reference Manual.
/etc/rc2.d/S69inet Option |
Description |
ndd -set /dev/tcp tcp_xmit_hiwat 65535
ndd -set /dev/tcp tcp_recv_hiwat 65535 |
The default send window size in bytes.
The default receive window size in bytes. |
ndd -set /dev/tcp tcp_cwnd_max 65535 |
The maximum value of TCP congestion window (cwnd ) in bytes. |
ndd -set /dev/tcp tcp_rexmit_interval_min 3000 |
The default minimum retransmission timeout (RTO ) value in milliseconds. The calculated RTO for all TCP connections cannot be lower than this value. |
ndd -set /dev/tcp tcp_rexmit_interval_max 10000 |
The default maximum retransmission timeout value (RTO ) in milliseconds. The calculated RTO for all TCP connections cannot exceed this value. |
ndd -set /dev/tcp tcp_rexmit_interval_initial 3000 |
The default initial retransmission timeout value (RTO ) in milliseconds. |
ndd -set /dev/tcp tcp_time_wait_interval 60000 |
The time in milliseconds a TCP connection stays in TIME-WAIT state. Refer to RFC 1122, 4.2.2.13 for more information. |
ndd -set /dev/tcp tcp_keepalive_interval 900000 |
The interval in milliseconds before TCP sends a keepalive probe on an otherwise idle connection. Refer to RFC 1122, 4.2.3.6 for more information. |
ndd -set /dev/tcp tcp_conn_req_max_q 1024 |
The default maximum number of pending TCP connections for a TCP listener waiting to be accepted by accept(3SOCKET). |
ndd -set /dev/tcp tcp_conn_req_max_q0 4096 |
The default maximum number of incomplete (three-way handshake not yet finished) pending TCP connections for a TCP listener.
Refer to RFC 793 for more information on TCP three-way handshake. |
ndd -set /dev/tcp tcp_ip_abort_interval 60000 |
The default total retransmission timeout value for a TCP connection in milliseconds. For a given TCP connection, if TCP has been re-transmitting for tcp_ip_abort_interval period and it has not received any acknowledgment from the other endpoint during this period, TCP closes this connection. |
ndd -set /dev/tcp tcp_smallest_anon_port 1024 |
The smallest port number at which anonymous port allocation is allowed. |
Java Application Tuning Parameters
Brief suggestions for basic Java server applications are listed below.
Number of Execution Threads
A general rule for thread count is to use as few threads as possible; the JVM performs best with the fewest busy threads. A good starting point for thread count can be found with the following equation:
(Number of Execution Threads) = (Number of Transactions) / (Time in seconds) = Throughput (transactions/sec)
It is important to remember that this equation gives a good starting point for thread count tuning, not the best value for your application. The number of execution threads can greatly influence performance; therefore, proper sizing of this value is very important.
Number of Database Connections
The number of database connections, commonly known as a connection or resource pool, is closely tied to the number of execution threads. A rule of thumb is to match the number of database connections to the number of execution threads. This is a good starting point for finding the correct number of database connections. Over-configuring this value could cause unnecessary overhead on the database, while under-configuring could tie up all execution threads waiting on database I/O.
(Number of Database Connections) = (Number of Execution Threads)
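The two starting-point rules above can be sketched as a tiny helper. The class is hypothetical, and, as the text stresses, the numbers it produces are only a starting point for tuning.

```java
// Hypothetical sizing helper for the starting-point rules above:
// threads roughly equal the required throughput in transactions/sec,
// and the database connection pool matches the thread count.
public class SizingEstimate {

    static int startingThreads(int transactions, int seconds) {
        return Math.max(1, transactions / seconds); // throughput in tx/sec
    }

    static int startingDbConnections(int executionThreads) {
        return executionThreads; // one connection per execution thread
    }

    public static void main(String[] args) {
        int threads = startingThreads(3000, 60); // 3000 transactions in 60 s
        System.out.println(threads + " threads, "
                + startingDbConnections(threads) + " DB connections");
    }
}
```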
Software Caches
Many server-side Java applications implement some type of software cache, commonly for JDBC result sets or frequently generated dynamic pages. Software caches are the part of an application most likely to cause unnecessary garbage collection overhead, as a result of the cache architecture and the replacement policy of the cache.
Most middle-tier applications have some sort of caching. These caches should be studied with GC in mind to see whether they increase GC, and the architecture and replacement strategy with the lower GC cost should be chosen. Careful implementation of caches with garbage collection in mind greatly improves performance simply by limiting garbage.
Java Virtual Machine Tuning Parameters
Below are a few Java Virtual Machine tuning parameters that have been found to improve performance. There are many more tuning parameters; the following are examples of what has worked for us. A detailed list of all tuning parameters can be found in Java HotSpot VM Options.
Java VM Option |
Description |
-XX:+UseLWPSynchronization |
Use LWP-based instead of thread-based synchronization (SPARC only). |
-XX:SurvivorRatio=40 |
Ratio of eden/survivor space size [Solaris: 64, Linux/Windows: 8]. |
-XX:NewSize=128m
-XX:MaxNewSize=128m |
Disable young generation resizing. To do this on HotSpot, simply set the size of the young generation to a constant by making NewSize and MaxNewSize equal. |
-Xms512m
-Xmx512m |
Overall size of the heap. |

REF: http://developers.sun.com/solaris/articles/performance_tools.html