Java Server JVMs Always Need To Be Tuned

Jul 29, 2015

If you are running Java on servers, or put another way, using JavaEE infrastructure, and have not tuned your JVM, you are almost certainly underperforming, consuming more resources than needed, or both.  It is now 15 years since I started working on Java-based servers via the JRun servlet container, a commercial product which was sunsetted a few years ago.  I have also worked with Tomcat, WebSphere and, to a lesser extent, WebLogic.  All of these containers, and more to the point their JVMs, needed tuning, and the benefits of that tuning were often quite startling.

Here are two screenshots obtained by analyzing garbage collection logs (GC logs).  As a side note, there are two different sets of arguments we can pass to the JVM to 
generate GC logs, depending on the JVM version, as follows.

For higher update versions of the Oracle 1.7.x JVM and for the 1.8.x JVM we can use these, which will rotate and archive the logs...

-XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=3 -XX:GCLogFileSize=10240k -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintHeapAtGC -verbose:gc -Xloggc:javatuneGC.log

For lower update versions of the 1.7 Oracle JVM and earlier versions we use these...

-XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintHeapAtGC -verbose:gc -Xloggc:javatuneGC.log
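Once a server is live, it can be useful to confirm which of these flags the JVM actually started with.  As a minimal sketch (not from the original article), the standard `RuntimeMXBean` API exposes the arguments the JVM itself was launched with:

```java
import java.lang.management.ManagementFactory;
import java.util.List;

public class ShowJvmFlags {
    public static void main(String[] args) {
        // Arguments passed to the JVM itself (not to main),
        // e.g. -Xloggc:javatuneGC.log if GC logging is enabled
        List<String> jvmArgs = ManagementFactory.getRuntimeMXBean().getInputArguments();
        for (String arg : jvmArgs) {
            System.out.println(arg);
        }
    }
}
```

Running this inside the container (or via a JMX console) quickly shows whether the GC logging flags made it into the launch scripts.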

Depending on which collector is being used, the log output will look something like this...

{Heap before GC invocations=25276 (full 89):
 par new generation   total 3774912K, used 3390380K [0x00000005c0000000, 0x00000006c0000000, 0x00000006c0000000)
  eden space 3355520K,  97% used [0x00000005c0000000, 0x0000000687fa3af8, 0x000000068cce0000)
  from space 419392K,  27% used [0x00000006a6670000, 0x00000006ad5b7520, 0x00000006c0000000)
  to   space 419392K,   0% used [0x000000068cce0000, 0x000000068cce0000, 0x00000006a6670000)
 concurrent mark-sweep generation total 4194304K, used 1040787K [0x00000006c0000000, 0x00000007c0000000, 0x00000007c0000000)
 Metaspace       used 159884K, capacity 182127K, committed 191564K, reserved 1218560K
  class space    used 16293K, capacity 20718K, committed 21920K, reserved 1048576K
334902.876: [GC (Allocation Failure) 334902.876: [ParNew: 3390380K->206892K(3774912K), 0.0789400 secs] 4431167K->1247829K(7969216K), 0.0792090 secs] [Times: user=0.30 sys=0.00, real=0.08 secs] 
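Tools like GCViewer parse lines such as this for you, but as an illustrative sketch, the total pause time in the ParNew line above can be pulled out with a regular expression (the hard-coded log line below is copied from the sample above):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class GcPauseParser {
    // Matches pause figures like ", 0.0792090 secs]"; the last one on a
    // ParNew line is the total pause for that collection.
    private static final Pattern PAUSE = Pattern.compile(", (\\d+\\.\\d+) secs\\]");

    public static double lastPauseSeconds(String logLine) {
        double pause = -1.0; // -1 signals "no pause figure found"
        Matcher m = PAUSE.matcher(logLine);
        while (m.find()) {
            pause = Double.parseDouble(m.group(1));
        }
        return pause;
    }

    public static void main(String[] args) {
        // The sample ParNew line from the log excerpt above
        String line = "334902.876: [GC (Allocation Failure) 334902.876: "
                + "[ParNew: 3390380K->206892K(3774912K), 0.0789400 secs] "
                + "4431167K->1247829K(7969216K), 0.0792090 secs] "
                + "[Times: user=0.30 sys=0.00, real=0.08 secs]";
        System.out.println(lastPauseSeconds(line)); // prints 0.079209
    }
}
```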

So back to the screenshots from the GCViewer utility; these are from client assignments I was called in to help with.

In this first case, we see what I would call "thrashing" in the garbage collection activity.

In this second case we are seeing horrendously long full garbage collections; that large black rectangle is one that lasted 75 seconds.  A Full GC produces 
a "stop the world" event, in which all other JVM activity pauses, so 75 seconds is a very long time indeed.
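GC logs are the best evidence of such pauses, but you can also watch cumulative collection counts and times from inside a running JVM.  As a small sketch using the standard management API (not something the original article shows):

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class GcMonitor {
    public static void main(String[] args) {
        // Each bean covers one collector, e.g. "ParNew" or "ConcurrentMarkSweep"
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            long count = gc.getCollectionCount(); // collections since JVM start
            long millis = gc.getCollectionTime(); // cumulative collection time in ms
            System.out.printf("%s: %d collections, %d ms total%n",
                    gc.getName(), count, millis);
        }
    }
}
```

Sampling these figures periodically and diffing them gives a rough picture of how much wall-clock time is going to collection, which is often the first hint that the logs deserve a closer look.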

So in both these cases the server's overall performance was compromised, and as is often the case, the client had tried to fix it by increasing hardware resources, either by adding RAM 
or even by upgrading to bigger hardware.  

The one thing to bear in mind is that the JVM is the heart of all JavaEE infrastructure, and it is almost never tuned adequately "out of the box", because the systems
that run on the JVM come in so many different shapes and sizes, with widely varying applications and workloads.  

Here are some overall tips that can help -

Finally, here is a GC log viewed in GCViewer where performance is good...



About Mike Brunt

Mike has been working since 2001 on all things Java server-side.  This includes troubleshooting, tuning, and infrastructure design, engineering and migration.
