About Me

Rohit is an investor, startup advisor and an Application Modernization Scale Specialist working at Google.

Wednesday, October 21, 2015

Tuning the default memory setting of the Java Buildpack to avoid OOMs

Credit for this blog post largely goes to Daniel Mikusa from Pivotal Support and others who have debugged and diagnosed a number of OOM issues recently with JDK8 and Cloud Foundry Java Buildpack.

Situation

We have seen significant improvements in long-running performance for certain enterprise apps after decreasing the amount of memory allocated to thread stacks and increasing the native memory (i.e., metaspace) of the container. If your app is running out of memory (OOM), make the following changes to the Java Buildpack's memory settings:

1.) Lower the thread stack size. This defaults to right around 1M per thread, which is more than most threads ever need. I usually start by lowering it to 228k per thread, the JVM minimum. This will work fine unless you have apps that do lots of recursion. At any rate, if you see any StackOverflowErrors, just bump the value back up until they go away.
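If you prefer to pass the JVM flag directly, the buildpack's JAVA_OPTS support can carry a stack-size override; a minimal sketch, assuming that framework is active for your app:

```shell
# Hedged alternative: set the thread stack size via a JVM flag
# (assumes the buildpack's java_opts framework picks this up).
cf set-env my-app JAVA_OPTS '-Xss228k'

# Restage so the buildpack re-runs with the new setting
cf restage my-app
```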

2.) The default allocation of memory weights is 75 for heap and 10 for native. I suggest starting with 60 heap and 25 native. That's probably a bit conservative, but it should leave more free room in the container, which should help prevent crashes. Lowering the heap weight to 60 reduces the total amount of heap space the app can use. If this causes problems for the app, such as OutOfMemoryErrors, you may need to raise the container's memory limit instead. Once adjusted, you should see between 100M and 150M of free memory in the container.

In other words, for JDK 8:

MEMORY_LIMIT - (HEAP + METASPACE + 100M) should be between 100M and 150M.
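To make the arithmetic concrete, here is a sketch of how the suggested weights play out for a hypothetical 1G container. The numbers are illustrative only; the buildpack's memory calculator does its own sizing and rounding.

```shell
# Hypothetical 1G container with the weights suggested above (illustrative)
MEMORY_LIMIT=1024                         # container memory limit, in MB
HEAP=$(( MEMORY_LIMIT * 60 / 100 ))       # heap weight 60  -> 614M
NATIVE=$(( MEMORY_LIMIT * 25 / 100 ))     # native weight 25 -> 256M
FREE=$(( MEMORY_LIMIT - HEAP - NATIVE ))  # headroom left in the container

echo "heap=${HEAP}M native=${NATIVE}M free=${FREE}M"
```

With these weights the free headroom comes out around 154M, in the neighborhood of the 100-150M target above.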

The problem most customers hit when moving their Java apps to CF is that they've never had a hard limit on total system memory before. Deploying to WAS/WebLogic/JBoss (or any other application container) is typically done on a system with swap space. That means if they don't have their memory settings quite right, the worst that happens is they use a bit of swap. If it's just a little swap, they probably won't even notice. With CF, there's much less forgiveness in the system: exceeding the memory limit by even a byte will result in your app being killed.

How to Fix This?

Set the JBP_CONFIG_OPEN_JDK_JRE or the JBP_CONFIG_ORACLE_JRE env var, then restage the app:


cf set-env my-app JBP_CONFIG_ORACLE_JRE_MEMORY_HEURISTICS '{heap: 60, native: 25, stack: "228k"}'

cf set-env my-app JBP_CONFIG_OPEN_JDK_JRE '[memory_calculator: {memory_sizes: {stack: 228k, heap: 225M}}]'

cf set-env my-app JBP_CONFIG_ORACLE_JRE '[memory_calculator: {memory_sizes: {stack: 228k}, memory_heuristics: {heap: 60, native: 25}}]'

For versions of the Java Buildpack prior to 3.3:
cf set-env my-app JBP_CONFIG_OPEN_JDK_JRE '[memory_calculator: {memory_heuristics: {stack: .01, heap: 10, metaspace: 2}}]'
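Whichever variable you set, the app must be restaged before the buildpack picks it up, and `cf env` lets you confirm what was actually set. Both are standard cf CLI commands:

```shell
# Restage so the buildpack re-runs with the new memory settings
cf restage my-app

# Verify the user-provided environment variables took effect
cf env my-app
```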

Script For Parsing NMT

If you do happen to enable -XX:NativeMemoryTracking, then use the script below to graph and generate charts of memory growth:


and here are some quick docs for it.


There's one issue at the moment: the process id that the script looks for in the top output is hard-coded to 35. It's usually 35 because of the order in which the processes start up, but that may not always be the case. For now, you can find the pid manually in the log and change the script. I'll try to update the script to detect the pid automatically, but haven't had the time yet.
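Until the script handles this itself, the pid lookup could be sketched like this. The `find_java_pid` helper is hypothetical (not part of the script above); in the container you would feed it real `ps` output instead of the canned sample shown here:

```shell
# Hedged sketch: derive the JVM pid from "pid comm" lines instead of
# hard-coding 35. Prints the pid of the first process named "java".
find_java_pid() {
  awk '$2 == "java" {print $1; exit}'
}

# Sample input for illustration; in the container you'd run:
#   ps -eo pid,comm | find_java_pid
printf '1 sh\n35 java\n40 top\n' | find_java_pid
```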
