out of memory error follow up

Bala Nair bnairtm at comcast.net
Tue Feb 8 01:55:26 UTC 2011


Following up on a post from about a month ago.  We were seeing a
persistent slow memory leak in the RHQ server's tenured generation
that eventually led to an OutOfMemoryError after the server had been
running for about a week.  I captured a heap dump and found hundreds
of thousands of stateless session beans in memory.  Here's a snapshot
from my profiler of the classes with the greatest number of instances
(sizes in bytes):

Name                                                                 Objects   Shallow Size   Retained Size
java.util.HashMap$Entry                                              1939755   93108240       189082696
java.util.HashMap$Entry[]                                            1090957   167796768      340273520
java.util.HashMap                                                    1084265   69392960       408521632
java.util.LinkedList$Entry                                           860965    34438600       727956072
org.jboss.ejb3.BaseSessionContext                                    856281    34251240       34251240
org.rhq.enterprise.server.authz.RequiredPermissionsInterceptor       856281    13700496       13700496
org.rhq.enterprise.server.common.TransactionInterruptInterceptor     856281    13700496       13700496
org.jboss.ejb3.stateless.StatelessBeanContext                        856265    68501200       490959040
java.lang.String                                                     429025    17161000       48902064
char[]                                                               379454    37897872       37897872
java.lang.Integer                                                    171633    4119192        4119192
java.util.Hashtable$Entry                                            157623    7565904        34980432
java.util.TreeMap$Entry                                              105496    6751744        14950816
java.lang.String[]                                                   98401     4340480        6555536
org.rhq.enterprise.server.auth.SubjectManagerBean                    91116     6560352        49567104
org.rhq.enterprise.server.auth.TemporarySessionPasswordGenerator     91116     3644640        43006752
org.rhq.enterprise.server.authz.AuthorizationManagerBean             91115     2186760        2186760
org.rhq.enterprise.server.alert.AlertConditionManagerBean            91084     2914688        2914688
org.rhq.enterprise.server.alert.AlertManagerBean                     90914     9455056        9455056
org.rhq.enterprise.server.alert.AlertDefinitionManagerBean           90911     4363728        4363728
org.rhq.enterprise.server.alert.AlertConditionLogManagerBean         90903     5090568        5090568
org.rhq.enterprise.server.alert.CachedConditionManagerBean           90903     4363344        4363344
org.rhq.enterprise.server.alert.AlertDampeningManagerBean            90903     3636120        3636120
org.jboss.security.SecurityAssociation$SubjectContext                49229     2362992        2362992
org.rhq.enterprise.server.cloud.instance.ServerManagerBean           39354     3463152        3463152
org.rhq.enterprise.server.cloud.CloudManagerBean                     39354     2833488        2833488
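
For anyone who wants to poke at a comparable dump, the stock JDK jmap
tool can capture one (the output file name here is just an example):

jmap -dump:format=b,file=rhq-heap.hprof <rhq-server-pid>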


Here are the merged paths from the SubjectManagerBean instances to the
GC root:

<All the objects>
org.jboss.ejb3.stateless.StatelessBeanContext
java.util.LinkedList$Entry
java.util.LinkedList$Entry
java.util.LinkedList
org.jboss.ejb3.InfinitePool
org.jboss.ejb3.ThreadlocalPool
org.jboss.ejb3.stateless.StatelessContainer


All the other manager beans have similar merged paths.  So I started
to wonder why there were so many SLSBs in the ThreadlocalPools, and
after some digging found this thread
(http://community.jboss.org/message/363520), which sort of describes
what I'm seeing.  I still don't know why it's happening, but it gave
me something to try.
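
Before getting to that, here's a minimal sketch of the structure the
merged path implies.  This is my own illustration of the pattern, not
the actual JBoss ThreadlocalPool/InfinitePool source:

import java.util.LinkedList;

// Hypothetical sketch of the shape suggested by the merged path:
// ThreadlocalPool -> InfinitePool -> LinkedList.
class SketchThreadlocalPool<T> {

    interface Factory<T> { T newInstance(); }

    private final Factory<T> factory;

    // One unbounded LinkedList of idle instances per thread; nothing
    // ever trims these lists, so they shrink only if the owning
    // thread dies.
    private final ThreadLocal<LinkedList<T>> perThread =
        new ThreadLocal<LinkedList<T>>() {
            @Override
            protected LinkedList<T> initialValue() {
                return new LinkedList<T>();
            }
        };

    SketchThreadlocalPool(Factory<T> factory) {
        this.factory = factory;
    }

    T get() {
        LinkedList<T> pool = perThread.get();
        // An empty pool on *this* thread means a brand-new instance,
        // no matter how many idle instances other threads hold.
        return pool.isEmpty() ? factory.newInstance() : pool.removeFirst();
    }

    void release(T instance) {
        // Instances return to the pool of whichever thread releases
        // them, not necessarily the thread that created them.
        perThread.get().addLast(instance);
    }
}

If get() and release() ever run on different threads, or if worker
threads churn, instances would collect in per-thread lists that are
rarely or never drained.  That would line up with the ~856,000
StatelessBeanContext instances in the table above, though it's only
my reading of the path; it's what prompted the experiment below.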
I changed the Stateless Bean pool class in ejb3-interceptors-aop.xml
from ThreadlocalPool to StrictMaxPool.  Now when I run the server and
watch it with my profiler, I see at most 3 SubjectManagerBeans in
memory, and the same appears to be true for the other SLSBs.  This
isn't a real solution, but I'm hoping someone can shed light on
what's actually going on.  I would be happy to upload the heap dump
somewhere public, but it's almost a GB in size.
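
For reference, the edit is in the "Stateless Bean" domain of
ejb3-interceptors-aop.xml.  From memory, on the JBoss AS 4.2.x line
the stanza looks roughly like this after the change; the exact
annotation class and attributes vary by AS version, so treat it as
illustrative rather than definitive:

<domain name="Stateless Bean" extends="Intercepted Bean" inheritBindings="true">
   <annotation expr="!class(@org.jboss.annotation.ejb.PoolClass)">
      <!-- illustrative; was org.jboss.ejb3.ThreadlocalPool.class -->
      @org.jboss.annotation.ejb.PoolClass (value=org.jboss.ejb3.StrictMaxPool.class, maxSize=30, timeout=10000)
   </annotation>
   ...
</domain>

Unlike ThreadlocalPool, StrictMaxPool is a single pool shared across
threads and bounded at maxSize instances, which fits the handful of
SubjectManagerBeans I see now.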

Bala Nair
SeaChange International




More information about the rhq-devel mailing list