Any insights from VisualVM analysis?

2
Hello, I am a newbie when it comes to heap analysis. All I can see is that the HSQLDB and Ehcache classes are using far more memory than they should. Secondly, the javaw.exe memory usage just keeps growing until it hits an OutOfMemoryError. I have narrowed it down to one microflow that executes every 5 seconds. It does XML-to-domain mapping, uses ~10 non-persistable entities, and commits the few DB entities that are really needed. Everything persistable is either committed or rolled back. Please see the attached snapshot of the heap dump analysis and let me know where I should focus my efforts. Thanks. http://s18.postimg.org/87mlvv4w9/jvm.png
asked
2 answers
2

Upgrade to Mendix 4.8? From the release notes:

Tickets 22391, 22393: The Mendix object cache now allows an unlimited number of objects per session (previously: 3,000,000). In addition, all Java action types are automatically monitored and garbage collected, now including actions executed with a System context and asynchronous actions, as long as they are being executed using the Core.execute()/Core.executeAsync() API. See https://world.mendix.com/display/refguide4/Garbage+collection for more information about the object cache and garbage collection. NOTE: Projects using large numbers of non-persistent objects and/or using system contexts in Java are strongly recommended to upgrade to version 4.8 to ensure proper handling of these cases. As always, perform proper testing when upgrading your project before putting it in production.

answered
0

It might be worth checking whether the objects loaded into memory by that microflow still have (soft) references that prevent them from being collected (under 'Instances' in VisualVM).
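To illustrate the kind of lingering reference to look for: the classic pattern is a long-lived collection (a static field, a session-scoped cache, an unbounded Ehcache region) that keeps adding objects and never evicts them, so the GC can never reclaim anything. A minimal, self-contained Java sketch of that pattern (the class and method names here are hypothetical, not Mendix APIs):

```java
import java.util.ArrayList;
import java.util.List;

public class LeakDemo {
    // A long-lived (static) collection: everything added here stays
    // strongly reachable from a GC root, so it can never be collected.
    static final List<byte[]> CACHE = new ArrayList<>();

    // Hypothetical per-invocation work (stand-in for the microflow):
    // it parks data in the cache and never removes it, so heap usage
    // grows on every call -- the unbounded-growth pattern.
    static void handleInvocation() {
        CACHE.add(new byte[1024 * 1024]); // retain 1 MB per call
    }

    public static void main(String[] args) {
        for (int i = 0; i < 5; i++) {
            handleInvocation();
        }
        // In VisualVM this shows up as one class (here byte[])
        // dominating the heap histogram, with an instance count that
        // rises in step with the number of invocations.
        System.out.println("cached blocks: " + LeakDemo.CACHE.size());
    }
}
```

In VisualVM, the giveaway is that the retained objects' nearest GC root is such a collection rather than a live thread's stack frame.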

Also, have a look at your memory charts and, if possible, your GC logs. Do the GC and the young/old generations behave normally when the problematic microflow is not active? What happens when the microflow starts running repeatedly? A gradual rise to the point where you hit the OOM?
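If GC logging isn't enabled yet, it can be turned on with standard HotSpot flags, and the stock JDK tools can show whether the old generation keeps growing and which class dominates the heap. A sketch of the commands (the `<pid>`, jar name, and log path are placeholders; the flags are the pre-Java 9 HotSpot spelling, matching the JVMs of the Mendix 4.x era):

```shell
# Enable GC logging when starting the JVM
java -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Xloggc:gc.log -jar app.jar

# Live-object class histogram of the running JVM
# (find the javaw.exe pid with jps; -histo:live forces a full GC first)
jmap -histo:live <pid> | head -n 20

# Heap generation occupancy every 5 seconds: if the OLD column only
# ever climbs across full GCs, something is retaining references
jstat -gcutil <pid> 5000
```

Comparing two `jmap -histo:live` snapshots taken a few minutes apart usually makes the leaking class obvious.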

answered