Relation between maximum java heapspace in MB and JVM settings

1
We are running our migrated Mendix 5.18 application in a Windows environment with 16 GB of memory. We have set the maximum Java heap size to 2560 MB, and our JVM settings are: -XX:MaxPermSize=128M -XX:-HeapDumpOnOutOfMemoryError (our old Mendix 4 settings). We still encounter performance issues. Sometimes the application runs very slowly, and sometimes associated objects cannot be retrieved. Some microflows also run very slowly...
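
For reference, these settings presumably translate into JVM arguments roughly like the following (a sketch; exactly how the Mendix Service Console composes the command line is an assumption, with the heap setting passed as -Xmx):

    -Xmx2560M -XX:MaxPermSize=128M -XX:-HeapDumpOnOutOfMemoryError

Note that the minus sign in -XX:-HeapDumpOnOutOfMemoryError disables heap dumps on an OutOfMemoryError.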
asked
3 answers
0

Your MaxPermSize of 128M seems rather low. My first try would be to raise it to 512M, or to remove the setting completely (which is what works for us).
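
A minimal sketch of the adjusted setting, assuming the rest of the JVM arguments stay as they are:

    -XX:MaxPermSize=512M -XX:-HeapDumpOnOutOfMemoryError

The 512M value is just the suggestion above, not a Mendix default; the alternative is to drop the MaxPermSize argument entirely and let the JVM use its own default.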

I am curious what exact arguments the Mendix Service on Windows passes to the JVM at application startup; I have not been able to find them documented anywhere. They may or may not contain a PermGen size, but I would expect they do, and in that case you are overriding it with a lower value.
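
If you want to verify what the JVM actually received, a small snippet like this (hypothetical placement, e.g. a standalone class or a custom Java action) prints the startup arguments and the effective maximum heap:

    import java.lang.management.ManagementFactory;
    import java.util.List;

    public class JvmArgsDump {
        public static void main(String[] args) {
            // JVM input arguments (the -X / -XX / -D flags, not application arguments)
            List<String> jvmArgs = ManagementFactory.getRuntimeMXBean().getInputArguments();
            for (String arg : jvmArgs) {
                System.out.println(arg);
            }
            // Effective maximum heap size in MB, to compare against the configured 2560 MB
            long maxHeapMb = Runtime.getRuntime().maxMemory() / (1024L * 1024L);
            System.out.println("Max heap (MB): " + maxHeapMb);
        }
    }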

answered
0

Stacktrace:

An existing connection was forcibly closed by the remote host
    at sun.nio.ch.SocketDispatcher.write0(Native Method)
    at sun.nio.ch.SocketDispatcher.write(Unknown Source)
    at sun.nio.ch.IOUtil.writeFromNativeBuffer(Unknown Source)
    at sun.nio.ch.IOUtil.write(Unknown Source)
    at sun.nio.ch.SocketChannelImpl.write(Unknown Source)
    at org.eclipse.jetty.io.nio.ChannelEndPoint.flush(ChannelEndPoint.java:293)
    at org.eclipse.jetty.io.nio.SelectChannelEndPoint.flush(SelectChannelEndPoint.java:362)
    at org.eclipse.jetty.io.nio.ChannelEndPoint.flush(ChannelEndPoint.java:341)
    at org.eclipse.jetty.io.nio.SelectChannelEndPoint.flush(SelectChannelEndPoint.java:336)
    at org.eclipse.jetty.http.HttpGenerator.flushBuffer(HttpGenerator.java:841)
    at org.eclipse.jetty.http.AbstractGenerator.blockForOutput(AbstractGenerator.java:523)
    at org.eclipse.jetty.server.HttpOutput.write(HttpOutput.java:170)
    at org.eclipse.jetty.server.HttpOutput.write(HttpOutput.java:107)
    at org.apache.commons.io.IOUtils.copyLarge(IOUtils.java:1720)
    at org.apache.commons.io.IOUtils.copyLarge(IOUtils.java:1696)
    at org.apache.commons.io.IOUtils.copy(IOUtils.java:1671)
    at com.mendix.externalinterface.connector.ResourceRequestHandler.processRequest(ResourceRequestHandler.java:59)
    at com.mendix.externalinterface.connector.MxRuntimeConnector.processRequest(MxRuntimeConnector.java:57)
    at com.mendix.core.impl.MxRuntimeImpl.processRequest(MxRuntimeImpl.java:720)
    at com.mendix.m2ee.appcontainer.server.handler.RuntimeHandler.handle(RuntimeHandler.java:41)
    at org.eclipse.jetty.server.handler.HandlerList.handle(HandlerList.java:52)
    at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
    at org.eclipse.jetty.server.Server.handle(Server.java:368)
    at org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:489)
    at org.eclipse.jetty.server.AbstractHttpConnection.headerComplete(AbstractHttpConnection.java:942)
    at org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.headerComplete(AbstractHttpConnection.java:1004)
    at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:640)
    at org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:235)
    at org.eclipse.jetty.server.AsyncHttpConnection.handle(AsyncHttpConnection.java:82)
    at org.eclipse.jetty.io.nio.SelectChannelEndPoint.handle(SelectChannelEndPoint.java:628)
    at org.eclipse.jetty.io.nio.SelectChannelEndPoint$1.run(SelectChannelEndPoint.java:52)
    at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608)
    at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543)
    at java.lang.Thread.run(Unknown Source)

answered
0

Judging from the line mentioning ResourceRequestHandler, I would say this happened while serving a static resource (an image) to a client, but this is likely just a symptom of the real issue.

I think you need to get some proper monitoring in place to find out what the real issue is. Something like YourKit or VisualVM should work and, once set up, will provide an easy-to-follow overview of memory usage.
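
As a sketch of how to hook VisualVM up to a remote Windows server, JMX can be enabled with system properties along these lines (the port is an arbitrary example, and disabling authentication/SSL is only acceptable on a trusted network):

    -Dcom.sun.management.jmxremote
    -Dcom.sun.management.jmxremote.port=7845
    -Dcom.sun.management.jmxremote.authenticate=false
    -Dcom.sun.management.jmxremote.ssl=false

For a JVM running on the same machine, VisualVM can usually attach directly without any extra flags.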

answered