Connection Bus errors and out-of-memory usage

Please, I need your help with an urgent issue. The following error appears in the log and stops the service in production:

ERROR - ConnectionBus: Opening JDBC connection to failed with SQLState: null Error code: 0 Message: Cannot get a connection, pool error Timeout waiting for idle object Retrying...(1/4)

In addition to this error, I get:

java.lang.OutOfMemoryError: GC overhead limit exceeded

I changed ConnectionPoolingMaxActive to 250 in the advanced configuration of the Mendix Service Console, but the problem is not solved. How can I decide on the best number of connections and the amount of memory needed for that number, and what problems could appear later on? Please advise; I run the service in the cloud and I couldn't find a solution. Is there anything Mendix can do in the backend to solve this problem? I tried to contact the Mendix support team, but no answer so far.
2 answers

The GC overhead limit exceeded error is explained in the documentation here, which lists some common causes that may apply to your application. If you want to debug this issue, I wrote a post about that fairly recently here.


Looking at your recent posts, however, it may be that you are trying to run your application with too few resources for the number of concurrent users and specific setup. I would suggest increasing resources (most notably application memory) to resolve this issue quickly, so that you have time to debug the issue. If you deploy in the Mendix Cloud, you can contact support or your Customer Success Manager to increase resources. If you deploy on premises, you should contact your own team.


There are no hard and fast rules for deciding the optimal amount of memory or the size of the connection pool. Usually, I increase an app's memory when I see out-of-memory issues. If the issues persist, I start debugging to find the cause. I would follow the same approach for the connection pooling issue.
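As a very rough starting point, here is a sketch of two common heuristics. Neither is Mendix-specific: the pool-size rule is the cores-times-two formula popularized by the HikariCP project, and the per-connection memory figure is an invented placeholder, so treat both outputs as inputs to a load test, not as answers.

```python
def suggested_pool_size(cpu_cores, effective_spindles=1):
    # Widely cited starting point (popularized by the HikariCP wiki):
    # connections = cores * 2 + effective spindle count.
    # A heuristic, not a Mendix recommendation.
    return cpu_cores * 2 + effective_spindles

def rough_heap_headroom_mb(pool_size, mb_per_connection=5.0, base_heap_mb=512):
    # Illustration only: assume a few MB of live objects per in-flight
    # request on top of a base heap. Real numbers depend entirely on
    # your domain model and microflows; measure, don't guess.
    return base_heap_mb + pool_size * mb_per_connection

size = suggested_pool_size(cpu_cores=4)
print(size)                        # 9 connections for a 4-core, single-disk box
print(rough_heap_headroom_mb(size))
```

Note that a pool of 250 connections on a small node is far beyond what this heuristic would suggest; a much smaller pool with more memory is usually the better trade.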


Finally, if you are unable to resolve this yourself, you can always ask Mendix Expert Services for support, although they do charge a fee to help you out in such cases.


Hello Ayah, 

I will try to share my experience with the same issue. I hope it helps you.

  1. Connection timeout error → there are no available DB connections to perform the required operation.
  2. GC overhead limit exceeded → Java cannot free up the memory needed to perform the required operation; the JVM is spending too much time on garbage collection while freeing too little memory.
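The first situation can be reproduced in miniature with any bounded pool. A minimal Python sketch, using `queue.Queue` merely as a stand-in for the JDBC connection pool:

```python
import queue

POOL_SIZE = 2
pool = queue.Queue(maxsize=POOL_SIZE)
for i in range(POOL_SIZE):
    pool.put(f"conn-{i}")  # stand-ins for JDBC connections

# Two requests borrow both connections and hold them (e.g. long queries).
busy = [pool.get() for _ in range(POOL_SIZE)]

# A third request now waits for an idle connection and times out --
# the same situation behind "Timeout waiting for idle object".
try:
    pool.get(timeout=0.1)
    outcome = "got a connection"
except queue.Empty:
    outcome = "timed out waiting for an idle connection"
print(outcome)
```

The point of the sketch: raising the pool size only helps if connections are actually returned; if borrowers hold them for a long time, a bigger pool just delays the same timeout.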


When users are using the system, DB connections are required and made. When more requests come in:

  1. If no DB connection is available, the requests have to wait. If they wait too long, users see a gateway timeout in their browsers.
  2. While requests are waiting to be processed, they hold Java objects that are still active and not eligible for garbage collection. So when the Java GC kicks in, it cannot free memory and throws the GC overhead limit error.
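The second point can be illustrated in miniature: as long as a queued request still references its objects, no garbage collector may reclaim them. A Python sketch (the `weakref` is used only to observe collection; the JVM behaves analogously for reachable objects):

```python
import gc
import weakref

class RequestContext:
    """Stands in for the objects a waiting request keeps alive."""
    pass

waiting_requests = []           # requests queued for a DB connection
ctx = RequestContext()
waiting_requests.append(ctx)    # still referenced by the queue
ref = weakref.ref(ctx)
del ctx

gc.collect()
print(ref() is not None)        # True: the queue keeps the object alive

waiting_requests.clear()        # request finished, connection returned
gc.collect()
print(ref() is None)            # True: now the GC can reclaim it
```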


Possible solutions, as you already figured out:

  1. Increase the DB connection pool size.
  2. But when you increase the pool to a very large number, you must supplement it with the required infrastructure. Moving from a small to a medium or large plan is probably needed; otherwise your DB server will run at high CPU usage, further slowing down the system. This is because the more connections you configure, the more CPU power is needed to keep them active.
  3. If that happens, your Java requests might still be made to wait, and the GC error can still occur.


There is no hard and fast rule or equation to arrive at the right numbers. But despite the measures taken to increase DB connections and infrastructure, it is good to spend time analyzing logs to find out what is consuming the DB connections. In my personal experience, the way the product was built ended up producing complicated queries that take a long time to execute, which holds connections active for a long time.

Raise a Mendix support ticket and ask them to provide DB logs. Analyzing those may help you find long-running queries. But for this, you must first enable a runtime setting to log long-running queries.
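For reference, Mendix exposes a custom runtime setting for this; the name below (LogMinDurationQuery, value in milliseconds) is from memory, so verify it against the Mendix runtime settings documentation for your version before relying on it:

```
# Custom runtime setting (Mendix Cloud: app environment, Runtime tab)
# Setting name taken from memory -- verify in the Mendix docs
LogMinDurationQuery = 10000   # log any query that runs longer than 10 seconds
```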