Recommended settings: ConnectionPoolingMaxActive

Hi,

Recently we have been running into the maximum number of database connections being reached (after going live with 4.8.9 and a new deployment containing a lot of new functionality). On the 4.7.2 release (with less functionality) we did not encounter the issue. We see serious performance issues at the end of each working day; during the day everything works fine.

What is the recommended setting for the ConnectionPoolingMaxActive parameter? We are running on SQL Server 2008 with approximately 100-150 users working in the application at the same time. We found some notes on the forum indicating that e.g. the deeplink module could be causing issues, so we upgraded it to the most recent version, but unfortunately we still encounter the issue. Thanks.

20150112 EDIT: One of our microflows containing a custom Java action caused a lock on a table, blocking any access to that table. Mendix support recommends keeping ConnectionPoolingMaxActive set to 50; only in very exceptional cases might there be a need to increase the number of connections. You can use "show runtime threads" in the server console to get a general view of the current activity in the application; we noticed an extreme increase in threads at the moment we ran out of connections.
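
For reference, the pattern we ended up aiming for looks roughly like the minimal sketch below: make sure any Java action that touches the database commits (or rolls back) and closes its connection before returning, so it never sits on a table lock while other requests are waiting for a pooled connection. The table, column and connection details here are hypothetical, not our actual microflow.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class OrderStatusUpdate {

    // Hypothetical example of the transaction scoping we were missing:
    // commit (or roll back) and close the connection before the action returns,
    // so no table/row lock is held while other requests wait.
    public static void markProcessed(String jdbcUrl, String user, String password, long orderId)
            throws SQLException {
        try (Connection con = DriverManager.getConnection(jdbcUrl, user, password)) {
            con.setAutoCommit(false);
            try (PreparedStatement ps = con.prepareStatement(
                    "UPDATE orders SET status = 'PROCESSED' WHERE id = ?")) {
                ps.setLong(1, orderId);
                ps.executeUpdate();
                con.commit(); // release the lock as soon as the work is done
            } catch (SQLException e) {
                con.rollback(); // never leave the transaction open after a failure
                throw e;
            }
        } // connection (and any remaining locks) is released here
    }
}
```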
asked
2 answers

The best advice really depends on the actual behaviour of your database server (do you have graphs, etc.?).

The ConnectionPoolingMaxActive setting is primarily a safeguard to stop a runaway application from crashing your database as well. For example, when you run inefficient select queries that start using 100% CPU, or when the operating system does not have enough disk cache and starts reading from disk like crazy, using even more active connections will only make the situation worse.

It seems that in your situation the usage of connections is caused by inactive transactions that are waiting. This can happen when two different running actions change the same object: the second change will block in PostgreSQL until you either commit or abort the transaction in which the first change was made.
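
To illustrate what that blocking looks like at the database level, here is a minimal, hypothetical JDBC sketch (plain Java against a PostgreSQL database with an example account table, not Mendix-specific): the second update on the same row simply waits, occupying an extra connection, until the first transaction ends.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class BlockingDemo {
    // Hypothetical connection details and table; any PostgreSQL database will do.
    static final String URL = "jdbc:postgresql://localhost:5432/demo";

    public static void main(String[] args) throws Exception {
        try (Connection first = DriverManager.getConnection(URL, "demo", "demo")) {
            first.setAutoCommit(false);
            try (Statement s = first.createStatement()) {
                // The first transaction changes the row but does not commit yet,
                // so it keeps a row lock on id = 1.
                s.executeUpdate("UPDATE account SET balance = balance - 10 WHERE id = 1");
            }

            // A second "action" on its own connection tries to change the same row.
            Thread other = new Thread(() -> {
                try (Connection second = DriverManager.getConnection(URL, "demo", "demo");
                     Statement s = second.createStatement()) {
                    // Blocks here until the first transaction commits or rolls back;
                    // meanwhile it occupies one more connection from the pool.
                    s.executeUpdate("UPDATE account SET balance = balance + 10 WHERE id = 1");
                    System.out.println("second update finished");
                } catch (Exception e) {
                    e.printStackTrace();
                }
            });
            other.start();

            Thread.sleep(5000);   // pretend the first action is busy for a while
            first.commit();       // releasing the lock lets the blocked update proceed
            other.join();
        }
    }
}
```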

So fixing the application would be the best thing to do. You can of course play with the connection pool settings as a quick fix until the Java action is fixed.

Also, using more active connections causes PostgreSQL itself to use more memory for its backend processes (the memory in which it does joins, etc.), which reduces the amount available for the disk cache. That can lead to slowdowns due to disk activity if the active working set of the database no longer fits in memory.

answered

We had issues like these when there was too little memory for Postgres. When you have multiple app engines you can define how to divide the memory; in our case too much was given to the model and too little to the database. The error went away once Postgres had more memory.

Regards, Ronald

answered